The Turing Test is (Ultimately, Maybe) Invalid

Discussion in 'Chatter' started by mimus, Oct 14, 2007.

  1. mimus

    mimus Guest

    After reflecting on program detection of flooding and sporging of Usenet
    newsgroups, I have concluded that the "Turing Test" is ultimately invalid,
    if there is any ultimacy in the matter:

    Ultimately, and in the worst-case scenario, any rule used by a human to
    try to distinguish whether, say, a Usenet poster is a human or a bot can
    be expressed as an algorithm, and an opposing algorithm can then be
    developed: generally (A) one that essentially implements the opposite of
    that algorithm, though without ruling out (B) one that merely simulates
    opposition. Neither is proof of the intelligence of such a bot:

    (A) Noting by way of preface the suggestive existence of everyday
    character-recognition programs (already damnably complex, eh?), would one
    want to bet that no semantic-analysis programs can be developed (and
    therefore bots giving semantically correct responses to statements and
    questions)? And what about rational-analysis programs?

    (B) As for, say, (topical) original content: originality, which is
    ultimately unprovable anyway, could easily be plagiarized by a bot from
    somewhere else, preferably from multiple and obscure sources.

    Happily, for program detection of Usenet newsgroup flooding and sporge
    attacks, as well as elsewhere, such bots and other annoying programs are
    far from ultimate in their fields, and a detection program implementing
    multiple rules and algorithms (bearing in mind diminishing returns
    vis-a-vis computational intensity) should allow such detection, until new
    bots or programs are developed which implement the appropriate
    counter-algorithms, in the usual "race" of measure and counter-measure.
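    The rule-vs-counter-rule point above can be sketched concretely. What
    follows is a minimal, hypothetical illustration, not any real detection
    system; the function names and thresholds are invented for the example.
    It shows a detection rule expressed as an algorithm (flag a poster whose
    posting intervals are suspiciously regular) and an opposing algorithm
    that defeats exactly that rule by jittering its schedule.

```python
import random
import statistics

def looks_like_bot(intervals, cv_threshold=0.05):
    """Detection rule expressed as an algorithm: flag a poster whose
    inter-post intervals are too regular (low coefficient of variation)."""
    mean = statistics.mean(intervals)
    stdev = statistics.pstdev(intervals)
    return (stdev / mean) < cv_threshold

def naive_bot_schedule(n, period=60.0):
    """A clockwork bot: posts at exactly fixed intervals."""
    return [period] * n

def counter_bot_schedule(n, period=60.0, jitter=0.5, seed=0):
    """The opposing algorithm: knowing the detector keys on regularity,
    the bot randomizes its intervals to mimic human irregularity."""
    rng = random.Random(seed)
    return [period * (1 + rng.uniform(-jitter, jitter)) for _ in range(n)]

print(looks_like_bot(naive_bot_schedule(50)))    # True: the rule catches it
print(looks_like_bot(counter_bot_schedule(50)))  # False: the counter-rule evades it
```

    Note that evading the rule proves nothing about the bot's intelligence,
    only that the rule was known, which is the point: each fixed measure
    invites a counter-measure.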

    --
    tinmimus99@hotmail.com

    smeeter 11 or maybe 12

    mp 10

    mhm 29x13

    I wonder what I have been up to.

    < _Beyond Apollo_
     
  2. mixed nuts

    mixed nuts Guest

    mimus wrote:
    > After reflection on program- detection of flooding and sporging of Usenet
    > newsgroups, I have concluded that the "Turing Test" is ultimately invalid,
    > if there is any ultimacy in the matter:
    >
    > Ultimately, if there is any etc., and in the worst- case scenario, any
    > rule used by a human to try to distinguish whether, say, a Usenet poster
    > is a human or a bot, can be expressed as an algorithm, and an opposing
    > algorithm developed, generally (A) one essentially oppositely-
    > implementing that algorithm, but without ruling out (B) one simulating
    > opposition, none of which is proof of the intelligence of such bot:
    >
    > (A) Noting by way of preface the suggestive existence of everyday
    > character- recognition programs (already damnably complex already, eh?),
    > would one want to bet that no semantical analytical programs can be
    > developed (and therefore bots giving semantically- correct responses to
    > statements and questions)? what about rational analytical programs?
    >
    > (B) As for, say, (topical) original content, originality, which is
    > ultimately unproveable anyway, a bot could easily plagiarize such from
    > somewhere else, preferably multiple and obscure sources.
    >
    > Happily, for program- detection of Usenet newsgroup- flooding and sporge-
    > attacks as well as elsewhere, such bots or other annoying programs are far
    > from ultimate in their fields, and the detection- program implementation
    > of multiple rules and algorithms (bearing in mind diminishing returns vis-
    > a- vis computational intensity) should allow such detection, until new
    > bots or programs are developed which implement the appropriate counter-
    > algorithms, in the usual "race" of measure and counter- measure.
    >

    If it steals your sandwich and licks your face before you can smack it
    with a newspaper, it's intelligent.

    If it gnaws a hole in the side of yer house, sneaks into the kitchen,
    electrocutes itself while gnawing on the stove wiring and burns down
    your house, it has not achieved intelligent status.

    Aren't you supposed to be mowing?

    --
    nuts
     
  3. mimus

    mimus Guest

    On Sun, 14 Oct 2007 16:39:41 -0400, mixed nuts wrote:

    > mimus wrote:
    >> After reflection on program- detection of flooding and sporging of Usenet
    >> newsgroups, I have concluded that the "Turing Test" is ultimately invalid,
    >> if there is any ultimacy in the matter:
    >>
    >> Ultimately, if there is any etc., and in the worst- case scenario, any
    >> rule used by a human to try to distinguish whether, say, a Usenet poster
    >> is a human or a bot, can be expressed as an algorithm, and an opposing
    >> algorithm developed, generally (A) one essentially oppositely-
    >> implementing that algorithm, but without ruling out (B) one simulating
    >> opposition, none of which is proof of the intelligence of such bot:
    >>
    >> (A) Noting by way of preface the suggestive existence of everyday
    >> character- recognition programs (already damnably complex already, eh?),
    >> would one want to bet that no semantical analytical programs can be
    >> developed (and therefore bots giving semantically- correct responses to
    >> statements and questions)? what about rational analytical programs?
    >>
    >> (B) As for, say, (topical) original content, originality, which is
    >> ultimately unproveable anyway, a bot could easily plagiarize such from
    >> somewhere else, preferably multiple and obscure sources.
    >>
    >> Happily, for program- detection of Usenet newsgroup- flooding and sporge-
    >> attacks as well as elsewhere, such bots or other annoying programs are far
    >> from ultimate in their fields, and the detection- program implementation
    >> of multiple rules and algorithms (bearing in mind diminishing returns vis-
    >> a- vis computational intensity) should allow such detection, until new
    >> bots or programs are developed which implement the appropriate counter-
    >> algorithms, in the usual "race" of measure and counter- measure.
    >>

    > If it steals your sandwich and licks your face before you can smack it
    > with a newspaper, it's intelligent.
    >
    > If it gnaws a hole in the side of yer house, sneaks into the kitchen,
    > electrocutes itself while gnawing on the stove wiring and burns down
    > your house, it has not achieved intelligent status.
    >
    > Aren't you supposed to be mowing?


    Mañana, man.

    --
    tinmimus99@hotmail.com

    smeeter 11 or maybe 12

    mp 10

    mhm 29x13

    You want a job and a lizard to ride?

    < _The Einstein Intersection_
     
  4. Shirley

    Shirley Guest

    "mimus" <tinmimus99@hotmail.com> wrote in message
    news:vdKdnVJYsu7y7I_anZ2dnUVZ_smnnZ2d@giganews.com...
    > After reflection on program- detection of flooding and sporging of Usenet
    > newsgroups, I have concluded that the "Turing Test" is ultimately invalid,
    > if there is any ultimacy in the matter:
    >
    > Ultimately, if there is any etc., and in the worst- case scenario, any
    > rule used by a human to try to distinguish whether, say, a Usenet poster
    > is a human or a bot, can be expressed as an algorithm, and an opposing
    > algorithm developed, generally (A) one essentially oppositely-
    > implementing that algorithm, but without ruling out (B) one simulating
    > opposition, none of which is proof of the intelligence of such bot:
    >
    > (A) Noting by way of preface the suggestive existence of everyday
    > character- recognition programs (already damnably complex already, eh?),
    > would one want to bet that no semantical analytical programs can be
    > developed (and therefore bots giving semantically- correct responses to
    > statements and questions)? what about rational analytical programs?
    >
    > (B) As for, say, (topical) original content, originality, which is
    > ultimately unproveable anyway, a bot could easily plagiarize such from
    > somewhere else, preferably multiple and obscure sources.
    >
    > Happily, for program- detection of Usenet newsgroup- flooding and sporge-
    > attacks as well as elsewhere, such bots or other annoying programs are far
    > from ultimate in their fields, and the detection- program implementation
    > of multiple rules and algorithms (bearing in mind diminishing returns vis-
    > a- vis computational intensity) should allow such detection, until new
    > bots or programs are developed which implement the appropriate counter-
    > algorithms, in the usual "race" of measure and counter- measure.


    Do you suffer from *hippopotomonostrosesquippedaliophobia*?

    I hope I spelled it right?

    >
    > --
    > tinmimus99@hotmail.com
    >
    > smeeter 11 or maybe 12
    >
    > mp 10
    >
    > mhm 29x13
    >
    > I wonder what I have been up to.
    >
    > < _Beyond Apollo_
    >
     
  5. mimus

    mimus Guest

    On Sun, 14 Oct 2007 18:20:52 -0400, Shirley wrote:

    > "mimus" <tinmimus99@hotmail.com> wrote in message
    > news:vdKdnVJYsu7y7I_anZ2dnUVZ_smnnZ2d@giganews.com...
    >
    >> After reflection on program- detection of flooding and sporging of
    >> Usenet newsgroups, I have concluded that the "Turing Test" is
    >> ultimately invalid, if there is any ultimacy in the matter:
    >>
    >> Ultimately, if there is any etc., and in the worst- case scenario, any
    >> rule used by a human to try to distinguish whether, say, a Usenet
    >> poster is a human or a bot, can be expressed as an algorithm, and an
    >> opposing algorithm developed, generally (A) one essentially oppositely-
    >> implementing that algorithm, but without ruling out (B) one simulating
    >> opposition, none of which is proof of the intelligence of such bot:
    >>
    >> (A) Noting by way of preface the suggestive existence of everyday
    >> character- recognition programs (already damnably complex already,
    >> eh?), would one want to bet that no semantical analytical programs can
    >> be developed (and therefore bots giving semantically- correct responses
    >> to statements and questions)? what about rational analytical programs?
    >>
    >> (B) As for, say, (topical) original content, originality, which is
    >> ultimately unproveable anyway, a bot could easily plagiarize such from
    >> somewhere else, preferably multiple and obscure sources.
    >>
    >> Happily, for program- detection of Usenet newsgroup- flooding and
    >> sporge- attacks as well as elsewhere, such bots or other annoying
    >> programs are far from ultimate in their fields, and the detection-
    >> program implementation of multiple rules and algorithms (bearing in
    >> mind diminishing returns vis- a- vis computational intensity) should
    >> allow such detection, until new bots or programs are developed which
    >> implement the appropriate counter- algorithms, in the usual "race" of
    >> measure and counter- measure.

    >
    > Do you suffer from *hippopotomonostrosesquippedaliophobia*


    Bah.

    There's not a word up there over five syllables.

    > I hope I spelled it right?


    Would it matter?

    --
    tinmimus99@hotmail.com

    smeeter 11 or maybe 12

    mp 10

    mhm 29x13

    You want a job and a lizard to ride?

    < _The Einstein Intersection_
     
  6. Shirley

    Shirley Guest

    "mimus" <tinmimus99@hotmail.com> wrote in message
    news:U4OdnTKjm7BhPI_anZ2dnUVZ_tyknZ2d@giganews.com...
    > On Sun, 14 Oct 2007 18:20:52 -0400, Shirley wrote:
    >
    >> "mimus" <tinmimus99@hotmail.com> wrote in message
    >> news:vdKdnVJYsu7y7I_anZ2dnUVZ_smnnZ2d@giganews.com...
    >>
    >>> After reflection on program- detection of flooding and sporging of
    >>> Usenet newsgroups, I have concluded that the "Turing Test" is
    >>> ultimately invalid, if there is any ultimacy in the matter:
    >>>
    >>> Ultimately, if there is any etc., and in the worst- case scenario, any
    >>> rule used by a human to try to distinguish whether, say, a Usenet
    >>> poster is a human or a bot, can be expressed as an algorithm, and an
    >>> opposing algorithm developed, generally (A) one essentially oppositely-
    >>> implementing that algorithm, but without ruling out (B) one simulating
    >>> opposition, none of which is proof of the intelligence of such bot:
    >>>
    >>> (A) Noting by way of preface the suggestive existence of everyday
    >>> character- recognition programs (already damnably complex already,
    >>> eh?), would one want to bet that no semantical analytical programs can
    >>> be developed (and therefore bots giving semantically- correct responses
    >>> to statements and questions)? what about rational analytical programs?
    >>>
    >>> (B) As for, say, (topical) original content, originality, which is
    >>> ultimately unproveable anyway, a bot could easily plagiarize such from
    >>> somewhere else, preferably multiple and obscure sources.
    >>>
    >>> Happily, for program- detection of Usenet newsgroup- flooding and
    >>> sporge- attacks as well as elsewhere, such bots or other annoying
    >>> programs are far from ultimate in their fields, and the detection-
    >>> program implementation of multiple rules and algorithms (bearing in
    >>> mind diminishing returns vis- a- vis computational intensity) should
    >>> allow such detection, until new bots or programs are developed which
    >>> implement the appropriate counter- algorithms, in the usual "race" of
    >>> measure and counter- measure.

    >>
    >> Do you suffer from *hippopotomonostrosesquippedaliophobia*

    >
    > Bah.


    I know that word...

    >
    > There's not a word up there over five syllables.


    Yeah, but I only know about algorithms from watching *Numbers* on TV. I am
    not the brightest bulb in the chandelier.

    >
    >> I hope I spelled it right?

    >
    > Would it matter?


    Not in the least bit.

    >
    > --
    > tinmimus99@hotmail.com
    >
    > smeeter 11 or maybe 12
    >
    > mp 10
    >
    > mhm 29x13
    >
    > You want a job and a lizard to ride?
    >
    > < _The Einstein Intersection_
    >
     
  7. mixed nuts

    mixed nuts Guest

    mimus wrote:
    > On Sun, 14 Oct 2007 16:39:41 -0400, mixed nuts wrote:
    >
    >
    >>mimus wrote:
    >>
    >>>After reflection on program- detection of flooding and sporging of Usenet
    >>>newsgroups, I have concluded that the "Turing Test" is ultimately invalid,
    >>>if there is any ultimacy in the matter:
    >>>
    >>>Ultimately, if there is any etc., and in the worst- case scenario, any
    >>>rule used by a human to try to distinguish whether, say, a Usenet poster
    >>>is a human or a bot, can be expressed as an algorithm, and an opposing
    >>>algorithm developed, generally (A) one essentially oppositely-
    >>>implementing that algorithm, but without ruling out (B) one simulating
    >>>opposition, none of which is proof of the intelligence of such bot:
    >>>
    >>>(A) Noting by way of preface the suggestive existence of everyday
    >>>character- recognition programs (already damnably complex already, eh?),
    >>>would one want to bet that no semantical analytical programs can be
    >>>developed (and therefore bots giving semantically- correct responses to
    >>>statements and questions)? what about rational analytical programs?
    >>>
    >>>(B) As for, say, (topical) original content, originality, which is
    >>>ultimately unproveable anyway, a bot could easily plagiarize such from
    >>>somewhere else, preferably multiple and obscure sources.
    >>>
    >>>Happily, for program- detection of Usenet newsgroup- flooding and sporge-
    >>>attacks as well as elsewhere, such bots or other annoying programs are far
    >>>from ultimate in their fields, and the detection- program implementation
    >>>of multiple rules and algorithms (bearing in mind diminishing returns vis-
    >>>a- vis computational intensity) should allow such detection, until new
    >>>bots or programs are developed which implement the appropriate counter-
    >>>algorithms, in the usual "race" of measure and counter- measure.
    >>>

    >>
    >>If it steals your sandwich and licks your face before you can smack it
    >>with a newspaper, it's intelligent.
    >>
    >>If it gnaws a hole in the side of yer house, sneaks into the kitchen,
    >>electrocutes itself while gnawing on the stove wiring and burns down
    >>your house, it has not achieved intelligent status.
    >>
    >>Aren't you supposed to be mowing?

    >
    > Manana, man.
    >

    Should be a nice mowing day in Ohio. Then rain the rest of the week. I
    gets sun and wind and cold nights all week and the leaves are already
    flying but work has to happen. Mulching blade and blower at the ready
    for next weekend.

    --
    nuts
     
  8. headkase

    headkase Guest

    On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    > "mimus" <tinmimu...@hotmail.com> wrote in message
    >
    > news:vdKdnVJYsu7y7I_anZ2dnUVZ_smnnZ2d@giganews.com...
    >
    >
    >
    >
    >
    > > After reflection on program- detection of flooding and sporging of Usenet
    > > newsgroups, I have concluded that the "Turing Test" is ultimately invalid,
    > > if there is any ultimacy in the matter:

    >
    > > Ultimately, if there is any etc., and in the worst- case scenario, any
    > > rule used by a human to try to distinguish whether, say, a Usenet poster
    > > is a human or a bot, can be expressed as an algorithm, and an opposing
    > > algorithm developed, generally (A) one essentially oppositely-
    > > implementing that algorithm, but without ruling out (B) one simulating
    > > opposition, none of which is proof of the intelligence of such bot:

    >
    > > (A) Noting by way of preface the suggestive existence of everyday
    > > character- recognition programs (already damnably complex already, eh?),
    > > would one want to bet that no semantical analytical programs can be
    > > developed (and therefore bots giving semantically- correct responses to
    > > statements and questions)? what about rational analytical programs?

    >
    > > (B) As for, say, (topical) original content, originality, which is
    > > ultimately unproveable anyway, a bot could easily plagiarize such from
    > > somewhere else, preferably multiple and obscure sources.

    >
    > > Happily, for program- detection of Usenet newsgroup- flooding and sporge-
    > > attacks as well as elsewhere, such bots or other annoying programs are far
    > > from ultimate in their fields, and the detection- program implementation
    > > of multiple rules and algorithms (bearing in mind diminishing returns vis-
    > > a- vis computational intensity) should allow such detection, until new
    > > bots or programs are developed which implement the appropriate counter-
    > > algorithms, in the usual "race" of measure and counter- measure.

    >
    > Do you suffer from *hippopotomonostrosesquippedaliophobia*
    >


    Now speak English or go and sit in the corner with.....this dunce hat
    that does not, I repeat, does not belong to me.

    hk
    > I hope I spelled it right?
    >
    >
    >
    >
    >
    > > --
    > > tinmimu...@hotmail.com

    >
    > > smeeter 11 or maybe 12

    >
    > > mp 10

    >
    > > mhm 29x13

    >
    > > I wonder what I have been up to.

    >
    > > < _Beyond Apollo_

     
  9. Shirley

    Shirley Guest

    "headkase" <psykoexgirlfriend@hotmail.com> wrote in message
    news:1192444427.995396.32860@i38g2000prf.googlegroups.com...
    > On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    >> "mimus" <tinmimu...@hotmail.com> wrote in message
    >>
    >> news:vdKdnVJYsu7y7I_anZ2dnUVZ_smnnZ2d@giganews.com...
    >>
    >>
    >>
    >>
    >>
    >> > After reflection on program- detection of flooding and sporging of
    >> > Usenet
    >> > newsgroups, I have concluded that the "Turing Test" is ultimately
    >> > invalid,
    >> > if there is any ultimacy in the matter:

    >>
    >> > Ultimately, if there is any etc., and in the worst- case scenario, any
    >> > rule used by a human to try to distinguish whether, say, a Usenet
    >> > poster
    >> > is a human or a bot, can be expressed as an algorithm, and an opposing
    >> > algorithm developed, generally (A) one essentially oppositely-
    >> > implementing that algorithm, but without ruling out (B) one simulating
    >> > opposition, none of which is proof of the intelligence of such bot:

    >>
    >> > (A) Noting by way of preface the suggestive existence of everyday
    >> > character- recognition programs (already damnably complex already,
    >> > eh?),
    >> > would one want to bet that no semantical analytical programs can be
    >> > developed (and therefore bots giving semantically- correct responses to
    >> > statements and questions)? what about rational analytical programs?

    >>
    >> > (B) As for, say, (topical) original content, originality, which is
    >> > ultimately unproveable anyway, a bot could easily plagiarize such from
    >> > somewhere else, preferably multiple and obscure sources.

    >>
    >> > Happily, for program- detection of Usenet newsgroup- flooding and
    >> > sporge-
    >> > attacks as well as elsewhere, such bots or other annoying programs are
    >> > far
    >> > from ultimate in their fields, and the detection- program
    >> > implementation
    >> > of multiple rules and algorithms (bearing in mind diminishing returns
    >> > vis-
    >> > a- vis computational intensity) should allow such detection, until new
    >> > bots or programs are developed which implement the appropriate counter-
    >> > algorithms, in the usual "race" of measure and counter- measure.

    >>
    >> Do you suffer from *hippopotomonostrosesquippedaliophobia*
    >>

    >
    > now speak english or go and sit in the corner with.....this dunce hat
    > that does not, i repeat does not belong to me.


    <polite snickering from the pit>

    WE all believe that this party hat does NOT belong to HEADKASE, don't we?
     
  10. mimus

    mimus Guest

    On Mon, 15 Oct 2007 08:32:24 -0400, Shirley wrote:

    > "headkase" <psykoexgirlfriend@hotmail.com> wrote in message
    > news:1192444427.995396.32860@i38g2000prf.googlegroups.com...
    >
    >> On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    >>
    >>> "mimus" <tinmimu...@hotmail.com> wrote in message
    >>> news:vdKdnVJYsu7y7I_anZ2dnUVZ_smnnZ2d@giganews.com...
    >>>
    >>> > After reflection on program- detection of flooding and sporging of
    >>> > Usenet
    >>> > newsgroups, I have concluded that the "Turing Test" is ultimately
    >>> > invalid,
    >>> > if there is any ultimacy in the matter:
    >>>
    >>> > Ultimately, if there is any etc., and in the worst- case scenario, any
    >>> > rule used by a human to try to distinguish whether, say, a Usenet
    >>> > poster
    >>> > is a human or a bot, can be expressed as an algorithm, and an opposing
    >>> > algorithm developed, generally (A) one essentially oppositely-
    >>> > implementing that algorithm, but without ruling out (B) one simulating
    >>> > opposition, none of which is proof of the intelligence of such bot:
    >>>
    >>> > (A) Noting by way of preface the suggestive existence of everyday
    >>> > character- recognition programs (already damnably complex already,
    >>> > eh?),
    >>> > would one want to bet that no semantical analytical programs can be
    >>> > developed (and therefore bots giving semantically- correct responses to
    >>> > statements and questions)? what about rational analytical programs?
    >>>
    >>> > (B) As for, say, (topical) original content, originality, which is
    >>> > ultimately unproveable anyway, a bot could easily plagiarize such from
    >>> > somewhere else, preferably multiple and obscure sources.
    >>>
    >>> > Happily, for program- detection of Usenet newsgroup- flooding and
    >>> > sporge-
    >>> > attacks as well as elsewhere, such bots or other annoying programs are
    >>> > far
    >>> > from ultimate in their fields, and the detection- program
    >>> > implementation
    >>> > of multiple rules and algorithms (bearing in mind diminishing returns
    >>> > vis-
    >>> > a- vis computational intensity) should allow such detection, until new
    >>> > bots or programs are developed which implement the appropriate counter-
    >>> > algorithms, in the usual "race" of measure and counter- measure.
    >>>
    >>> Do you suffer from *hippopotomonostrosesquippedaliophobia*

    >>
    >> now speak english or go and sit in the corner with.....this dunce hat
    >> that does not, i repeat does not belong to me.

    >
    > <polite snickering from the pit>
    >
    > WE all believe that this party hat does NOT belong to HEADKASE don't we?


    Is the . . . stuff . . . in the Pit supposed to be snickering?

    --
    tinmimus99@hotmail.com

    smeeter 11 or maybe 12

    mp 10

    mhm 29x13

    The hell with the Galactic Overlords
    and their tastes in literature.

    < _The Day of the Burning_
     
  11. Aratzio

    Aratzio Guest

    On Mon, 15 Oct 2007 12:41:33 -0400, in
    alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    bloviated:

    >On Mon, 15 Oct 2007 08:32:24 -0400, Shirley wrote:
    >
    >> "headkase" <psykoexgirlfriend@hotmail.com> wrote in message
    >> news:1192444427.995396.32860@i38g2000prf.googlegroups.com...
    >>
    >>> On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    >>>
    >>>> "mimus" <tinmimu...@hotmail.com> wrote in message
    >>>> news:vdKdnVJYsu7y7I_anZ2dnUVZ_smnnZ2d@giganews.com...
    >>>>
    >>>> > After reflection on program- detection of flooding and sporging of
    >>>> > Usenet
    >>>> > newsgroups, I have concluded that the "Turing Test" is ultimately
    >>>> > invalid,
    >>>> > if there is any ultimacy in the matter:
    >>>>
    >>>> > Ultimately, if there is any etc., and in the worst- case scenario, any
    >>>> > rule used by a human to try to distinguish whether, say, a Usenet
    >>>> > poster
    >>>> > is a human or a bot, can be expressed as an algorithm, and an opposing
    >>>> > algorithm developed, generally (A) one essentially oppositely-
    >>>> > implementing that algorithm, but without ruling out (B) one simulating
    >>>> > opposition, none of which is proof of the intelligence of such bot:
    >>>>
    >>>> > (A) Noting by way of preface the suggestive existence of everyday
    >>>> > character- recognition programs (already damnably complex already,
    >>>> > eh?),
    >>>> > would one want to bet that no semantical analytical programs can be
    >>>> > developed (and therefore bots giving semantically- correct responses to
    >>>> > statements and questions)? what about rational analytical programs?
    >>>>
    >>>> > (B) As for, say, (topical) original content, originality, which is
    >>>> > ultimately unproveable anyway, a bot could easily plagiarize such from
    >>>> > somewhere else, preferably multiple and obscure sources.
    >>>>
    >>>> > Happily, for program- detection of Usenet newsgroup- flooding and
    >>>> > sporge-
    >>>> > attacks as well as elsewhere, such bots or other annoying programs are
    >>>> > far
    >>>> > from ultimate in their fields, and the detection- program
    >>>> > implementation
    >>>> > of multiple rules and algorithms (bearing in mind diminishing returns
    >>>> > vis-
    >>>> > a- vis computational intensity) should allow such detection, until new
    >>>> > bots or programs are developed which implement the appropriate counter-
    >>>> > algorithms, in the usual "race" of measure and counter- measure.
    >>>>
    >>>> Do you suffer from *hippopotomonostrosesquippedaliophobia*
    >>>
    >>> now speak english or go and sit in the corner with.....this dunce hat
    >>> that does not, i repeat does not belong to me.

    >>
    >> <polite snickering from the pit>
    >>
    >> WE all believe that this party hat does NOT belong to HEADKASE don't we?

    >
    >Is the . . . stuff . . . in the Pit supposed to be snickering?


    If you think evul moaning and gnashing of teeth is "snickering".
     
  12. mimus

    mimus Guest

    On Mon, 15 Oct 2007 16:57:00 +0000, Aratzio wrote:

    > On Mon, 15 Oct 2007 12:41:33 -0400, in
    > alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    > bloviated:
    >
    >>On Mon, 15 Oct 2007 08:32:24 -0400, Shirley wrote:
    >>
    >>> "headkase" <psykoexgirlfriend@hotmail.com> wrote in message
    >>> news:1192444427.995396.32860@i38g2000prf.googlegroups.com...
    >>>
    >>>> On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    >>>>
    >>>>> "mimus" <tinmimu...@hotmail.com> wrote in message
    >>>>> news:vdKdnVJYsu7y7I_anZ2dnUVZ_smnnZ2d@giganews.com...
    >>>>>
    >>>>> > After reflection on program- detection of flooding and sporging of
    >>>>> > Usenet
    >>>>> > newsgroups, I have concluded that the "Turing Test" is ultimately
    >>>>> > invalid,
    >>>>> > if there is any ultimacy in the matter:
    >>>>>
    >>>>> > Ultimately, if there is any etc., and in the worst- case scenario, any
    >>>>> > rule used by a human to try to distinguish whether, say, a Usenet
    >>>>> > poster
    >>>>> > is a human or a bot, can be expressed as an algorithm, and an opposing
    >>>>> > algorithm developed, generally (A) one essentially oppositely-
    >>>>> > implementing that algorithm, but without ruling out (B) one simulating
    >>>>> > opposition, none of which is proof of the intelligence of such bot:
    >>>>>
    >>>>> > (A) Noting by way of preface the suggestive existence of everyday
    >>>>> > character- recognition programs (already damnably complex already,
    >>>>> > eh?),
    >>>>> > would one want to bet that no semantical analytical programs can be
    >>>>> > developed (and therefore bots giving semantically- correct responses to
    >>>>> > statements and questions)? what about rational analytical programs?
    >>>>>
    >>>>> > (B) As for, say, (topical) original content, originality, which is
    >>>>> > ultimately unproveable anyway, a bot could easily plagiarize such from
    >>>>> > somewhere else, preferably multiple and obscure sources.
    >>>>>
    >>>>> > Happily, for program- detection of Usenet newsgroup- flooding and
    >>>>> > sporge-
    >>>>> > attacks as well as elsewhere, such bots or other annoying programs are
    >>>>> > far
    >>>>> > from ultimate in their fields, and the detection- program
    >>>>> > implementation
    >>>>> > of multiple rules and algorithms (bearing in mind diminishing returns
    >>>>> > vis-
    >>>>> > a- vis computational intensity) should allow such detection, until new
    >>>>> > bots or programs are developed which implement the appropriate counter-
    >>>>> > algorithms, in the usual "race" of measure and counter- measure.
    >>>>>
    >>>>> Do you suffer from *hippopotomonostrosesquippedaliophobia*
    >>>>
    >>>> now speak english or go and sit in the corner with.....this dunce hat
    >>>> that does not, i repeat does not belong to me.
    >>>
    >>> <polite snickering from the pit>
    >>>
    >>> WE all believe that this party hat does NOT belong to HEADKASE don't we?

    >>
    >>Is the . . . stuff . . . in the Pit supposed to be snickering?

    >
    > If you think evul moaning and gnashing of teeth is "snickering".


    I know "polite snickering" when I read it.

    --
    tinmimus99@hotmail.com

    smeeter 11 or maybe 12

    mp 10

    mhm 29x13

    The hell with the Galactic Overlords
    and their tastes in literature.

    < _The Day of the Burning_
     
  13. Aratzio

    Aratzio Guest

    On Mon, 15 Oct 2007 13:09:56 -0400, in
    alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    bloviated:

    >On Mon, 15 Oct 2007 16:57:00 +0000, Aratzio wrote:
    >
    >> On Mon, 15 Oct 2007 12:41:33 -0400, in
    >> alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    >> bloviated:
    >>
    >>>On Mon, 15 Oct 2007 08:32:24 -0400, Shirley wrote:
    >>>
    >>>> "headkase" <psykoexgirlfriend@hotmail.com> wrote in message
    >>>> news:1192444427.995396.32860@i38g2000prf.googlegroups.com...
    >>>>
    >>>>> On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    >>>>>
    >>>>>> "mimus" <tinmimu...@hotmail.com> wrote in message
    >>>>>> news:vdKdnVJYsu7y7I_anZ2dnUVZ_smnnZ2d@giganews.com...
    >>>>>>
    >>>>>> > After reflection on program- detection of flooding and sporging of
    >>>>>> > Usenet newsgroups, I have concluded that the "Turing Test" is
    >>>>>> > ultimately invalid, if there is any ultimacy in the matter:
    >>>>>>
    >>>>>> > Ultimately, if there is any etc., and in the worst- case scenario,
    >>>>>> > any rule used by a human to try to distinguish whether, say, a
    >>>>>> > Usenet poster is a human or a bot, can be expressed as an
    >>>>>> > algorithm, and an opposing algorithm developed, generally (A) one
    >>>>>> > essentially oppositely- implementing that algorithm, but without
    >>>>>> > ruling out (B) one simulating opposition, none of which is proof
    >>>>>> > of the intelligence of such bot:
    >>>>>>
    >>>>>> > (A) Noting by way of preface the suggestive existence of everyday
    >>>>>> > character- recognition programs (already damnably complex already,
    >>>>>> > eh?), would one want to bet that no semantical analytical programs
    >>>>>> > can be developed (and therefore bots giving semantically- correct
    >>>>>> > responses to statements and questions)? what about rational
    >>>>>> > analytical programs?
    >>>>>>
    >>>>>> > (B) As for, say, (topical) original content, originality, which is
    >>>>>> > ultimately unproveable anyway, a bot could easily plagiarize such
    >>>>>> > from somewhere else, preferably multiple and obscure sources.
    >>>>>>
    >>>>>> > Happily, for program- detection of Usenet newsgroup- flooding and
    >>>>>> > sporge- attacks as well as elsewhere, such bots or other annoying
    >>>>>> > programs are far from ultimate in their fields, and the
    >>>>>> > detection- program implementation of multiple rules and algorithms
    >>>>>> > (bearing in mind diminishing returns vis- a- vis computational
    >>>>>> > intensity) should allow such detection, until new bots or programs
    >>>>>> > are developed which implement the appropriate counter- algorithms,
    >>>>>> > in the usual "race" of measure and counter- measure.
    >>>>>>
    >>>>>> Do you suffer from *hippopotomonstrosesquippedaliophobia*
    >>>>>
    >>>>> now speak english or go and sit in the corner with.....this dunce hat
    >>>>> that does not, i repeat does not belong to me.
    >>>>
    >>>> <polite snickering from the pit>
    >>>>
    >>>> WE all believe that this party hat does NOT belong to HEADKASE don't we?
    >>>
    >>>Is the . . . stuff . . . in the Pit supposed to be snickering?

    >>
    >> If you think evul moaning and gnashing of teeth is "snickering".

    >
    >I know "polite snickering" when I read it.


    How about the gibbering of the war shrews?
     
  14. mimus

    mimus Guest

    On Mon, 15 Oct 2007 17:26:23 +0000, Aratzio wrote:

    > On Mon, 15 Oct 2007 13:09:56 -0400, in
    > alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    > bloviated:
    >
    >>On Mon, 15 Oct 2007 16:57:00 +0000, Aratzio wrote:
    >>
    >>> On Mon, 15 Oct 2007 12:41:33 -0400, in
    >>> alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    >>> bloviated:
    >>>
    >>>>On Mon, 15 Oct 2007 08:32:24 -0400, Shirley wrote:
    >>>>
    >>>>> "headkase" <psykoexgirlfriend@hotmail.com> wrote in message
    >>>>> news:1192444427.995396.32860@i38g2000prf.googlegroups.com...
    >>>>>
    >>>>>> On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    >>>>>>
    >>>>>>> Do you suffer from *hippopotomonstrosesquippedaliophobia*
    >>>>>>
    >>>>>> now speak english or go and sit in the corner with.....this dunce hat
    >>>>>> that does not, i repeat does not belong to me.
    >>>>>
    >>>>> <polite snickering from the pit>
    >>>>>
    >>>>> WE all believe that this party hat does NOT belong to HEADKASE don't we?
    >>>>
    >>>>Is the . . . stuff . . . in the Pit supposed to be snickering?
    >>>
    >>> If you think evul moaning and gnashing of teeth is "snickering".

    >>
    >>I know "polite snickering" when I read it.

    >
    > How about the gibbering of the war shrews?


    In the Pit or on TV?

    --
    tinmimus99@hotmail.com

    smeeter 11 or maybe 12

    mp 10

    mhm 29x13

    I wonder what I have been up to.

    < _Beyond Apollo_
     
  15. Aratzio

    Aratzio Guest

    On Mon, 15 Oct 2007 13:43:08 -0400, in
    alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    bloviated:

    >On Mon, 15 Oct 2007 17:26:23 +0000, Aratzio wrote:
    >
    >> On Mon, 15 Oct 2007 13:09:56 -0400, in
    >> alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    >> bloviated:
    >>
    >>>On Mon, 15 Oct 2007 16:57:00 +0000, Aratzio wrote:
    >>>
    >>>> On Mon, 15 Oct 2007 12:41:33 -0400, in
    >>>> alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    >>>> bloviated:
    >>>>
    >>>>>On Mon, 15 Oct 2007 08:32:24 -0400, Shirley wrote:
    >>>>>
    >>>>>> "headkase" <psykoexgirlfriend@hotmail.com> wrote in message
    >>>>>> news:1192444427.995396.32860@i38g2000prf.googlegroups.com...
    >>>>>>
    >>>>>>> On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    >>>>>>>
    >>>>>>>> Do you suffer from *hippopotomonstrosesquippedaliophobia*
    >>>>>>>
    >>>>>>> now speak english or go and sit in the corner with.....this dunce hat
    >>>>>>> that does not, i repeat does not belong to me.
    >>>>>>
    >>>>>> <polite snickering from the pit>
    >>>>>>
    >>>>>> WE all believe that this party hat does NOT belong to HEADKASE don't we?
    >>>>>
    >>>>>Is the . . . stuff . . . in the Pit supposed to be snickering?
    >>>>
    >>>> If you think evul moaning and gnashing of teeth is "snickering".
    >>>
    >>>I know "polite snickering" when I read it.

    >>
    >> How about the gibbering of the war shrews?

    >
    >In the Pit or on TV?


    Dunno, depends, did DAEV hook up the PitCams?
     
  16. Shirley

    Shirley Guest

    "mimus" <tinmimus99@hotmail.com> wrote in message
    news:oYKdndYghdJiCo7anZ2dnUVZ_q7inZ2d@giganews.com...
    > On Mon, 15 Oct 2007 08:32:24 -0400, Shirley wrote:
    >
    >> "headkase" <psykoexgirlfriend@hotmail.com> wrote in message
    >> news:1192444427.995396.32860@i38g2000prf.googlegroups.com...
    >>
    >>> On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    >>>
    >>>> Do you suffer from *hippopotomonstrosesquippedaliophobia*
    >>>
    >>> now speak english or go and sit in the corner with.....this dunce hat
    >>> that does not, i repeat does not belong to me.

    >>
    >> <polite snickering from the pit>
    >>
    >> WE all believe that this party hat does NOT belong to HEADKASE don't we?

    >
    > Is the . . . stuff . . . in the Pit supposed to be snickering?


    When Dave is away...Shirley can teach the pit to snicker and belly laugh...

    <one big belly laugh heard from the pit>

    Wait until Dave sees what the shed can do.

     
  17. mimus

    mimus Guest

    On Mon, 15 Oct 2007 13:43:08 -0400, mimus wrote:

    > On Mon, 15 Oct 2007 17:26:23 +0000, Aratzio wrote:
    >
    >> On Mon, 15 Oct 2007 13:09:56 -0400, in
    >> alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    >> bloviated:
    >>
    >>>On Mon, 15 Oct 2007 16:57:00 +0000, Aratzio wrote:
    >>>
    >>>> On Mon, 15 Oct 2007 12:41:33 -0400, in
    >>>> alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    >>>> bloviated:
    >>>>
    >>>>>On Mon, 15 Oct 2007 08:32:24 -0400, Shirley wrote:
    >>>>>
    >>>>>> "headkase" <psykoexgirlfriend@hotmail.com> wrote in message
    >>>>>> news:1192444427.995396.32860@i38g2000prf.googlegroups.com...
    >>>>>>
    >>>>>>> On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    >>>>>>>
    >>>>>>>> Do you suffer from *hippopotomonstrosesquippedaliophobia*
    >>>>>>>
    >>>>>>> now speak english or go and sit in the corner with.....this dunce hat
    >>>>>>> that does not, i repeat does not belong to me.
    >>>>>>
    >>>>>> <polite snickering from the pit>
    >>>>>>
    >>>>>> WE all believe that this party hat does NOT belong to HEADKASE don't we?
    >>>>>
    >>>>>Is the . . . stuff . . . in the Pit supposed to be snickering?
    >>>>
    >>>> If you think evul moaning and gnashing of teeth is "snickering".
    >>>
    >>>I know "polite snickering" when I read it.

    >>
    >> How about the gibbering of the war shrews?

    >
    > In the Pit or on TV?


    Here's a nice hot example:

    http://www.militaryreporters.org/sanchez_101207.html

    --
    tinmimus99@hotmail.com

    smeeter 11 or maybe 12

    mp 10

    mhm 29x13

    There's no such thing as a trained snake, OK?

    < _Strip Tease_
     
  18. mixed nuts

    mixed nuts Guest

    mimus wrote:
    > On Mon, 15 Oct 2007 13:43:08 -0400, mimus wrote:
    >
    >>On Mon, 15 Oct 2007 17:26:23 +0000, Aratzio wrote:
    >>
    >>>On Mon, 15 Oct 2007 13:09:56 -0400, in
    >>>alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    >>>bloviated:
    >>>
    >>>>On Mon, 15 Oct 2007 16:57:00 +0000, Aratzio wrote:
    >>>>
    >>>>>On Mon, 15 Oct 2007 12:41:33 -0400, in
    >>>>>alt.alien.vampire.flonk.flonk.flonk, mimus <tinmimus99@hotmail.com>
    >>>>>bloviated:
    >>>>>
    >>>>>>On Mon, 15 Oct 2007 08:32:24 -0400, Shirley wrote:
    >>>>>>
    >>>>>>>"headkase" <psykoexgirlfriend@hotmail.com> wrote in message
    >>>>>>>news:1192444427.995396.32860@i38g2000prf.googlegroups.com...
    >>>>>>>
    >>>>>>>>On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:
    >>>>>>>>
    >>>>>>>>>Do you suffer from *hippopotomonstrosesquippedaliophobia*
    >>>>>>>>
    >>>>>>>>now speak english or go and sit in the corner with.....this dunce hat
    >>>>>>>>that does not, i repeat does not belong to me.
    >>>>>>>
    >>>>>>><polite snickering from the pit>
    >>>>>>>
    >>>>>>>WE all believe that this party hat does NOT belong to HEADKASE don't we?
    >>>>>>
    >>>>>>Is the . . . stuff . . . in the Pit supposed to be snickering?
    >>>>>
    >>>>>If you think evul moaning and gnashing of teeth is "snickering".
    >>>>
    >>>>I know "polite snickering" when I read it.
    >>>
    >>>How about the gibbering of the war shrews?

    >>
    >>In the Pit or on TV?

    >
    > Here's a nice hot example:
    >
    > http://www.militaryreporters.org/sanchez_101207.html
    >

    He's yelling very loudly, which doesn't seem, to me, to qualify as
    gibbering. He doth protest a bit too much, and in an ASR33 style.

    I'd say he's guilty.

    --
    nuts
     
  19. headkase

    headkase Guest

    On Oct 16, 2:41 am, mimus <tinmimu...@hotmail.com> wrote:
    > On Mon, 15 Oct 2007 08:32:24 -0400, Shirley wrote:
    > > "headkase" <psykoexgirlfri...@hotmail.com> wrote in message
    > >news:1192444427.995396.32860@i38g2000prf.googlegroups.com...

    >
    > >> On Oct 15, 8:20 am, "Shirley" <bigd1...@bellsoutj.net> wrote:

    > >>> Do you suffer from *hippopotomonstrosesquippedaliophobia*

    >
    > >> now speak english or go and sit in the corner with.....this dunce hat
    > >> that does not, i repeat does not belong to me.

    >
    > > <polite snickering from the pit>

    >
    > > WE all believe that this party hat does NOT belong to HEADKASE don't we?

    >
    > Is the . . . stuff . . . in the Pit supposed to be snickering?


    well it is better than the pit residents throwing the oversized mutant
    peanuts that they seem to have a never-ending supply of...

     
  20. Immortalist

    Immortalist Guest

    On Oct 14, 12:44 pm, mimus <tinmimu...@hotmail.com> wrote:
    > After reflection on program- detection of flooding and sporging of Usenet
    > newsgroups, I have concluded that the "Turing Test" is ultimately invalid,
    > if there is any ultimacy in the matter:
    >


    If humans cannot prove whether other people exist, how could we
    judge a machine that imitates them?

    Solipsism (Latin: solus, alone + ipse, self) is the philosophical idea
    that "My mind is the only thing that I know exists". Solipsism is an
    epistemological or metaphysical position that knowledge of anything
    outside the mind is unjustified. The external world and other minds
    cannot be known and might not exist. In the history of philosophy,
    solipsism has served as a skeptical hypothesis.

    http://en.wikipedia.org/wiki/Solipsism

     

Share This Page