The Harry Binswanger Letter

    • #103955

      I highly recommend watching this talk on some new ChatGPT features/functions.   Some of it is technical.  The most exciting part starts at around 20:00.  

      https://www.youtube.com/watch?v=U9mJuUkhUzk   

      ARI needs to consider creating an Ayn Rand GPT trained only on Ayn Rand’s writings as well as an ARI GPT.  

      *sb

    • #147566

      Re: Zachary Kauble’s post 103955 of 11/7/23

      Love the idea about ARI-GPT!

      From this video, it seems that data is the oil/petroleum of the 21st century.  And ARI has all that raw data on the life and philosophy of Ayn Rand.

      https://www.youtube.com/watch?v=EIVTf-C6oQo

      *sb

    • #147576

      Re: Zachary Kauble’s post 103955 of 11/7/23

      It’s certainly amazing.

      As for an Objectivist front end to it, color me a solid No.

      As I watched the talk, I kept hoping he would get to the part about how ChatGPT just makes stuff up when it feels like it, and how they know it’s a huge problem, and what they’re doing to fix it. No such luck.

      To double-check on their progress to date, I just asked it this question:

      Does the claim that Atlas Shrugged is a defense of well-poisoning have merit?

      Its response (edited for brevity):

      The accusation that Atlas Shrugged is a defense of “well-poisoning” likely stems from Ayn Rand’s philosophical stance and the actions of some characters in the novel.

      Critics argue that the novel can be interpreted as portraying those who disagree with Objectivism as inherently corrupt, weak, or misguided, rather than engaging in substantive debate.

      It’s essential to recognize that interpretations of literature can vary, and the perception of whether Atlas Shrugged defends well-poisoning may depend on one’s philosophical background and perspective on Rand’s ideas.

      Summarizing: “Yeah, maybe, whatever.”

      /sb

    • #147595

      Re: Jim Hanson’s post 147576 of 11/9/23

      Your example is precisely why this is important.  These GPTs are the future.  People will get used to using them like we are used to Google searches.  A properly trained GPT can be (or soon can be) instructed to cite only Ayn Rand’s original writings.  
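
      As a sketch of what "instructed" could mean in practice: the Python example below passes the citation restriction to a chat model as a system prompt. It assumes the OpenAI Python SDK; the model name and prompt wording are illustrative, and a system prompt is a request to the model, not an enforcement mechanism.

      # Minimal sketch: constraining a chat model by instruction alone.
      # Assumes the `openai` SDK and an OPENAI_API_KEY in the environment.
      from openai import OpenAI

      client = OpenAI()

      SYSTEM_PROMPT = (
          "You answer questions about Ayn Rand's philosophy. "
          "Cite only Ayn Rand's original published writings. "
          "If her writings do not address the question, say so rather than guess."
      )

      def ask(question: str) -> str:
          response = client.chat.completions.create(
              model="gpt-4o",  # illustrative model choice
              messages=[
                  {"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": question},
              ],
          )
          return response.choices[0].message.content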

      *sb

    • #147602

      Re: Zachary Kauble’s post 147595 of 11/13/23

      I’ve mainly learned computing from Harvard’s CS50; this is their talk on the subject:

      https://www.youtube.com/watch?v=vw-KWfKwvTQ

      /sb

    • #147603

      Re: Zachary Kauble’s post 147595 of 11/13/23

      Well, think about how, and whether, stuffing it with text (any text) could possibly make a dent in wishy-washy non-answers like the one it gave me. The thing sounds like a middle-schooler who hasn’t read the material trying to B.S. his way to the minimum required word count. As of course it must, given its architecture.

      Ayn Rand was a writer of breathtaking clarity and precision. To run her careful formulations through a statistical blender such as ChatGPT would destroy them.

      *sb

    • #147605

      Re: Jim Hanson’s post 147603 of 11/16/23

      Well, think about how, and whether, stuffing it with text (any text) could possibly make a dent in wishy-washy non-answers like the one it gave me.

      I thought the answer was spot on.

      You asked:

      Does the claim that Atlas Shrugged is a defense of well-poisoning have merit?

      And it answered:

      It’s essential to recognize that interpretations of literature can vary, and the perception of whether Atlas Shrugged defends well-poisoning may depend on one’s philosophical background and perspective on Rand’s ideas.

      In other words: it depends on who you ask. 

      ChatGPT is not a person; it does not think, so it can’t give an objective answer. What it does is assemble information from a given data set. If that data set is composed of the opinions of all people, then the answer is: it depends.

      It’s just like Wikipedia. It’s not a source of objectivity; it’s where you go if you want a condensation of the prevalent views on a given subject.

      /sb

    • #147606

      Re: Jim Allard’s post 147605 of 11/17/23

      Its answer would have been okay (not good, because not contentful) if I had asked it what other people would say, but I didn’t.

      I agree completely with your point that it can’t give objective answers. That’s kinda my point too. 

      If I were to go to some page on ARI’s website and type in some sort of query, the very last thing I would want would be a condensation of the prevalent views. I would want an objectively validated response. And if I discovered I hadn’t gotten one, I would start to doubt ARI.

      Or consider: if somebody trying to learn about Objectivism were to ask ARI’s hypothetical GPT bot my question, and got an answer like the one I got, what would that teach him? Maybe that Objectivism is non-judgmental? Or that there’s a diversity of opinion on this question amongst Objectivists? Or that ARI subscribes to the notion that it’s proper to be tolerant of alternative viewpoints about it?

      *sb

    • #147614

      Re: Jim Hanson’s post 147606 of 11/17/23

      Or consider: if somebody trying to learn about Objectivism were to ask ARI’s hypothetical GPT bot my question, and got an answer like the one I got, what would that teach him?

      You wouldn’t get the same answer. ARI’s AI would give you ARI’s views on any issue. A Marxist AI would give Marxist views on any issue.

      Machines like websites and AI do not think — they cannot be objective — they just take the information they are given and present it to the user in a new form.

      *sb

    • #147615

      Re: Jim Allard’s post 147614 of 11/18/23

      Machines like websites and AI do not think — they cannot be objective — they just take the information they are given and present it to the user in a new form.

      If you had said “text” instead of “information,” I would have agreed with you. What you get from ChatGPT et al. is a simulacrum of information. 

      From what the video said, I think these custom GPTs consist of a specialized module or front-end attached to OpenAI’s common core engine. The core engine was trained on some vast corpus of text taken off the internet, and was responsible for the answer I got. The specialized module would be the thing trained on, for example, AR’s work. It would contribute bits of text when the word-probabilities were in its favor.
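
      Read as retrieval-augmented generation, that arrangement might look like the sketch below: a corpus-specific index supplies relevant passages, and the general-purpose model is told to compose its answer from them. The corpus contents, chunking, and model names here are illustrative assumptions, not details from the video.

      # Rough sketch of a "specialized module + core engine" arrangement
      # as retrieval-augmented generation. Assumes the `openai` SDK and numpy.
      import numpy as np
      from openai import OpenAI

      client = OpenAI()

      # Placeholder corpus: in practice, pre-chunked passages from the works.
      passages = [
          "First passage from the specialized corpus...",
          "Second passage from the specialized corpus...",
      ]

      def embed(text: str) -> np.ndarray:
          result = client.embeddings.create(
              model="text-embedding-3-small", input=text
          )
          return np.array(result.data[0].embedding)

      # Embed the corpus once. OpenAI embeddings are unit-length, so the
      # dot product below is cosine similarity.
      index = np.array([embed(p) for p in passages])

      def answer(question: str, k: int = 2) -> str:
          scores = index @ embed(question)
          context = "\n\n".join(passages[i] for i in np.argsort(scores)[-k:])
          response = client.chat.completions.create(
              model="gpt-4o",  # illustrative model choice
              messages=[
                  {"role": "system",
                   "content": "Answer using only the provided passages. "
                              "If they are insufficient, say so."},
                  {"role": "user",
                   "content": f"Passages:\n{context}\n\nQuestion: {question}"},
              ],
          )
          return response.choices[0].message.content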

      /sb
