Thank you for the second part of this fascinating essay, Ann. A couple of thoughts, off the top of my head. There is of course language meant to lift humanity up and language meant to help you figure out how your new washing machine or sewing machine, which is likely to be a computer, works. I guess what I'm thinking is, I can see that AI taking over some things might not be so bad, or at any rate, much worse? For example, the DMV manual, which I had to read this summer to take a test (after 40 yrs of driving, long story). I am not convinced that a computer could make it that much worse, even if it had never driven a car before. (I did love the suggestion to "dress for all weathers" to take my test. I am still trying to figure out how to do that!)
There is language all over the place that is inaccurate or does not conform to reality, that is written by people, not machines.
But I don't especially want to read a novel written by a machine because I'm not interested enough in what machines can do. And this may be a flaw in me. Since the machines we make are just reflections of us ... I feel like with AI we could be trying to find out more about who we are. More about ourselves and how our minds work. But it seems the main motivation for AI development is making money, for the usual suspects, not enhancing human potential.
I agree completely that there should be transparency at every level so that people know what they are getting and where it came from. But we don't have that now and it is hard to see how we will have more of it in the future, in a way that matters. It's not like computers have taught any of us patience ... or to read the fine print ... of course 'they' don't really want us to read the fine print ... they just want to make the world a better place, for gosh sakes!
Anyway these are just some ramblings from my tired brain this evening ... thank you again for an educational and inspiring read.
Thank you for these discerning observations! My own experience of chatbots so far is that they do not even help at the owner's manual level—except to make the writing of them faster and cheaper—because they don't know, unless they are told, how to anticipate people's questions and confusions. I see them producing a lot more of the garbled, half-baked texts of the sort we get half-translated now along with a cheap appliance, with the standards of writing continuing to go down as copyeditors everywhere get trimmed from budgets. We keep hearing that one needs to be experienced at writing the right prompts to get a good result, but that indicates we can easily go wrong (potentially without knowing it) if we ask the wrong questions.
You list so vividly and evocatively what writing does, or should do, you might consider offering this as a NYT op-ed, and get the message out.
Oh you are so kind! Hard to get the attention of those folks...
https://freakonomics.com/podcast/a-i-is-changing-everything-does-that-include-you/
I heard some of this the other week on the radio, but couldn't finish. I'm going to listen to it later today. Thought you might be interested!