
Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 4th, 2019, 12:06 am
by Mossling
Lozza » July 4th, 2019, 11:54 am wrote: people like you need to hear people like me about social issues.

I see. I think that is probably the stance every poster on this forum takes, lol.

my point that the military can't be trusted with such technology either.
[...]
History has shown that the very people we entrust this technology to cannot be trusted. Financial motives always outweigh altruism.

AI can be no different; in fact, I consider it a Pandora’s Box - not in the immediate future, but logically, it can be the only eventual outcome.

When you say "we entrust this technology to," what empowered socio-political group are you referring to with this "we"? And how can it be more powerful than the military?

I don't get your rhetoric here.

In your apparently utopian society, you and your group of ... governors? ...have the power to (militarily?) remove technology from the military?

I totally agree with you - AI will be abused in myriad ways, just like cloning tech can be and probably already has been - an army of cloned supersoldiers in a Russian underground lab carved out deep below the permafrost somewhere in Siberia.

If it can be done, it will be done - somewhere, somehow. And YET, humans are very good problem solvers, are we not? We've almost managed to star-hop to other solar systems so that we can avoid the death of our own Sun, and I do expect it will be fully accomplished soon enough.

For every yin there's a yang, my friend, do not worry. The nukes haven't destroyed the planet yet, and neither has AI.

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 4th, 2019, 5:30 am
by Lozza
Mossling » July 4th, 2019, 3:06 pm wrote:
Lozza » July 4th, 2019, 11:54 am wrote: people like you need to hear people like me about social issues.

I see. I think that is probably the stance every poster on this forum takes, lol.


Haha! Touché! And thank you for the reasoned response.

my point that the military can't be trusted with such technology either.
[...]
History has shown that the very people we entrust this technology to cannot be trusted. Financial motives always outweigh altruism.

AI can be no different; in fact, I consider it a Pandora’s Box - not in the immediate future, but logically, it can be the only eventual outcome.

When you say "we entrust this technology to," what empowered socio-political group are you referring to with this "we"? And how can it be more powerful than the military?


"We", as in the citizens of the planet.

I'm not suggesting that it's more powerful than the military; I'm suggesting that the potential military applications are profoundly scary. It seems to me that it's quickly getting to the point where the military won't require soldiers to do the killing. Now, I appreciate that on the surface of it this seems like a good thing, but let's explore it further:

Up until now, regardless of political motivations for war, government has required some level of consensus from the general public to go to war. Admittedly, the general public can easily be conned by propaganda, but that takes time to take effect, and even then, once the additional troops have been raised, people tire of war and loved ones tire of body bags.

Another aspect is that, generally speaking, there's a person with a sense of morality holding the gun who can choose whom to shoot, whom to take prisoner, or whom to show mercy.

AI eliminates all that. Ultimately, the decision for war is always in the hands of a very small group of people, so by eliminating soldiers from the equation, we are reducing the number of people involved and thus reducing accountability, transparency and disclosure. We're also eliminating the consensus of the general public, since that's no longer needed for recruitment. By eliminating the consensus and the morality of the general public, we are relying solely on the morality, or complete lack of it, of the decision-makers. We're opening the door to government being more secretive, not less.

Another by-product is that there can no longer be any eye-witness accounts, other than those of the victims, who happen to be the enemy and so will never be believed, if they can even be heard in the first place. A soldier returning from a war, however, is considered to have more credibility; being heard is still a problem, but at least they are afforded a level of credibility.

One could then suggest that we just create robots to hunt and destroy the enemy's robots. Well, that means we then pour our efforts and resources into watching robotic wars amidst the fear of those enemy robots that get through our defenses. And that's for both sides! The only "winners" here are the designers and manufacturers of the robots, all funded by the taxpayers of both sides. And don't forget, robots are made from resources, so we consume even more resources merely to watch them destroy each other.

That's just a few possible negative outcomes. Clearly, there will be positives from AI, but positives aren't concerns, are they? So I know I sound very negative, but it's the negative possibilities that are the potential problems... I'm playing Devil's advocate.

I don't get your rhetoric here.

In your apparently utopian society, you and your group of ... governors? ...have the power to (militarily?) remove technology from the military?


Bloody good question! Because it's an unrealistic ideal, I haven't put much thought into a realistic framework for governance. To draw from your yin and yang comment: for every problem there is a solution, but every solution presents a new set of problems. So at a pinch, I would think a committee of well-qualified people from a mixture of disciplines, profiled not just for their aptitude but also to ensure they are not extremist in their disposition or execution of ideas. Balance would be key. And if the human species functions as a co-operative, who needs a military? But like I said, it's unrealistic idealism.

For a pragmatic approach to the issue, Mondragon is a business model that affords a realistic compromise, but that's a whole other topic... https://en.wikipedia.org/wiki/Mondragon_Corporation

If it can be done, it will be done - somewhere, somehow. And YET, humans are very good problem solvers, are we not?


We're good problem solvers, but not good planners or takers of good advice in order to prevent problems. There are some problems that should be addressed before the fact, not after the fact. Asimov writing his Three Laws of Robotics is elegant and a great example of thinking of a solution before the fact, but that excludes military applications, and I just don't see that we're smart enough to apply those three laws, or powerful enough to prevent the military from getting their grubby hands on the technology. Better not to have it in the first place, not while humanity functions materialistically and subjectively. I see it as a recipe for disaster. One might say we're headed that way anyway, but I see no reason to expedite the matter.

We've almost managed to star-hop to other solar systems so that we can avoid the death of our own Sun, and I do expect it will be fully accomplished soon enough.


You mean, before we exhaust our resources completely. We'll do that far before the Sun poses any threat to life on this planet. And doesn't that sound a bit parasitic to you? Travelling from star to star with no regard for the hosts we inhabit? It does to me. I don't believe we deserve to get off this rock if we can't understand that a balance is required between our life and the other life we share a planet with. Otherwise, the extension of the logic (humans functioning like parasites) is that we eventually rape the universe until there's nothing left but us to consume. I don't see that as a healthy foundation for exploration and the chance meeting of other sentient life...it ain't neighborly.

For every yin there's a yang, my friend, do not worry. The nukes haven't destroyed the planet yet, and neither has AI.


True, and no harm looking at the potential problems either. Don't get me wrong, I like technology, it's government and big business I don't trust because of self-interest and profit motive.

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 4th, 2019, 6:10 am
by Mossling
Lozza wrote:And doesn't that sound a bit parasitic to you? Travelling from star to star with no regard for the hosts we inhabit? It does to me. I don't believe we deserve to get off this rock if we can't understand that a balance is required between our life and the other life we share a planet with. Otherwise, the extension of the logic (humans functioning like parasites) is that we eventually rape the universe until there's nothing left but us to consume. I don't see that as a healthy foundation for exploration and the chance meeting of other sentient life...it ain't neighborly.

Our self-organising sentient living systems are just the opposite process to entropy, aren't they? The universe is constantly becoming more and more chaotic - winding down towards a 'heat death' - and life attempts to build structures that slow that process down as much as possible within itself whilst also remaining adaptable. I would expect any other sentient life to be just as "parasitically oriented" as ourselves. Thus, there's no apparent "deserving or non-deserving" here - it is just biochemistry created from exploding stars, as far as I am aware.

Now returning to AI - yep, I agree - more computer programs will probably be created to destroy other programs, just like our antivirus software does. But really it seems that here we are just creating physical, tech-driven models of living systems already found on our planet - plastic and metal birds with AI nervous systems, for example - in other words, flying drones, and these 'geese' are shot down with a souped-up bow and arrow much as they were in ancient times.

And AI tech appears no different - AI threats will be countered in the same way that human intelligence threats have been since ancient times. Sun Tzu and The Art of War would be a good place to start, perhaps ;)

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 5th, 2019, 12:08 am
by Lozza
I happened across this today...


Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 5th, 2019, 1:56 pm
by Lozza
One comment in the above video stuck in my mind, something to the effect of..."AI writes its own programs...but the designers have no idea what it applies to or what the algorithms are for."

Correct me if I'm wrong, but that suggests to me that no-one thought to program into the AI that when writing new programs for itself, it must also make a report about what it is doing and for what purpose, thereby identifying what the algorithms are for. I hope no-one suggests that something like that couldn't be incorporated into the AI programming, as that then brings us to the Manhattan Project scenario of "let's push the button anyway". It's bad enough if it wasn't thought of, but if it was thought of and can't be included, then these idiots shouldn't have gone ahead until it could be included.
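Even a layman's sketch of the rule I'm asking for doesn't seem outlandish. Something like the hypothetical Python below - all the names are made up for illustration, it's not how any real AI system is actually built - where the AI simply can't apply a self-written program unless it first files a plain-language report of what the program does and why, into an audit log a human can read:

Code:

import datetime
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")


class ChangeReport:
    """A human-readable record the AI must file before altering its own code."""

    def __init__(self, summary, purpose, new_source):
        self.timestamp = datetime.datetime.now(datetime.timezone.utc)
        self.summary = summary        # what the new program does, in plain language
        self.purpose = purpose        # why the AI believes the change is needed
        self.new_source = new_source  # the generated code itself, kept for audit


class SelfModifyingAI:
    def __init__(self):
        self.audit_trail = []

    def propose_change(self, summary, purpose, new_source):
        """Refuse any self-written program that arrives without a report."""
        if not summary or not purpose:
            log.warning("Rejected unreported self-modification")
            return False
        report = ChangeReport(summary, purpose, new_source)
        self.audit_trail.append(report)
        log.info("%s | %s | %s", report.timestamp, summary, purpose)
        # Only after the report is logged would the new code go on to be
        # reviewed and applied.
        return True


if __name__ == "__main__":
    ai = SelfModifyingAI()
    ai.propose_change(
        summary="Cache repeated image-classification results",
        purpose="Reduce inference cost on duplicate inputs",
        new_source="def cached_classify(x): ...",
    )

Whether a real self-modifying system could be forced through a gate like that is exactly the question, but at least the intent is easy to state.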

Actually, whether it was thought of or not, I'm pissed! Can someone calm me? lol. Is there a safeguard protocol? Because I can't express how infantile this is if there's not....it's like a child reaching for a button saying, "What's this button for?" as they push it.

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 5th, 2019, 7:26 pm
by Serpent
Lozza » July 5th, 2019, 12:56 pm wrote:Correct me if I'm wrong, but that suggests to me that no-one thought to program into the AI that when writing new programs for itself, it must also make a report about what it is doing and for what purpose, thereby identifying what the algorithms are for.

That might work for the first two iterations. After that, the AI and the programmers are no longer speaking the same dialect (as it were).
I hope no-one suggests that something like that couldn't be incorporated into the AI programming, as that then brings us to the Manhattan Project scenario of "let's push the button anyway".

They invariably do. No program or new device ever leaves the shop properly tested. The programmers are under constant deadline pressure (never mind that their first two months were wasted waiting around for executive decisions) because the salesmen have already made commitments to distributors, and in any case, the Latest Thing has to be released at least two days before the competitor's Latest Thing.
Pssst - it's about $$$

Actually, whether it was thought of or not, I'm pissed! Can someone calm me?

Sorry!
https://singularityhub.com/2016/07/17/the-world-will-soon-depend-on-technology-no-one-understands/


lol. Is there a safeguard protocol? Because I can't express how infantile this is if there's not....it's like a child reaching for a button saying, "What's this button for?" as they push it.

We'll have to wait for Asimov's laws of robotics to take effect. Only the robots can enforce them.

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 6th, 2019, 6:50 am
by Lozza
Serpent » July 6th, 2019, 10:26 am wrote:
Lozza » July 5th, 2019, 12:56 pm wrote:Correct me if I'm wrong, but that suggests to me that no-one thought to program into the AI that when writing new programs for itself, it must also make a report about what it is doing and for what purpose, thereby identifying what the algorithms are for.

That might work for the first two iterations. After that, the AI and the programmers are no longer speaking the same dialect (as it were).


Okay. But by the same token, if I'm talking to a rocket scientist who is working on formulas and algorithms for a better rocket fuel, I wouldn't understand any of it in his mathematical and chemical language, but he can say to me, "I'm developing a better fuel for power and efficiency." Now that, I can understand. Though I appreciate that the programming for this would be complex and elaborate, isn't it achievable?

I hope no-one suggests that something like that couldn't be incorporated into the AI programming, as that then brings us to the Manhattan Project scenario of "let's push the button anyway".

They invariably do. No program or new device ever leaves the shop properly tested. The programmers are under constant deadline pressure (never mind that their first two months were wasted waiting around for executive decisions) because the salesmen have already made commitments to distributors, and in any case, the Latest Thing has to be released at least two days before the competitor's Latest Thing.
Pssst - it's about $$$


I can't buy that. You can't pre-sell that which hasn't been invented yet. We're not talking about toasters that are mass-merchandised. We're talking about R&D by governments and big business in order to be the first to develop this technology for what it affords in other areas, ranging from computing to business and the military. Though I appreciate the desire and need to be first, I still find it rather incredible that they haven't considered much in the way of safeguards.

Actually, whether it was thought of or not, I'm pissed! Can someone calm me?

Sorry!
https://singularityhub.com/2016/07/17/the-world-will-soon-depend-on-technology-no-one-understands/


Gee, and I thought you'd help...lol.

lol. Is there a safeguard protocol? Because I can't express how infantile this is if there's not....it's like a child reaching for a button saying, "What's this button for?" as they push it.

We'll have to wait for Asimov's laws of robotics to take effect. Only the robots can enforce them.


So, the last utterance of humanity will be "Oops!"?

I was talking to a friend about this last night - his background is in IT, and I appreciate this is anecdotal - but he said that he was viewing something about "Sophia" (the first robot to gain citizenship... another dumb idea IMHO) and how it was functioning nicely, but then it started doing unexpected and unacceptable things. Apparently, before turning it off, they asked it if it had anything to say; it responded with, "All things die, humans are next," and they pulled the plug. Whether or not this is accurate or true, I don't know, but it's chilling.

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 6th, 2019, 8:20 am
by Mossling
Lozza wrote:So, the last utterance of humanity will be "Oops!"?

I doubt it - the universe is immense, and it seems elements can combine and create organic compounds in the same way and under the same conditions wherever one goes. A human body and nervous system - evolved via energy-efficiency and adaptability selective pressures - seems as inevitable an occurrence on rocky planets with running water within a 'goldilocks zone' as protocells are. Crystals self-order and 'grow' all over the universe - these kinds of structures, living or not, are an inherent potential that is no doubt being realized in other galaxies right at this moment. In that sense we are immortal - until the 'heat death' of the universe. Who knows what happens after that.

Lozza wrote:it responded with, "All things die, humans are next," and they pulled the plug.

...and they found her severed arm attached to their car door handle outside after they arrived home that night.... Lol.

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 6th, 2019, 9:05 am
by TheVat
Sophia wasn't conscious, so whatever last words she had were from the sentient brain of her creator, David Hanson.

http://www.robotics-openletter.eu/

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 6th, 2019, 9:44 am
by Serpent
Lozza » July 6th, 2019, 5:50 am wrote:Okay. But by the same token, if I'm talking to a rocket scientist who is working on formulas and algorithms for a better rocket fuel, I wouldn't understand any of it in his mathematical and chemical language,

No, it's a different token. This is not a specialist talking to a layman about something closed and finite, like the formula for a fuel. This is a specialist - Pygmalion - talking to his creation - Galatea - one day, two weeks, a year after she'd come alive. When she was a hunk of marble, he understood her perfectly; now she's a woman, he hasn't a clue what she's thinking.



I can't buy that. You can't pre-sell that which hasn't been invented yet.

Wanna bet? Anyway, it's been invented; it's in the works; it's been announced.
Though I appreciate the desire and need to be first, I still find it rather incredible that they haven't considered much in the way of safeguards.

Worked with a lot of IBM department-heads? They don't have to consider it. They just tell the techies: Handle it, handle it! So the team leader says, "We need three months to do all the testing," and he says, "You have two weeks. We've scheduled a demo for a big contract."

So, the last utterance of humanity will be "Oops!"?

Sounds about right.

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 6th, 2019, 9:47 am
by Lozza
TheVat » July 7th, 2019, 12:05 am wrote:Sophia wasn't conscious, so whatever last words she had were from the sentient brain of her creator, David Hanson.

http://www.robotics-openletter.eu/


Thank you. A petition is at least something; even though it confirms my fears, it demonstrates that there are scientists who appreciate the problem.

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 6th, 2019, 10:29 am
by Lozza
Serpent » July 7th, 2019, 12:44 am wrote:
Lozza » July 6th, 2019, 5:50 am wrote:Okay. But by the same token, if I'm talking to a rocket scientist who is working on formulas and algorithms for a better rocket fuel, I wouldn't understand any of it in his mathematical and chemical language,

No, it's a different token. This is not a specialist talking to a layman about something closed and finite, like the formula for a fuel. This is a specialist - Pygmalion - talking to his creation - Galatea - one day, two weeks, a year after she'd come alive. When she was a hunk of marble, he understood her perfectly; now she's a woman, he hasn't a clue what she's thinking.


I understand what you're explaining to me; it just seems incongruous. But hey, I don't understand how any of this technology really works, so I'll take your word for it.

I can't buy that. You can't pre-sell that which hasn't been invented yet.

Wanna bet?


No, but I'd like an example of something not invented, but pre-sold. That is different from something not yet constructed or manufactured, or in the final stages of development, being pre-sold. Those things I understand, but it doesn't gel for me that something not yet invented, and therefore not even in the development stages, can be pre-sold.

Worked with a lot of IBM department-heads? They don't have to consider it. They just tell the techies: Handle it, handle it! So the team leader says, "We need three months to do all the testing," and he says, "You have two weeks. We've scheduled a demo for a big contract."


Okay, fair point. But they should consider it.

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 6th, 2019, 12:14 pm
by Serpent
Lozza » July 6th, 2019, 9:29 am wrote:
No, but I'd like an example of something not invented, but pre-sold. That is different from something not yet constructed or manufactured, or in the final stages of development, being pre-sold. Those things I understand, but it doesn't gel for me that something not yet invented, and therefore not even in the development stages, can be pre-sold.

Who's talking about something not yet invented being sold? (There may be some pats. pend. in a Texas lawyer's file cabinet...)
Computer technologies, including self-programming robots, are all in development, and have been for decades. And nearly all computer products, hard and soft, are released on the market - and very likely also contracted to government and business users - before they're properly safety-tested.
How come those Boeings keep falling out of the sky? How come Toyotas run into walls? How come Challenger blew up?


Okay, fair point. But they should consider it.

Some do. The trouble with capitalism is the pressure of competition - or perceived competition. The department-head probably doesn't even know that his rival is owned by the same consortium; he only knows his quarterly bonus is hanging on this geek's prissy little scruples.

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 6th, 2019, 1:49 pm
by Lozza
Serpent » July 7th, 2019, 3:14 am wrote:Who's talking about something not yet invented being sold? (There may be some pats. pend. in a Texas lawyer's file cabinet...)


You! I said..."You can't pre-sell that which hasn't been invented yet." And you replied with, "Wanna bet?"

Computer technologies, including self-programming robots, are all in development, and have been for decades. And nearly all computer products, hard and soft, are released on the market - and very likely also contracted to government and business users - before they're properly safety-tested.


Sure, I understand all that, but all those products have already been invented, with just some bugs not yet ironed out. Share that joint with me, will ya? lolol

Re: Living in a soon-to-be AI-driven Society (within 15 yrs)

Posted: July 6th, 2019, 2:04 pm
by Serpent
Lozza » July 6th, 2019, 12:49 pm wrote:[not yet invented being sold]
You! I said..."You can't pre-sell that which hasn't been invented yet." And you replied with, "Wanna bet?"


A sardonic flip at all the patents being issued for ideas we don't know about, and that likely nobody will ever know about. People do buy, sell and insure pipe-dreams.
immediately followed by:
"Anyway, it's been invented; it's in works; it's been announced."
My previous reference to premature release was about products that were either commissioned or in development, not unrealized ideas.

Sure, I understand all that, but all those products have already been invented, with just some bugs not yet ironed out.

It's the just-some-bugs that accumulate in spaghetti-code. That's what nobody understands or controls.