Haha! Touché! And thank you for the reasoned response.
my point that the military can't be trusted with such technology either.
[...]
History has shown that the very people we entrust this technology to cannot be trusted. Financial motives always outweigh altruism.
AI can be no different. In fact, I consider it a Pandora's Box: not in the immediate future, but logically, that is the only eventual outcome.
When you say "we entrust this technology to," what empowered socio-political group are you referring to with this "we"? And how can it be more powerful than the military?
"We", as in the citizens of the planet.
I'm not suggesting that it's more powerful than the military; I'm suggesting that the potential military applications are profoundly scary. It seems to me that we're quickly getting to the point where the military won't require soldiers to do the killing. Now, I appreciate that on the surface this seems like a good thing, but let's explore it further:
Up until now, regardless of the political motivations for war, government has required some level of consensus from the general public to go to war. Admittedly, the general public can easily be conned by propaganda, but that takes time to take effect, and even then, once the additional troops are raised, people tire of war and loved ones tire of body bags.
Another aspect is that, generally speaking, there's a person with a sense of morality holding the gun, who can choose whom to shoot, whom to take prisoner, and when to show mercy.
AI eliminates all that. Ultimately, the decision for war is always in the hands of a very small group of people, so by eliminating soldiers from the equation, we reduce the number of people involved, and thus reduce accountability, transparency and disclosure. We also eliminate the consensus of the general public, since that's no longer needed for recruitment. By eliminating the consensus and the morality of the general public, we rely solely on the morality, or complete lack of it, of the decision-makers. We're opening the door to government being more secretive, not less.
Another by-product is that there can no longer be any eyewitness accounts, other than those of the victims, who happen to be the enemy and so will never be believed, if they can even get heard in the first place. A soldier returning from a war, by contrast, is afforded at least some credibility; being heard is still a problem, but their account carries weight.
One could then suggest that we just build robots to hunt and destroy the enemy's robots. Well, that means we pour our efforts and resources into watching robotic wars, amidst the fear of the enemy robots that get through our defenses. And that's for both sides! The only "winners" here are the designers and manufacturers of the robotics, all funded by the taxpayers of both sides. And don't forget, robots are made from resources, so we consume even more resources merely to watch them destroy each other.
Those are just a few possible negative outcomes. Clearly, there will be positives from AI, but positives aren't concerns, are they? So I know I sound very negative, but it's the negative possibilities that are the potential problems... I'm playing Devil's advocate.
I don't get your rhetoric here.
In your apparent utopia of a society, you and your group of... governors? ...have the power to (militarily?) remove technology from the military?
Bloody good question! Because it's an unrealistic ideal, I haven't put much thought into a realistic framework for governance. So, to draw from your yin and yang comment: though for every problem there is a solution, every solution presents a new set of problems. At a pinch, I would suggest a committee of well-qualified people from a mixture of disciplines, profiled not just for their aptitude but also to ensure they are not extremist in their disposition or execution of ideas. A balance would be key. And if the human species functioned as a co-operative, who would need a military? But like I said, it's unrealistic idealism.
For a pragmatic approach to the issue, Mondragon is a business model that affords a realistic compromise, but that's a whole other topic...
https://en.wikipedia.org/wiki/Mondragon_Corporation
If it can be done, it will be done - somewhere, somehow. And YET, humans are very good problem solvers, are we not?
We're good problem solvers, but not good planners or takers of good advice when it comes to preventing problems. Some problems should be addressed before the fact, not after it. Asimov writing his Three Laws of Robotics is elegant and a great example of devising a solution before the fact, but that excludes military applications, and I just don't see that we're smart enough to apply those three laws, or powerful enough to prevent the military from getting their grubby hands on the technology. Better not to have it in the first place, not while humanity functions materialistically and subjectively. I see it as a recipe for disaster. One might say we're headed that way anyway, but I see no reason to expedite the matter.
We've almost already managed to star-hop to other solar systems so that we can avoid the death of our own Sun, and I do expect it will be fully accomplished soon enough.
You mean, before we exhaust our resources completely. We'll do that long before the Sun poses any threat to life on this planet. And doesn't that sound a bit parasitic to you? Travelling from star to star with no regard for the hosts we inhabit? It does to me. I don't believe we deserve to get off this rock if we can't understand that a balance is required between our life and the other life we share a planet with. Otherwise, the extension of the logic (humans functioning like parasites) is that we eventually rape the universe until there's nothing left but us to consume. I don't see that as a healthy foundation for exploration and the chance meeting of other sentient life... it ain't neighborly.
For every yin there's a yang, my friend, do not worry. The nukes haven't destroyed the planet yet, and neither has AI.
True, and there's no harm in looking at the potential problems either. Don't get me wrong, I like technology; it's government and big business I don't trust, because of self-interest and the profit motive.