zetreque » March 20th, 2018, 1:36 pm wrote:
Bullshit technology that does NOTHING to help humanity if you ask me. People can research and build it as a toy all they want, but I will always be against these things on the road.
A human driver or pilot is going to fear for his own life. If I put my life into the hands of a robot that has no fear of death, then that's just idiotic.
zetreque » March 20th, 2018, 3:02 pm wrote:When you have other humans on the road, they might not die if they hit someone else, but they will fear the repercussions of injuring or killing someone and going to jail for manslaughter. A robot doesn't have that concern. If you have a human cab driver, he fears for his own life, so he isn't going to crash the car. If a robot is driving your car, it doesn't have that same fear for its own life.
zetreque » March 20th, 2018, 3:22 pm wrote:I have not seen anything to tell me that facial or object recognition is on par with human performance yet. And whoever is programming the ethics into these cars is somehow deciding everyone's fate based on their own ethics? Everyone has slightly different ethics, so how can you force a certain ethical code on everyone?
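For context on where off-the-shelf object recognition currently stands, here is a minimal sketch, assuming the torchvision library is installed and using a hypothetical image file name; it runs a pretrained detector and prints what it sees with confidence scores. It is illustrative only and says nothing about any particular self-driving stack.

```python
# Minimal sketch: run a pretrained object detector on one image and print
# what it finds. Assumes torchvision >= 0.13; "street_scene.jpg" is a
# hypothetical file name used only for illustration.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

image = read_image("street_scene.jpg")          # hypothetical input image
with torch.no_grad():
    detections = model([preprocess(image)])[0]  # dict of boxes, labels, scores

categories = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:                             # arbitrary confidence cut-off
        print(categories[int(label)], float(score))
```

Whether confidence scores like these amount to human-level perception in rain, glare, or darkness is exactly the open question raised above.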
Mossling » Mon Mar 19, 2018 11:30 pm wrote:At the end of the day - if someone does run out into the road in front of any vehicle travelling at significant speed, then there's going to be harm done no matter who is behind the wheel. The optimized programmed responses could probably reduce such harm, don't you think?
zetreque » March 20th, 2018, 4:09 pm wrote:Mossling » Mon Mar 19, 2018 11:30 pm wrote:At the end of the day - if someone does run out into the road in front of any vehicle travelling at significant speed, then there's going to be harm done no matter who is behind the wheel. The optimized programmed responses could probably reduce such harm, don't you think?
One last comment before I go to bed. Going back to the news of the fatality, I'm wondering if AI is going to end up being the scapegoat for deaths now? The blame is going to be lost. Who is to blame? The programmer at the auto company? The owner of the AI vehicle? Or will blame, as in this situation, be pushed off onto the "homeless" pedestrian?
And another problem with programmed ethics: you mention being able to have choices in how you program ethics into the vehicle. Most people don't even take the time to learn the cruise control feature or all the features of their car radio. I doubt they are really going to get into programming the ethics, which goes back to the scapegoating issue, pushing blame back onto the invisible intelligence.
And the bigger issue, I think, is subconscious ethics. We can all talk about ethics and tell people what our ethics are, but our real ethics come through in our actions. You can't tell what someone's ethics are going to be until that split-second decision on whether to hit the pedestrian crossing or the car in the oncoming or adjacent lane. It would take one hell of an advanced camera and sensor network to calculate all of that and how a human would respond in the infinite situations out there in the world.
Another thing I just thought of: if we are putting ourselves into the hands of the ethics forced upon us by these programmed ethical AIs, does that take away our freedom in a way? For example, society democratically (though not really democratically) decides what laws we live by. Our laws are a representation of our overall society's ethics. Laws take away freedom, though, and many who do not agree with certain laws because they have different ethics claim a loss of freedom. If you have one automaker deciding the ethics for everyone, then the ethics forced upon everyone through the AI's decisions are not determined by any democratic process.
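To make "having choices in how you program ethics into the vehicle" concrete, here is a minimal, hedged sketch in Python. Everything in it, including the `EthicsProfile` class, its weights, and the risk numbers, is invented for illustration and does not describe any real manufacturer's system.

```python
from dataclasses import dataclass

@dataclass
class EthicsProfile:
    """Hypothetical owner-configurable weights for an autonomous vehicle's
    emergency-manoeuvre planner. Purely illustrative; no real system implied."""
    occupant_weight: float = 1.0    # priority given to people inside the car
    pedestrian_weight: float = 1.0  # priority given to people outside the car
    property_weight: float = 0.1    # priority given to avoiding property damage

def expected_harm(profile: EthicsProfile, occupant_risk: float,
                  pedestrian_risk: float, property_risk: float) -> float:
    """Weighted sum of estimated harms for one candidate manoeuvre (lower is better)."""
    return (profile.occupant_weight * occupant_risk
            + profile.pedestrian_weight * pedestrian_risk
            + profile.property_weight * property_risk)

# Compare two candidate manoeuvres under the factory-default profile.
profile = EthicsProfile()  # an owner could, in principle, change these weights
candidates = {
    "brake_straight": expected_harm(profile, 0.2, 0.6, 0.0),
    "swerve_left":    expected_harm(profile, 0.5, 0.1, 0.4),
}
print(min(candidates, key=candidates.get))  # planner picks the lowest-harm option
```

Notice that whoever sets the default weights is, in effect, deciding everyone's fate based on their own ethics, and a driver who never bothers with cruise control is unlikely to ever open such a menu, which is exactly the objection raised above.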
I have no doubt that they will be able to at some point in the future, if not right now.
There are wandering animals, debris from landslides, drunk guys lying around that can easily look like a garbage bag inflated by the wind, and so on and so forth....
In a 2,000-sq ft grow space, leafy greens and herbs are planted in individual pots housed in 4ft by 8ft white “grow modules”, which weigh about 800lb.
Autonomous machines do the heavy lifting, farming and sensing. “Angus”, which the Iron Ox co-founder Brandon Alexander described as “incredibly intelligent” and like a self-driving car (he gushed about being “very proud of it”), is a 1,000lb machine that moves around the farm, sensing and lifting, and transporting grow modules to the processing area.
There, a robotic arm, which is also autonomous, harvests the plants by gripping the pots. This reduces damage to the plant itself – which Alexander said was devilishly hard to accomplish and required developing a way for the machine to recognize plants as such and then be able to analyze them at a submillimeter scale. The robotic arm has four Lidar sensors and can “see” in 3D thanks to two cameras, which also allow it to identify diseases, pests and abnormalities, according to the company.
[...]
he said to expect “rapid adoption”. “[Farmers] are looking for technological solutions,” said Slaughter.
[...]
Iron Ox plans to begin selling its produce to some Bay Area restaurants and grocery stores later this year and sell to the entire region next year, with a goal of opening several more farms around urban centers in the coming years to reduce produce transportation times and costs.
BadgerJelly » October 9th, 2018, 11:59 pm wrote:Are we already living in an AI driven society? I would argue quite strongly that we are due to the algorithms in use for advertising and the manner in which information is distributed.
Right - at what point does the balance tip?
BadgerJelly » October 10th, 2018, 7:12 pm wrote:Note: I don’t class “robots” as AI.
Serpent » February 26th, 2019, 1:05 am wrote:I don't believe "thinking and feeling" a la Robin Williams' Andrew is the issue here.
The issue is the socio-economic changes taking place with the advent of technology.
It's the human functions that machines - computers, robots, automated forklifts and harvesters, factories and warehouses - can perform more efficiently and cheaply than people can.
There are, in fact, very few jobs that some mechanical device can't perform, now or in the near future.
So that brings up three important questions:
How does this change the way money is deployed?
What happens to the people?
How is the environment altered?
It is happening, and changes have already taken place; more changes will take place. How do we control the process? How do we adapt?
What's the end-point and purpose?
BadgerJelly » February 25th, 2019, 4:49 pm wrote:I guess I do get a little confused by what people mean when talking about “AI” because we are already essentially living in an “AI-driven society”.
If we’re talking about automation then, yes. It does appear that simple labour will be replaced by automated units even more in the future due to these technologies becoming more cost-effective.
If we’re talking about thinking feeling “AI” then, nope. Nothing anytime soon as far as I’ve heard from anyone reasonable in the given field.
BadgerJelly » February 26th, 2019, 7:40 pm wrote:Moss -
Confusing? Yes, hence my issue with what you mean by AI exactly. Look at what we’re using right now. It is an AI system ... but if you don’t mean AI as in computers, then do you mean actual “intelligent” beings made by humans? Conscious systems? That is a long way off. We’re already so far down the automated road that I doubt we can turn back now.
It doesn’t take much to see that the internet has birthed multiple AI systems which sift through data faster than any human can. AI in that sense is already here, yet it’s crept in so subtly that many haven’t noticed (especially younger generations).
In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert.[1] Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code.[2] The first expert systems were created in the 1970s and then proliferated in the 1980s.[3] Expert systems were among the first truly successful forms of artificial intelligence (AI) software.[4][5][6][7][8] An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.[9]
IBM has suggested that Watson would be enabled to supplement its databanks with information pulled from the Internet in real time. This not only sounds sexy but probably reflects the true cost of preparing knowledge for Watson, but it carries the implication that data from the Internet potentially has the same value as expert knowledge that has been thoroughly vetted.
Crowd-sourcing (aka the wisdom of crowds) is a fancy name for making decisions based on anecdotal experience rather than on statistically valid samples studied under controlled conditions.
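As a concrete illustration of the knowledge base / inference engine split described in the expert-systems excerpt above, here is a minimal forward-chaining sketch in Python; the facts and rules are invented for the example and are not taken from any real system.

```python
# Minimal forward-chaining inference: keep applying if-then rules from the
# knowledge base to the known facts until nothing new can be deduced.
facts = {"engine_wont_start", "lights_work"}   # initial facts (invented example)

# Knowledge base: each rule is (conditions, conclusion).
rules = [
    ({"engine_wont_start", "lights_work"}, "battery_probably_ok"),
    ({"engine_wont_start", "battery_probably_ok"}, "suspect_starter_motor"),
]

changed = True
while changed:                                  # the inference engine
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)               # deduce a new fact
            changed = True

print(sorted(facts))
# ['battery_probably_ok', 'engine_wont_start', 'lights_work', 'suspect_starter_motor']
```

Systems like Watson layer far more sophisticated retrieval and ranking on top, but the basic rules-plus-facts loop is what the excerpt means by an inference engine deducing new facts.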
Mossling » February 26th, 2019, 5:24 am wrote:Serpent » February 26th, 2019, 1:05 am wrote:How do we control the process? How do we adapt?
What's the end-point and purpose?
This has already been tackled on this thread.
GBI/UBI is inevitable. The people who would do the robots' jobs in the event of some mass breakdown epidemic need to be there on standby - kind of like a 'standing economic army' that gets paid for being there as backup.
So the training for the jobs still needs to happen,
Badger Jelly -- I mentioned consciousness because I don’t believe positions like “doctors” (as in diagnosticians) can be replaced, because human beings are complex and I believe consciousness is needed to spot clues - that said, I do think some jobs will be lost.
In short we are already living in an AI driven society. I think it’s just a case of us waking up to the fact a few generations down the line.