Fish paste! A Tesla in Norway in ‘autopilot’ mode knocked a biker off. The car didn’t see him. A boffin tested several autonomous car systems using his own bike as the guinea pig. They didn’t see him or the bike either!!!! Paraphrased from an article I’ve just read in a bike mag. We’re doooooomed
Fonterra/Anchor have several very large "lights out" warehouses here in NZ. The one at work handles up to 28 tonnes of milk powder per hour: bagging, palletising, wrapping, stacking, storing, prioritising and shipping, and that's just for the one dryer (Dryer 4). That's probably about 60-odd people they don't have to employ. Our site has 4 dryers, cheese, casein, an AMF plant, a whey division, plus support depts. etc., and of course waste water, where I work, and employs about 600 people. It is in the top 3 biggest dairy plants in the world, handling up to 14 million litres of milk per day. If all the depts. were like Dryer 4 it would employ only a fraction of these people. Is this progress?
Watching the sex dolls and artificial intelligence, Bradders, have you bought one yet... for your friend?
We have a lifetime of training or “programming” that goes into our minds: decades of object tracking, collision avoidance, seeing cause & effect etc. On top of that we have hard-programmed reflexes. Computers can be programmed, over a very long time, to have the same wealth of knowledge we do. Unfortunately, when the programming isn’t there, or there’s a situation that doesn’t fit the bill, the computer can’t resort to basic instinct. That kind of judgment and thought process requires AI.

For all this talk of AI, none of these systems are actually AI in the true sense. True AI would not have to be programmed about whether to crash into the school kids or the kitten factory. Rightly or wrongly, it would make a snap decision based on its own learning experiences and its own views on morality. As the ending of the episode showed, they will now have to go and program the cold-tyre scenario in (or more likely they won’t bother and will just use warmers). With true AI, they could simply leave the car to make its own mistakes and learn the limit of the tyres as a human would. That’s simply not possible. You can’t create a conscience; we’re simply cheating by programming it as in-depth as we can.
Disclaimer - I haven't seen the programme so this could end up in the off topic thread. I did some research in AI as a post grad, on expert systems. It was a long time ago and I've not kept up.

Back then expert systems had a narrow knowledge domain. They were extreme specialists created for specific, complex problems. There was no intelligence, just a different kind of data processing, sometimes called an inference engine. They could give the illusion of intelligence in their knowledge domain, but all that meant is that they'd produce the same outcome as a human expert. This wasn't really surprising because we'd use human experts to help create the expert systems.

At that time, neural nets were kicking off and they had the ability to build concepts from positive and negative examples. For instance, show it different types of chair as positive examples and things that are similar to chairs but not actually chairs as negative examples. After enough examples (or "training") it had a grasp of "chairness" and you could show it an arbitrary chair or non-chair and it would label it correctly. Big deal, right? Well, a neural net could be used to learn about tyre grip given the right sensors. You'd probably get through a few bikes while it learned the concept though. The point is, given the right sensors and enough time (and patience), a neural net could learn just about any narrow domain task.

The combination of neural nets and expert systems can be impressive, but step outside of the knowledge domain and they are completely useless. They can give intelligent outcomes, but is that the same thing as being intelligent? I've heard it argued both ways. One of the worst was Donald Michie trying to convince us that a thermostat was intelligent because it "knew" when to turn the heating on and off. Dick! To me, that was a huge climb-down from the early, bold claims of what AI researchers originally promised.
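For anyone curious, the "chairness" training idea above can be sketched with a single artificial neuron (a perceptron), the simplest building block of a neural net. This is purely illustrative: the features and examples are made up, and a real net would use many neurons and far richer inputs, but the mechanism — nudging weights after each positive or negative example until it labels things correctly — is the same.

```python
# Toy "chairness" classifier: one artificial neuron (a perceptron).
# Features are invented for illustration: [has_legs, has_seat, has_back, has_wheels]
# Label 1 = chair (positive example), 0 = not a chair (negative example).
training_data = [
    ([1, 1, 1, 0], 1),  # dining chair
    ([1, 1, 1, 1], 1),  # office chair
    ([1, 1, 0, 0], 0),  # stool: near the boundary, but a negative example
    ([0, 1, 0, 1], 0),  # skateboard
    ([1, 0, 0, 0], 0),  # table leg
]

weights = [0.0, 0.0, 0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate: how hard each mistake nudges the weights

def predict(features):
    # Weighted sum of the features; fire (say "chair") if it's positive.
    total = bias + sum(w * f for w, f in zip(weights, features))
    return 1 if total > 0 else 0

# "Training": repeated exposure to the examples. Each wrong answer
# shifts the weights a little toward the correct label.
for _ in range(50):
    for features, label in training_data:
        error = label - predict(features)
        for i in range(len(weights)):
            weights[i] += lr * error * features[i]
        bias += lr * error

print(predict([1, 1, 1, 0]))  # chair-shaped input  -> 1
print(predict([1, 1, 0, 0]))  # stool-shaped input  -> 0
```

After training it has, in effect, discovered that the back is what separates the chairs from the non-chairs in this tiny made-up set — which also shows the narrow-domain point: show it anything outside these four features and it has no idea.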
Does it come down to whether you view intelligence as the process or the outcome? I'm not sure.
When I were t'lad we thought black-screen watches were amazing: press the button and red LED numbers came up and showed you the time. Digital calculators could be turned upside down to spell "shell oil" or "boobies". Now I'm no longer t'lad, and if you ask da yoof what the time is, or the most basic equation, most will reach for their smartphone.

Highly intelligent robots based on algorithms will almost certainly replace many of us in many of the jobs we do. As for AI interacting on a human level, I think for now it is more likely, as JH said, that we're simply cheating by programming it as in-depth as we can. Computers, after all, are still largely yes/no machines.

I mentioned mobiles because as we become more lazy/dependent on machines, our own levels of interaction are lowered to a level that autobots may well serve our needs. Be honest, whether you would do it or not: how many thought, if thinking sex bots were mainstream, what would be the point of relationships? I say this as people in general seem less willing to compromise these days, seeing the world of one as the most important. On that basis a robotic companion who happens to be a sex machine could seem to be the answer, almost like a humanoid Siri or Alexa. We are walking lazily, selfishly into technology where being human is seen as the weakest link. I do feel we need to walk a bit slower and think it through a bit more before we take too many steps; not all steps forward are positive.
Siri is the most wank thing in the whole of existence. I’ve given up on it but occasionally am forced to use it on the bike (intercom). “Call Jenna”... “here are some places I found for Henna”. In a sex robot, it’d be a nightmare. “Let’s do doggy style”... “ok, let’s do pedophile”
I watched the programme and was shocked at how poorly the race car handled stuff like "grip" and "steering input". I guess automotive automation and AI are still in their infancy and have a long way to go to even mimic the sensory perception of a human, let alone surpass it (in this example). Production processes that have strictly defined boundaries and routines are a different kettle of fish altogether (the Anchor example above). There are theories that our brain is little more than a biological computer that, in time, will be digitised into a computer-based model (along with the "soul" and "feelings" etc.). If this is correct (and more scientists are saying that it is), it is a matter of "when" and not "if".
Yes, but do androids dream of electric sheep? At which point can a complex machine achieve self-awareness, if it ever can? I know I am self-aware, but I can only take your word that you are.