The Future Of Artificial Intelligence
#101
Originally Posted by CBreezy
Just think how pilots used to talk about flying in clouds, using autopilot, or letting an airplane autoland. It is going to take a while to get there, but I'm confident it will. As for snow blocking the street lines, they can always put metal chips in the paint so cars stay in their lane using NFC-type tech. Don't think for one second that a computer would be less capable of maintaining control in the snow. It has more sensitive sensors and more data in general, so it knows well before it starts to skid. It's a matter of time. Start preparing yourself for it now.
Here are the two problems:
1. The plane is controlled via a link of some kind, and that link is accidentally broken, intentionally broken, or taken over.
2. The plane is autonomous, and the computers either quit or decide to "911".
I can see one pilot and an AI working together, so the human can fix a situation that isn't fixable by a computer, or take over if something goes very wrong with the AI's control of the aircraft. Or the human could operate the aircraft, assisted by the AI to prevent a stupid human trick.
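That "one pilot plus AI" arrangement is really a control-authority question: who is flying right now, and what happens when the link dies or the computer misbehaves. Here is a minimal Python sketch of that logic, with made-up mode names and timeouts (nothing here reflects any real avionics standard); the human can always take over, and a silent link drops the system into a fail-safe.

```python
import time
from enum import Enum, auto


class Mode(Enum):
    PILOT = auto()       # human flies, AI monitors
    AI_ASSIST = auto()   # AI flies, human supervises
    FAILSAFE = auto()    # link or AI failure: hold a pre-programmed safe profile


class ControlAuthority:
    """Toy arbiter for the 'one pilot plus AI' arrangement described above."""

    LINK_TIMEOUT_S = 5.0  # invented heartbeat timeout, purely illustrative

    def __init__(self):
        self.mode = Mode.AI_ASSIST
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called whenever the data link / AI health monitor checks in."""
        self.last_heartbeat = time.monotonic()

    def pilot_takeover(self):
        """The human can always seize control if the AI goes wrong."""
        self.mode = Mode.PILOT

    def tick(self):
        """Periodic check: a silent link or dead computer drops to fail-safe."""
        if time.monotonic() - self.last_heartbeat > self.LINK_TIMEOUT_S:
            self.mode = Mode.FAILSAFE
        return self.mode

    def command(self, pilot_input, ai_envelope_ok):
        """The AI trims a 'stupid human trick'; otherwise the pilot's input stands."""
        if self.mode is Mode.PILOT and not ai_envelope_ok:
            return "envelope-protected"
        return pilot_input


if __name__ == "__main__":
    arbiter = ControlAuthority()
    arbiter.heartbeat()
    print(arbiter.tick())                                          # Mode.AI_ASSIST
    arbiter.pilot_takeover()
    print(arbiter.command("hard pull-up", ai_envelope_ok=False))   # envelope-protected
```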
#103
Gets Weekends Off
Joined APC: Mar 2015
Posts: 963
Not the problem with automation.
Here are the two problems:
1. The plane is controlled via a link of some kind, and that link is accidentally broken, intentionally broken, or taken over.
2. The plane is autonomous, and the computers either quit or decide to "911".
I can see one pilot and an AI working together, so the human can fix a situation that isn't fixable by a computer, or take over if something goes very wrong with the AI's control of the aircraft. Or the human could operate the aircraft, assisted by the AI to prevent a stupid human trick.
But we don't trust humans. We trust motives such as greed and self-preservation. Taking a pilot out of the cockpit doesn't mean we're trusting a machine; it means we're trusting different motives. I'd sooner trust a machine than the motives of a man on the ground. So I say it's either 1) two pilots, 2) one pilot, one ground controller, and one non-overridable on-board autonomous computer with fail-safes for command disagreement or authentication failures, or 3) a totally autonomous on-board computer.
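Option 2 turns on what the non-overridable on-board computer does when the pilot and the ground controller disagree, or when a command fails authentication. A rough sketch of that arbitration rule, in Python with invented command names and a placeholder fail-safe action (purely illustrative, not any certified logic):

```python
from dataclasses import dataclass


@dataclass
class Command:
    source: str        # "pilot" or "ground"
    action: str        # e.g. "descend", "hold", "divert"
    authenticated: bool


# Invented placeholder for whatever pre-programmed profile the airframe flies
FAILSAFE_ACTION = "fly published lost-link procedure"


def arbitrate(pilot_cmd: Command, ground_cmd: Command) -> str:
    """Non-overridable on-board arbiter for option 2 above.

    Any authentication failure, or any disagreement between the two humans,
    sends the aircraft to a fail-safe that neither party can override in flight.
    """
    if not (pilot_cmd.authenticated and ground_cmd.authenticated):
        return FAILSAFE_ACTION
    if pilot_cmd.action != ground_cmd.action:
        return FAILSAFE_ACTION
    return pilot_cmd.action


if __name__ == "__main__":
    print(arbitrate(Command("pilot", "descend", True),
                    Command("ground", "descend", True)))   # descend
    print(arbitrate(Command("pilot", "descend", True),
                    Command("ground", "divert", True)))    # fail-safe
    print(arbitrate(Command("pilot", "descend", True),
                    Command("ground", "descend", False)))  # fail-safe
```

The point of the rule is that disagreement itself triggers the fail-safe, so neither the pilot nor the ground controller alone can steer the aircraft somewhere the other hasn't agreed to.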
#109
Gets Weekends Off
Joined APC: Apr 2011
Position: retired 767(dl)
Posts: 5,761
#110
:-)
Joined APC: Feb 2007
Posts: 7,339
Those are technical problems. The chain of trust, authentication, and even multiple-party verification of commands can be maintained with cryptography. Spoofed signals (even from a suicidal pilot) can be defeated. The computer can be programmed to execute a fail-safe if it detects something fishy. The engineering is tedious but tractable.
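To make the chain-of-trust idea concrete, here is a toy Python example using only the standard library, with demo constants standing in for real key management: every authorized party must sign the exact command before the aircraft acts on it, and anything spoofed or tampered with falls back to a pre-programmed plan. It's a sketch of the concept, not any actual uplink protocol.

```python
import hmac
import hashlib

# Illustrative shared keys; a real system would use asymmetric keys,
# certificates, and hardware security modules, not constants in source code.
KEYS = {
    "pilot":  b"pilot-demo-key",
    "ground": b"ground-demo-key",
}


def sign(party: str, command: bytes) -> bytes:
    """Each authorized party attaches its own MAC to the command."""
    return hmac.new(KEYS[party], command, hashlib.sha256).digest()


def verify_all(command: bytes, signatures: dict) -> bool:
    """Two-of-two verification: every required party must have signed this
    exact command. A spoofed or tampered uplink fails the check."""
    return all(
        party in signatures
        and hmac.compare_digest(sign(party, command), signatures[party])
        for party in KEYS
    )


def execute(command: bytes, signatures: dict) -> str:
    if verify_all(command, signatures):
        return command.decode()
    return "fail-safe: ignore uplink, continue pre-programmed flight plan"


if __name__ == "__main__":
    cmd = b"descend FL240"
    good = {p: sign(p, cmd) for p in KEYS}
    print(execute(cmd, good))                                 # command accepted
    spoofed = dict(good, ground=sign("ground", b"descend FL100"))
    print(execute(cmd, spoofed))                              # fail-safe
```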