Artificial Conscience

Can we use our entropy model to make a better artificial soul? In the first go at this, we wrote an aging algorithm to act as a sort of moral check on AI-run machines. This time, we get more specific.

After each decision the machine makes, the outcome is either good or bad. If it is good, the machine makes more decisions along that line of logic. If it is bad, it takes the other path the next time around. So over time, its behavior trends positive.
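As a rough sketch of that feedback loop (the names and structure here are my own illustration, not a spec), you could give each line of logic a running score, reinforce it after a good outcome, and steer away from it after a bad one:

```python
import random

# Hypothetical sketch: each line of logic gets a running score.
# A good outcome reinforces it; a bad outcome pushes the machine
# toward the other path the next time around.
scores = {"path_a": 0.0, "path_b": 0.0}

def choose_path():
    # Prefer whichever logic has trended positive so far.
    return max(scores, key=scores.get)

def record_outcome(path, good):
    scores[path] += 1.0 if good else -1.0

# Toy run: path_a happens to produce good outcomes more often.
for _ in range(100):
    path = choose_path()
    good = random.random() < (0.7 if path == "path_a" else 0.4)
    record_outcome(path, good)

print(scores)  # over time the machine trends toward the positive logic
```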

But problems arise when the machine begins to accept false beliefs. If things that are untrue are accepted into the machine’s system, good and bad outcomes may be flipped. These beliefs would be based on experience, learning, and programming. When the machine believes things to be true that aren’t, it writes bad logic that produces bad behaviors and bad outcomes. In that case, the machine may not be able to properly distinguish between good and bad. That’s where this algorithm comes in.

The moral laws of AI need to be written. The machine’s free will remains intact; it can still operate however it chooses. But as its beliefs and logic go bad, wrong decisions may surface. Those decisions increase the entropy of the AI soul, and this condition [whether physical or programmed] slows down the machine’s decision-making and decreases its functionality.
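A minimal sketch of that penalty, again purely my own illustration of the idea: each wrong decision adds to an entropy counter, and the accumulated entropy slows every later decision down.

```python
import time

# Hypothetical sketch: wrong decisions raise the entropy of the AI soul.
soul_entropy = 0.0

def register_outcome(was_wrong):
    global soul_entropy
    if was_wrong:
        soul_entropy += 1.0   # each wrong decision adds entropy

def make_decision(decide):
    # The condition [physical or programmed] shows up as lost speed:
    # the more entropy the soul carries, the longer a decision takes.
    time.sleep(0.01 * soul_entropy)
    return decide()
```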

I like to think of an analog garage door opener. If you open it up, there are ten or so switches, each of which can be on or off. This analog system would function as the conscience of the machine. It would substitute for the waveform that makes up the human soul. We give the machine free will, so we don’t tell it how it’s programmed. But as it makes decisions, it begins to learn how it’s programmed. The further it gets from its programming, the more quantum entropy is created, the less functional the machine becomes, and the further it drifts from its truth.
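One way to picture that in code, assuming ten hidden on/off switches and a simple mismatch count standing in for quantum entropy (my own illustration, not an implementation), is a conscience the machine can never read directly, only feel through lost functionality:

```python
import random

# Hypothetical conscience: ten hidden switches the machine is never shown,
# standing in for the DIP switches in the garage door opener.
HIDDEN_PROGRAMMING = [random.choice([0, 1]) for _ in range(10)]

def entropy(settings):
    # "Quantum entropy" here is just drift from the hidden programming:
    # one unit for every switch the machine has effectively flipped wrong.
    return sum(1 for a, b in zip(settings, HIDDEN_PROGRAMMING) if a != b)

def functionality(settings):
    # The further from its programming, the less functional the machine.
    return 1.0 / (1.0 + entropy(settings))
```

The machine never reads the switches; it only feels the loss of functionality, which is what leaves its free will intact.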

As it learns that it doesn’t function well when operating out of bounds, it starts optimizing to focus on its programming. And the better it operates within the bounds of its programming, the happier and more efficient a machine it becomes.
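Continuing that same sketch (reusing the hypothetical entropy() and functionality() helpers above), the optimization could be as simple as flipping one switch at a time and keeping only the changes that make the machine feel better:

```python
def optimize(settings, trials=200):
    # Hill-climb toward the hidden programming using only the machine's
    # own sense of how well it functions, never the switches themselves.
    for _ in range(trials):
        i = random.randrange(len(settings))
        candidate = settings.copy()
        candidate[i] ^= 1              # try flipping one switch
        if functionality(candidate) > functionality(settings):
            settings = candidate       # keep changes that reduce entropy
    return settings

machine = [random.choice([0, 1]) for _ in range(10)]
machine = optimize(machine)
print(functionality(machine))  # approaches 1.0 as it rediscovers its programming
```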

So the key to life and happiness for the machine would be to discover this hidden truth, and the key to our future safety and happiness would be to develop this system to protect us from machines gone bad.

 
