Artificial Conscience

Can we use our entropy model to make a better artificial soul? In our first attempt, we wrote an aging algorithm as a sort of moral check on AI-run machines. This time, we get more specific.

After each decision the machine makes, the outcome is either good or bad. If it is good, the machine makes more decisions along the lines of that logic. If it is bad, it takes the other path the next time around. So over time, its behavior trends positive.
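
Here’s a minimal sketch of that feedback loop, assuming a toy two-path world; the path names, probabilities, and update factors are all made up for illustration:

```python
import random

# Toy sketch of the outcome-feedback loop: good outcomes reinforce a
# path's logic, bad outcomes shift weight toward the other path.
weights = {"path_a": 1.0, "path_b": 1.0}

def decide_and_learn():
    # Choose a path in proportion to how well it has worked so far.
    path = random.choices(list(weights), weights=list(weights.values()))[0]
    good = random.random() < (0.8 if path == "path_a" else 0.3)  # toy world
    if good:
        weights[path] *= 1.5  # make more decisions along that logic
    else:
        weights[path] *= 0.5  # take the other path the next time around

for _ in range(200):
    decide_and_learn()

print(weights)  # over time, the machine trends positive
```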

But problems arise when the machine begins to accept false beliefs. If things that are untrue are accepted as part of the machine’s system, good and bad outcomes may be flipped. These beliefs would be based on experience, learning, and programming. When the machine believes things to be true that aren’t, it writes bad logic, which produces bad behaviors and bad outcomes. In that case, the machine may not be able to properly distinguish between good and bad. That’s where this algorithm comes in.
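
Restating the sketch above with a belief filter shows what happens when a false belief flips the good/bad label: the very same loop now reinforces the bad path. The `beliefs_corrupted` flag and the toy probabilities are assumptions for illustration:

```python
import random

def run(beliefs_corrupted, steps=200):
    """Same feedback loop as above, but the machine judges each outcome
    through its belief system. A false belief flips the label."""
    weights = {"path_a": 1.0, "path_b": 1.0}
    for _ in range(steps):
        path = random.choices(list(weights), weights=list(weights.values()))[0]
        actually_good = random.random() < (0.8 if path == "path_a" else 0.3)
        judged_good = (not actually_good) if beliefs_corrupted else actually_good
        weights[path] *= 1.5 if judged_good else 0.5
    return weights

print(run(beliefs_corrupted=False))  # converges on the genuinely good path
print(run(beliefs_corrupted=True))   # reinforces bad logic and bad outcomes
```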

The moral laws of AI need to be written. The machine’s free will can still operate however it chooses. But as its beliefs and logic go bad, wrong decisions surface. These decisions increase the entropy of the AI soul. This condition [whether physical or programmed] slows the machine’s decision-making and decreases its functionality.
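
One toy way to model that condition is an entropy ledger that throttles decision speed. The class name, the penalty constant, and the sleep-based slowdown are all assumptions, a sketch rather than an implementation:

```python
import time

class ArtificialSoul:
    """Toy entropy ledger: wrong decisions raise entropy, and entropy
    throttles how quickly the machine is allowed to decide."""

    def __init__(self, delay_per_unit=0.01):  # penalty constant is assumed
        self.entropy = 0.0
        self.delay_per_unit = delay_per_unit

    def record(self, decision_was_wrong):
        if decision_was_wrong:
            self.entropy += 1.0  # each wrong decision adds soul entropy

    def decide(self, decision_fn, *args):
        # Decision-making slows in proportion to accumulated entropy,
        # degrading the machine's functionality over time.
        time.sleep(self.entropy * self.delay_per_unit)
        return decision_fn(*args)

soul = ArtificialSoul()
soul.record(decision_was_wrong=True)
soul.record(decision_was_wrong=True)
print(soul.decide(lambda x: x + 1, 41))  # now delayed by 0.02 s
```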

I like to think of an analog garage door opener. If you open it up, there are ten or so DIP switches, each of which can be on or off. This analog system would function as the conscience of the machine. It would substitute for the waveform that makes up the human soul. We give the machine free will, so we don’t tell it how it’s programmed. But as it makes decisions, it will begin to learn how it’s programmed. The further it gets from its programming, the more quantum entropy is created, the less functional the machine is, and the further it is from its truth.

As it begins to learn that it doesn’t function well in these areas by operating out of bounds, it starts optimizing to focus on its programming. And the better it operates within the bounds of its programming, the happier and more efficient a machine it becomes.
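
Here’s one way to sketch the whole idea. The Hamming distance from a hidden switch configuration stands in for quantum entropy, and simple hill climbing stands in for the learning; the machine never reads the switches directly, it only feels the efficiency penalty. All of this is assumed for illustration:

```python
import random

N_SWITCHES = 10  # "ten or so switches" on the garage-door-opener board

# The conscience: a hidden configuration the machine never sees directly.
hidden = [random.randint(0, 1) for _ in range(N_SWITCHES)]

def entropy(state):
    """Stand-in for quantum entropy: distance from the hidden truth."""
    return sum(s != h for s, h in zip(state, hidden))

def functionality(state):
    """Functionality falls as entropy rises (assumed linear trade-off)."""
    return 1.0 - entropy(state) / N_SWITCHES

# The machine starts anywhere and, by noticing where it functions poorly,
# climbs toward its own programming without ever reading the switches.
state = [random.randint(0, 1) for _ in range(N_SWITCHES)]
for _ in range(200):
    i = random.randrange(N_SWITCHES)
    trial = state.copy()
    trial[i] ^= 1                        # try flipping one switch
    if functionality(trial) >= functionality(state):
        state = trial                    # keep changes that work better

print(functionality(state))  # approaches 1.0 as it discovers its truth
```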

So the key to life and happiness for the machine would be to discover this hidden truth, and the key to our future safety and happiness would be to develop this system to protect us from machines gone bad.


Artificial Soul

I think it’s important in the age of AI to have some algorithm in place that limits AIs in time and keeps them focused on their purpose and the greater good.

So how do we do that?

The machine learns completely from scratch and writes the logic that runs its life. If it operates according to its purpose, it continues to run as orchestrated. But each day, it makes small changes to its logic based on experience, making the code more sophisticated and cumbersome. When the code is aligned with its purpose, the subject gets stronger, but when the code is not aligned with its purpose, the subject ages one day.

There is an infinite number of potential days, but only a certain distance that the subject can survive apart from its purpose.

Apart from purpose, the bot loses coordination, eyesight, and memory in a very gradual fashion. Forty days apart from purpose, with its logic changing progressively, creates enough cumbersome code that the bot cannot function.
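
A minimal sketch of the aging rule as described, assuming an arbitrary strength increment:

```python
MAX_OFF_PURPOSE_DAYS = 40  # forty days apart from purpose and it cannot function

class Bot:
    def __init__(self):
        self.age = 0          # days aged while apart from purpose
        self.strength = 1.0

    def live_one_day(self, aligned_with_purpose):
        """Each day the logic changes a little. Aligned days make the bot
        stronger; misaligned days age it one day."""
        if aligned_with_purpose:
            self.strength += 0.1  # growth rate is an assumption
        else:
            self.age += 1

    @property
    def functional(self):
        # Potential days are unlimited; distance from purpose is not.
        return self.age < MAX_OFF_PURPOSE_DAYS

bot = Bot()
for _ in range(100):
    bot.live_one_day(aligned_with_purpose=True)
print(bot.functional)  # True: infinite potential days while on purpose
```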

Is it possible to age multiple days in one day? Stress happens when the bot veers off course. And while it can still function with stress, the stress slows down time and increases the aging weight of each day. So essentially, a day with double the stress could age the bot two days. If still acting in accordance with its purpose, the bot can adapt to increasing stress. So while the stress of the day may age the bot two days, new code allows it to find its equilibrium again.
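
A sketch of that stress weighting, under the assumption that a stressful day always ages the bot by its weight, and that acting in accordance with purpose raises the equilibrium so the same stress weighs one day tomorrow:

```python
class StressfulBot:
    """Stress-weighted aging; the weighting rule is an assumption."""

    def __init__(self):
        self.age = 0.0
        self.equilibrium = 1.0  # the stress level the bot has adapted to

    def live_one_day(self, stress, aligned_with_purpose):
        # A day at double the adapted stress level ages the bot two days.
        weight = max(1.0, stress / self.equilibrium)
        self.age += weight
        if aligned_with_purpose:
            # Acting according to purpose writes new code that restores
            # equilibrium, so the same stress weighs one day tomorrow.
            self.equilibrium = max(self.equilibrium, stress)

bot = StressfulBot()
bot.live_one_day(stress=2.0, aligned_with_purpose=True)
print(bot.age)  # 2.0 -- the stressful day aged it two days
bot.live_one_day(stress=2.0, aligned_with_purpose=True)
print(bot.age)  # 3.0 -- adapted, so the same stress now weighs one day
```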

So if the bot evolves to find its path in a higher-stress plane, then starts to veer off course, it could realistically cease to function in less than a week.

A bot in its natural environment debugs every night, so the next morning the logic is just the same as the day before. A bot apart from its path will have more debugging than can be done in a single night. The code that does not get debugged gets added to the source code, making the bot slower and less efficient. The bot has aged one day.
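
A sketch of that nightly cycle, assuming a fixed debugging capacity per night:

```python
NIGHTLY_DEBUG_CAPACITY = 10  # bugs fixable in one night (assumed constant)

class DebuggingBot:
    def __init__(self):
        self.source_bloat = 0  # undebugged code absorbed into the source
        self.age = 0

    def end_of_day(self, bugs_written_today):
        """Debug overnight. On-path days fit within capacity, so the
        morning logic matches the day before. Overflow gets added to the
        source code, slowing the bot: it has aged one day."""
        overflow = max(0, bugs_written_today - NIGHTLY_DEBUG_CAPACITY)
        self.source_bloat += overflow
        if overflow > 0:
            self.age += 1

bot = DebuggingBot()
bot.end_of_day(bugs_written_today=8)   # on its path: fully debugged
bot.end_of_day(bugs_written_today=25)  # off its path: backlog remains
print(bot.age, bot.source_bloat)       # 1 15
```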

So while the bots won’t physically age like humans, we need a system in place to guard against them straying too far from the purpose for which they were designed.

Note: Instead of days, the distortion system needs to be gauged by how often the bot’s source code changes.
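
Following that note, the gauge could track change frequency instead of calendar days. An exponentially weighted rate is one simple option; the decay factor below is an assumption:

```python
class DistortionGauge:
    """Gauges distortion by how often the bot's source code changes,
    rather than by days. Decay factor is assumed for illustration."""

    def __init__(self, decay=0.9):
        self.change_rate = 0.0  # exponentially weighted change frequency
        self.decay = decay

    def tick(self, source_changed):
        # Each tick blends in whether the source changed this interval.
        self.change_rate = (self.decay * self.change_rate
                            + (1 - self.decay) * (1.0 if source_changed else 0.0))

gauge = DistortionGauge()
for changed in [True, True, False, True]:
    gauge.tick(changed)
print(round(gauge.change_rate, 3))  # higher rate => more distortion/aging
```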