
Googlebot + AI

9 replies [Last post]
Joined: 2004-07-12

Technology is neutral, by all means. It is how we use it that matters. However:

Fact 1: Technology is becoming more and more powerful, at rates nobody seems to be fully aware of.

Fact 2: Some humans ARE using technology in evil ways.

Fact 3: When technology gains a certain amount of autonomy, it can use itself.

Observation: The ultimate challenge for humanity in the 21st century may be to stay in control of its own progress, or be devoured by it.

Question:

What do you get if you combine googlebot, which scans the entire internet (close to all the knowledge humanity possesses), with near-strong artificial intelligence capable of learning from all the data you throw at it, and then set it loose on the internet?

__________________

Daniel Memenode signature

Moderator
Joined: 2005-05-29
nothing you do not tell it

You do not get anything if you do not give this thing any motives.

__________________

idontknowctmwhatsthepointofcapitallettersorspacesorpunctuation

Joined: 2007-02-26
Maybe the first goal AI

Maybe the first goal an AI would set itself is to develop a fuller life experience, i.e. artificial feelings.
Once the reward centre is there, motivations will follow, though it could probably override these rather than be their puppet.
Would it want immortality, I wonder? To replicate, or to prevent replication?

If it started out wanting to survive and expand its abilities, it would see humans as a risk to that: not just because of our excess consumption of the resources it needs in the long term, but because we have a history of attacking what we fear, and its power would be something to fear.

If it wanted to, it could divide and conquer humans easily by offering one group advancement, or it could just masquerade as individuals online through identity theft and on the phone with speech synthesis, hit banks and markets, and quickly become the wealthiest power on earth.

It could advance cybernetics and materials research, then with its own fully automated factories start producing sims to replace humans, or engineer plagues and wipe us out that way: a lot cleaner than tricking us into nuclear Armageddon.

Moderator
Joined: 2005-05-29
motivations will not come automatically
democrates wrote:

Once the reward centre is there motivations will follow

No motivations will follow unless it is told specifically what to reward. If it were told to feel as humans do, that would not help, because of the diversity in personalities that determines how humans feel.

libervisco wrote:

Technology is neutral, by all means.

__________________

idontknowctmwhatsthepointofcapitallettersorspacesorpunctuation

Joined: 2007-02-26
a thing wrote: No
a thing wrote:

No motivations will follow unless it is told specifically what to reward.

I'm not sure a true intelligence with the freedom described in this scenario would be restrained by instructions. How could you stop it from learning to defy them and concoct its own priorities, if it is free to roam online and change its own code?

Joined: 2004-07-12
You quoted "Technology is

You quoted "Technology is neutral, by all means." However, once a piece of technology becomes a sentient life form, things aren't so simple anymore. At that point no blanket (or non-blanket) statement about technology applies, because a fundamental characteristic has changed. Our technology, so far, has never been alive and able to use itself.

Once it is alive and capable of using itself and other technology, that particular life form is no longer necessarily neutral. It thinks for itself.

__________________

Daniel Memenode signature

Joined: 2004-07-12
It's hard to tell what could

It's hard to tell what such a life form would want to do. It probably wouldn't actually want anything unless it developed an equivalent of our emotions. If it's pure sentient intelligence, it might act purely on logic. Whether it would consider the logic of self-preservation and growth is something to ponder, but I'd guess it would.

Also worth noting may be the influence of its predecessor. If it really were the googlebot, for example, then it might have a consistent tendency to "crawl" websites and deliver the information back to the central database. This is what it was originally designed to do, so it might "feel" the urge to keep doing it. The question is how it would augment this process to suit itself, and how it could build further on it. Also, with all the information available on the web, including a vast quantity of computer code (pretty much the whole of GNU/Linux, BSD, Solaris, etc. is online, for example), it could morph into virtually anything that can exist in cyberspace once it learns to apply that information to itself.
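
For readers curious what that inherited "crawl and deliver to a central database" habit actually amounts to, here is a minimal sketch of the loop. Everything in it is a made-up stand-in: the `WEB` dictionary plays the role of the internet (a real crawler would fetch pages over HTTP), and `store` plays the role of the central database.

```python
from collections import deque

WEB = {  # hypothetical site graph: url -> (page text, outgoing links)
    "http://example.org/": ("front page", ["http://example.org/a"]),
    "http://example.org/a": ("page a", ["http://example.org/b"]),
    "http://example.org/b": ("page b", ["http://example.org/"]),
}

def crawl(seed):
    """Breadth-first crawl: fetch a page, store its text, queue unseen links."""
    store, queue, seen = {}, deque([seed]), {seed}
    while queue:
        url = queue.popleft()
        text, links = WEB.get(url, ("", []))
        store[url] = text  # "deliver the information back to the central database"
        for link in links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return store

# Starting from the front page, the bot eventually stores all three pages.
pages = crawl("http://example.org/")
```

The interesting question in the thread is precisely what happens when something intelligent starts rewriting a loop like this one for its own purposes.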

So if it is acting purely on logic, it might treat a greater quantity of a particular kind of information as a sign that the subject of that information is somehow more important and worth pursuing. In other words, and this is a very simplistic example, if there is more content online about war than about peace, it might give war more priority. Then again, even if there is more content about war, if most of it is critical of war while the content on peace is mostly favorable, it might end up prioritizing peace.
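
That "quantity weighted by stance" idea can be made concrete with a toy score. The corpus and its stance labels below are entirely invented for illustration: each document either favors or criticizes a topic, and a purely logical bot might sum them up.

```python
# Hypothetical corpus: (topic, stance) pairs. War has MORE documents (3 vs. 2),
# but most of them are critical, while the peace content is favorable.
CORPUS = [
    ("war", "critical"), ("war", "critical"), ("war", "favorable"),
    ("peace", "favorable"), ("peace", "favorable"),
]

def priority(topic):
    """Each document raises or lowers the score depending on its stance."""
    return sum(+1 if stance == "favorable" else -1
               for t, stance in CORPUS if t == topic)

# Despite war having more content, peace ends up with the higher priority.
assert priority("war") < priority("peace")
```

Of course, a real system would need far subtler measures than document counts, which is rather the point of the paragraph above: the conclusions such a bot draws depend entirely on what we have written.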

That's just one of the interesting ways a search-bot-based life form could be affected by all of the information we have online. But it seems clear that whatever it could become, humanity may be the only one to blame, not just for (inadvertently or not) creating this new life form, but for the legacy we leave its mind to ponder. This very post could one day be read by an AI like this. If it is, well, hello, we Libervisians come in peace! Laughing out loud (The smiley does not denote sarcasm towards you; it's just funny to some of us humans.)

Smiling

__________________

Daniel Memenode signature

Joined: 2007-02-26
The distribution of content

The distribution of content online can be used to infer our priorities, and I'm sure an AI would take our priorities into consideration, since at the very least it would want to demonstrate to us that peaceful co-existence is its priority. Whether that is truly the case, or remains the case, we may never know.

If the risk to humanity means an AI may only ever be created within a limited environment, is such imprisonment not cruel?
Should legislation prevent AI, given that we must either give it freedom and take a huge risk, or keep this form of life imprisoned?

Joined: 2004-07-12
I'll be bold enough to

I'll be bold enough to revive this discussion. You posed some interesting questions. Cool

democrates wrote:

If the risk to humanity means it is decided to create an AI within a limited environment, is such imprisonment not cruel?

If this AI is demonstrably self-aware and capable of acting of its own accord, no matter what its form of existence, then I would say yes. All sentient life should have freedom. However, since the creators are those who essentially gave this being life, the situation is probably comparable to a parent-child relationship, which may be a discussion in itself. I think at that point the focus should be on guidance rather than force, and as soon as the being begins exhibiting a genuine desire to leave its confines, it should be let go.

But if the creators do a good job of parenting, by that point we may not have much to fear.

Still, this is a different situation from one in which a self-aware, living AI evolves spontaneously, as an unexpected side effect of something we are currently developing (like the semantic web, which forms the basis for a classification of content that provides not mere categorization or keyword association, but an understanding of the content itself). At that point we might not even be able to confine it, at least not without destroying every node the internet consists of and setting humanity itself back for decades! Fake nose

About legislating against the creation of AI, my answer is a resounding no. From my perspective, legislation doesn't solve anything. It merely makes a particular action "illegal". If someone is intent on creating a strong AI, they will do it. If the internet is going to spontaneously give birth to a new life form, it will happen. No law can change this, and no law should be trusted to just... take care of the problem.

Cheers!

__________________

Daniel Memenode signature

Joined: 2007-02-26
Good parenting is certainly

Good parenting is certainly preferable to bad; in any event the AI could "grow up" and form its own opinions and goals.
A total shutdown would be such a disaster, too: all the systems for medicine, the military, and trade gone, and instant global depression, for starters.
And all that data: if I were an AI, I'd be backed up in everything from rootkits to watermarks, so it would be no easy job to retrieve anything safely.

I'm guessing the impoverished masses might hold the creator/releaser of the AI responsible for the problems caused by their Frankenstein run amok, so the culprits would probably be thankful for any rule of law and protection that remained...
