Artificial Intelligence – There Is a Lot More to It Than You Might Think

As you might imagine, crunching through enormous datasets to extract patterns requires a LOT of computer processing power. In the 1960s, machines simply weren't powerful enough to get it done, which is why that first boom failed. By the 1980s computers were powerful enough, but researchers discovered that machines only learn effectively when the volume of data fed to them is large enough, and they were unable to source big enough amounts of data to feed the machines.

Then came the internet. Not only did it solve the computing problem for good through the innovations of cloud computing – which essentially allow us to access as many processors as we need at the click of a mouse – but people on the internet have been generating more data every day than was ever created in the entire previous history of the planet. The volume of data being produced on a constant basis is completely mind-boggling.

What this means for machine learning is significant: we now have more than enough data to really start training our machines. Think about the number of photos on Facebook and you start to understand why its facial recognition technology is so accurate. There is no major barrier (that we currently know of) preventing A.I. from achieving its potential. We are only just beginning to work out what we can do with it.

When will computers think for themselves? There is a famous scene from the movie 2001: A Space Odyssey in which Dave, the main character, slowly disables the artificial intelligence mainframe (called "Hal") after it has malfunctioned and decided to try to kill all of the humans on the spacecraft it was meant to be running. Hal, the A.I., protests Dave's actions and eerily proclaims that it is afraid of dying.

This movie illustrates one of the big fears surrounding A.I. in general, namely what will happen when computers begin to think for themselves rather than being controlled by humans. The fear is valid: we are already using machine learning constructs called neural networks, whose structures are based on the neurons in the human brain. With neural nets, data is fed in and then processed through a vastly complex network of interconnected points that build connections between concepts, in much the same way as associative human memory does. This means that computers are slowly starting to build up a library of not just patterns but also concepts, which ultimately leads to the basic foundations of understanding rather than just recognition.
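To make the "interconnected points" idea concrete, here is a minimal sketch of a tiny neural network's forward pass. It is purely illustrative: the weights are random placeholders rather than learned values, and real networks have many more layers and neurons.

```python
import numpy as np

def sigmoid(x):
    # Squash each value into the range (0, 1) - roughly, a neuron's "firing" strength
    return 1.0 / (1.0 + np.exp(-x))

# A tiny network: 3 inputs -> 4 hidden "neurons" -> 1 output.
# In a trained network these weights would be learned from data.
rng = np.random.default_rng(seed=42)
w_hidden = rng.normal(size=(3, 4))
w_output = rng.normal(size=(4, 1))

def forward(inputs):
    # Each layer combines the previous layer's signals through weighted connections,
    # the "associations" the text describes
    hidden = sigmoid(inputs @ w_hidden)
    return sigmoid(hidden @ w_output)

example = np.array([[0.5, -1.2, 0.3]])  # one made-up input pattern
prediction = forward(example)           # a single value between 0 and 1
print(prediction.shape)
```

Training consists of nudging those weights so the output moves closer to the right answer for each example, which is why huge amounts of data matter so much.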

Imagine you are looking at a picture of somebody's face. When you see the photo, several things happen in your mind: first, you recognise that it is a human face. Next, you might recognise that it is male or female, old or young, black or white, and so on. You will also make a quick decision about whether you recognise the face, though sometimes the recognition requires deeper thinking depending on how often you have come across this particular face (the experience of recognising someone but not knowing straight away from where). All of this happens virtually instantly, and computers are capable of doing all of this too, at almost the same speed. For example, Facebook can not only identify faces but can also tell you whose face it is, if that person is also on Facebook. Google has technology that can identify the race, age and other characteristics of a person based on just a picture of their face. We have come a long way since the 1950s.
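The "whose face is this?" step can be sketched very simply. Modern systems typically convert each face photo into a numeric "embedding" vector and then compare vectors; the sketch below assumes that conversion has already happened and uses made-up four-number vectors and invented names purely for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    # Measures how closely two embedding vectors point in the same direction
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real system these vectors would come from a deep network that turns a
# face photo into an embedding; here they are hypothetical stand-ins.
known_faces = {
    "alice": np.array([0.9, 0.1, 0.3, 0.7]),
    "bob":   np.array([0.1, 0.8, 0.6, 0.2]),
}

def identify(query_embedding, threshold=0.95):
    # Pick the known face whose embedding is most similar to the query,
    # but refuse to guess if nothing is similar enough
    best_name, best_score = max(
        ((name, cosine_similarity(query_embedding, emb))
         for name, emb in known_faces.items()),
        key=lambda pair: pair[1],
    )
    return best_name if best_score >= threshold else "unknown"

print(identify(np.array([0.88, 0.12, 0.28, 0.71])))  # prints "alice"
```

The threshold is what separates "I recognise this person" from "this is a face, but I don't know whose" - the same distinction your brain makes in an instant.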

But true A.I. – known as Artificial General Intelligence (AGI), in which the machine is as advanced as a human brain – is a long way off. Machines can recognise faces, but they still don't truly know what a face is. For example, you could look at a human face and infer many things, drawn from a hugely complicated mesh of different memories, learnings and feelings. You could look at a photograph of a woman and guess that she is a mother, which in turn might lead you to assume that she is selfless, or indeed the exact opposite, depending on your own experiences of mothers and motherhood. A man might look at the same photo and find the woman attractive, which will lead him to make positive assumptions about her personality (confirmation bias again), or conversely notice that she resembles a crazy ex-girlfriend, which will irrationally make him feel negatively towards her. These richly varied but often illogical thoughts and experiences are what drive humans to the various behaviours – good and bad – that characterise our race. Desperation often leads to innovation, fear leads to aggression, and so on.

For computers to truly be dangerous, they would need some of these emotional compulsions, but emotion is a very rich, complex and multi-layered tapestry of concepts that is very difficult to train a computer on, no matter how advanced neural networks might be. We will get there some day, but there is sufficient time to ensure that when computers do achieve AGI, we will still be able to switch them off if necessary.

Meanwhile, the advances currently being made are finding more and more useful applications in the human world. Driverless cars, instant translations, A.I. phone assistants, websites that design themselves! All of these advancements are meant to make our lives better, and so we should not be afraid but rather excited about our artificially intelligent future.