(This is all from a theoretical and unprofessional standpoint.)
The study of Artificial Intelligence is dedicated to creating a "Human Mind" of sorts.
A Human Mind is a machine that can produce logical solutions to complex problems. It can also store these in "Memory" and use them to solve other problems down the road. (Don't worry, things will get more complicated; I wouldn't disappoint, would I?)

So, our AI will begin with an algorithm of sorts that has set reactions to things; call them emotions. Our emotions give us a basic logical code to follow, which can then be edited using the solutions to past problems, or our "Memories". Our memories can store data, but they can also store what we deduced about that data afterwards. For example, say you solved a problem on your math test, and as soon as you hand it in, you realize you got a problem wrong. Your memories record both your answer to the problem and what you thought of it; in this case, you thought it was a mistake.
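That "answer plus judgment" idea can be sketched in a few lines of Python. This is a toy illustration of my own, not a real AI design; every name here (the `Memory` record, the `evaluation` field) is a made-up stand-in for the concept above.

```python
# Hypothetical sketch: each "memory" stores both the answer produced
# and what the solver later concluded about it.
from dataclasses import dataclass

@dataclass
class Memory:
    problem: str      # e.g. a math test question
    answer: str       # what we wrote down
    evaluation: str   # what we deduced afterwards ("correct" / "mistake")

memories = []
memories.append(Memory("2 + 2", "5", "mistake"))  # handed in, then realized it was wrong
memories.append(Memory("3 * 3", "9", "correct"))

# Later problems can consult both the answers and the judgments about them.
mistakes = [m for m in memories if m.evaluation == "mistake"]
print(len(mistakes))  # 1
```

The point is just that the judgment ("that was a mistake") is stored right alongside the raw data, so it can be reprocessed later.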
This mistake is then reprocessed by the algorithm to decipher what you did wrong. It then codes "what you did wrong/right" into the algorithm, telling it not to repeat the mistake, or to repeat the success, respectively. This means that ultimately we could throw infinite problems at it, and soon it should gather a complete knowledge of everything ever. Right? No. Your brain, believe it or not, "glitches", or makes random mistakes. Any other programmers reading this know that true randomness is a big no-no: if you keep searching, even if it takes an infinity, you will find a pattern.

Well, that's no problem, you'd think; we could just try to find the pattern in human error. To which I would reply by pointing out the inhumanity of such an act, as well as the fact that it could be impossible. To perform such an experiment, you would have to lock up a human, with no contact with the outside world, for what could be an infinity. There's the impossible part.
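The reprocessing loop described above can be sketched as a toy program. Everything here is a hypothetical illustration of the idea (a fixed "reaction" that gets overwritten by corrections), not an actual learning algorithm.

```python
# Toy sketch of the feedback loop: the "algorithm" starts with a fixed
# reaction, records outcomes, and edits its rules based on which past
# answers turned out right or wrong. Purely illustrative names throughout.

rules = {}  # problem -> answer the agent will give next time

def solve(problem):
    # Use a learned rule if we have one; otherwise fall back on the
    # fixed reaction (the "emotion"): a blind guess.
    return rules.get(problem, "guess")

def learn(problem, answer, correct_answer):
    # Reprocess the memory: keep what worked, replace what didn't.
    if answer != correct_answer:
        rules[problem] = correct_answer  # "don't do that again"

# First attempt is a wrong guess; the mistake is then coded back in.
first = solve("2 + 2")
learn("2 + 2", first, "4")
second = solve("2 + 2")
print(first, second)  # guess 4
```

Throw the same problem at it twice and the second answer is correct; the catch, as argued above, is that a real brain would sometimes "glitch" and answer randomly anyway.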
Humans are mortal, which means they die and cannot spend an infinity solving problems. And even if we did make that human immortal, our entire galaxy is a ticking time bomb: a massive collision will occur in the future between our galaxy and the Andromeda Galaxy, ending all life in the Milky Way. Here's a bit of a kicker: if we can find our pattern before we are completely obliterated, we can use our AI, in all of its Buddha-like, know-all-see-all enlightenment, to find a solution to the ticking time bomb we call the Milky Way.

So, the million-dollar question: do the ends justify the means? In any situation, do the ends justify the means? I guess we could ask our Buddha AI, if we had him. So what's your input? Do the ends justify the means? And if we have any AI pros out there (by no means am I one), did I represent AI even remotely well? Also, given the tangential nature of this post, will it be removed? I hope not; I spent like 30 minutes on it.