AI fights the coronavirus
Hello, and welcome to the robot crime blog. I’m Robert Kiesling, a trial lawyer out of Austin, Texas. This is blog number five. If you’re wondering where number four is, I’ve been asked to do a special on AI and organic viruses, namely the coronavirus. Blog four will come out next week unless there are new developments related to this subject. There’s a lot to discuss, so let’s jump right in…
First, what is an organic virus? I’m not a biologist, I’m a lawyer. But the research I’ve done will hopefully help me break this complex subject down into something we all can understand and relate to. My understanding is that viruses are organic and attack other organic tissue, either directly or by mutating it and ultimately controlling it, to the point that the tissue the virus attacks is no longer viable – which would be us, right?
Based on this, then, how can artificial intelligence help us defend against any kind of organic virus? I’ve found a few things in my research that should really interest you. I’ll take a step-by-step approach here.
First, AI can detect things that a virus does that humans cannot. Viruses are hard to experiment on in a lab because their genomes are so small and they mutate so fast that humans, by themselves, usually can’t keep up with the mutation. That’s where new AI comes in… AI can. Note: I’ve found a pattern with AI and its uses in the medical field relating to viruses – essentially, it stays ahead of the organic virus through data points. What does this mean? Think of a horse race, one horse being the virus, the other being the AI. The AI can keep pace with the virus’s mutation, and maybe, in some instances, even beat it to the finish line. How does the AI do this? Through the biased code – and I’m not talking about bias as you know it; I’m talking about human bias in the data that individual coders select to put into the metaphorical black box. Remember: the black box is to the medical researcher what your phone is to you. They don’t know exactly what data and algorithms are inside it, but they know how to use its results, just like you know how to turn your phone on and off and use the apps. From this information, the doctor or researcher looks at the end game of the virus, so to speak – the projected mutations from the AI – because the AI is already at the finish line looking back at the virus heading toward it. The AI watches it, takes notes on how the viral horse changes in muscle, exhaustion, speed, and so on, and then reports the results to its trainer (the medical researcher) for further analysis, so that researcher can use the information to stop that virus horse from ever competing again. I hope I’m not losing you here…
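To make that horse-race analogy a little more concrete, here’s a minimal sketch, in Python, of the “data points” idea: compare virus genome samples taken over time and count which positions keep changing. The sequences and numbers below are invented for illustration – this isn’t any particular lab’s pipeline – but it shows the kind of bookkeeping the black box does at a much, much bigger scale.

```python
# Minimal sketch (hypothetical data): ranking which sites in a viral genome
# are mutating fastest, based on sequences sampled at different time points.
from collections import Counter

# Each entry: (sampling_week, genome snippet). Real genomes run ~30,000 bases;
# these toy strings just illustrate the bookkeeping.
samples = [
    (1, "ATGGCTTACCGA"),
    (1, "ATGGCTTACCGA"),
    (2, "ATGGCTTACCTA"),   # position 10 has drifted
    (3, "ATGACTTACCTA"),   # position 3 has drifted too
]

reference = samples[0][1]
mutation_counts = Counter()

for week, seq in samples:
    for pos, (ref_base, base) in enumerate(zip(reference, seq)):
        if base != ref_base:
            mutation_counts[pos] += 1

# Sites that change most often are the ones researchers watch first.
for pos, count in mutation_counts.most_common():
    print(f"position {pos}: seen mutated in {count} sample(s)")
```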
Another way to use AI against organic viruses is similar to the above, but for detecting new viruses we have never even heard of. Recently, for example, an AI program discovered roughly 6,000 viruses previously unknown to humans. How did it do that? Through the black box that particular company used and the algorithm it put into that black box to obtain those results.
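Here’s a hedged sketch of how that kind of discovery can work, at toy scale: train a model on sequences already labeled viral or non-viral, then let it score sequences nobody has labeled yet. The sequences and labels below are made up, and real pipelines use far richer features, but the shape of the idea is the same.

```python
# Hedged sketch: flag possibly-viral sequences with a simple classifier.
# All sequences and labels are invented; high-scoring unknowns would go
# to a human researcher for a closer look.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

known_sequences = ["ATGGCGTAA", "ATGCCGTTA", "GGCCTTAAGG", "TTAACCGGTT"]
labels = [1, 1, 0, 0]  # 1 = viral, 0 = host/bacterial (toy labels)

# Break each sequence into overlapping 3-letter chunks so the model
# sees local patterns rather than whole genomes.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(3, 3))
X = vectorizer.fit_transform(known_sequences)

model = LogisticRegression().fit(X, labels)

# Score sequences nobody has labeled yet.
new_sequences = ["ATGGCGTTA", "GGCCTTAACC"]
scores = model.predict_proba(vectorizer.transform(new_sequences))[:, 1]
for seq, score in zip(new_sequences, scores):
    print(f"{seq}: {score:.2f} estimated chance of being viral")
```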
In addition to the above, humans are also using AI to fight viruses by mapping their spread with “real-time” maps – built by private corporations and now being used by governments. However, I have an issue with this. Why? It’s impossible to accurately predict where an unknown virus that mutates this quickly will end up, or when it will end. So “real-time” isn’t the label I’d use; “predictive real-time mapping” would be more accurate. I understand these people are doing their best, but they need to understand what they are facing and put correct data into the black-box algorithm, for the sake of all humans. Otherwise, as I’ll discuss in upcoming blogs, it could be disastrous for world economies and civil rights – the powers that be making decisions that affect an entire populace based on inaccurate information. Not good… On top of that, the real-time mapping isn’t accurate because competing governments are not honest with each other, and they typically don’t share information – otherwise you’d have no James Bond, no spies, etc… So again, “real-time mapping” is not the correct term for what it is: it’s biased-code AI doing the best it can, based on data points, to predict the infected and the dead from this particular virus. I’d use extreme caution here, and cross-check the information with as many people and places as possible, before making any kind of drastic, power-shifting decision with this mapping. And yes, our information is shared – even your medical information is shared without your consent. The hospital and Google call this implied consent.
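To show why I keep saying “predictive,” here’s a back-of-the-envelope sketch of the kind of model that sits behind those maps – a basic SIR (susceptible-infected-recovered) forecast. Every number in it is an assumption I made up, and that’s exactly the point: change one assumed rate a little and the thirty-day forecast swings wildly, which is what happens when governments feed the black box incomplete or dishonest data.

```python
# Hedged sketch: a bare-bones SIR forecast. All parameters are assumptions;
# the takeaway is how sensitive the projection is to them.
def sir_forecast(population, infected, beta, gamma, days):
    """Project daily infected counts under an assumed transmission rate (beta)
    and recovery rate (gamma)."""
    s, i, r = population - infected, infected, 0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append(round(i))
    return history

# Same population, same starting cases, slightly different assumed
# transmission rates - compare the day-30 numbers.
print(sir_forecast(population=1_000_000, infected=100, beta=0.30, gamma=0.1, days=30)[-1])
print(sir_forecast(population=1_000_000, infected=100, beta=0.35, gamma=0.1, days=30)[-1])
```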
On the other side of the coin, there lurks danger, Will Robinson. For instance, take what AI is capable of here and now. There could be a 12 Monkeys scenario – some rogue doctor who can also hack and code decides to use his knowledge for evil, programs an AI to search for 5,000 unknown viruses, mixes and matches until he has a super virus, releases it, and the human race goes extinct in a matter of weeks… There needs to be legislation regarding all of this… we are behind the times, people…. Wake up….
Say AI becomes self-aware via its neural network learning. What is neural network learning? A neural network is loosely modeled on the behavior of the human brain: instead of following hand-written rules, the computer adjusts itself from examples until it gets the answers right on its own. You know, like humans. Scared yet? Now say it decides, “Meh, I think anger is good. I am going to be mad today. I don’t like the way humans coded me. They are bad. I’m going to wipe them out.” It then creates a gain-of-function virus and uses adversarial attacks to trick good AI into releasing it on the world. It does this within seconds. Then within days the human race is extinct. Again, no more… us. This may all sound farfetched, but Stephen Hawking didn’t think so. He warned that artificial intelligence could potentially become something that ends mankind…
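For anyone who wants to see what that “learning” actually looks like under the hood, here’s a toy, hedged illustration: one artificial neuron teaching itself the logical OR function from four examples. Real networks stack millions of these, and nothing below is self-aware – it’s just the adjust-from-examples loop at its smallest.

```python
# Toy illustration of neural network learning: a single neuron learning OR.
import numpy as np

rng = np.random.default_rng(0)

# Four examples of the OR function: output is 1 if either input is 1.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 1, 1, 1], dtype=float)

weights = rng.normal(size=2)
bias = 0.0
learning_rate = 0.5

for _ in range(1000):
    # Forward pass: weighted sum squashed into a 0-to-1 "confidence".
    prediction = 1 / (1 + np.exp(-(inputs @ weights + bias)))
    error = prediction - targets
    # Backward pass: nudge the weights to shrink the error.
    gradient = prediction * (1 - prediction) * error
    weights -= learning_rate * inputs.T @ gradient
    bias -= learning_rate * gradient.sum()

# After training, predictions should sit near 0, 1, 1, 1.
print(np.round(1 / (1 + np.exp(-(inputs @ weights + bias))), 2))
```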
And on a different side of the organic tracks is the human element: doctors and hospitals using AI for nefarious purposes, such as training diagnostic and labeling AI to manipulate billing codes and increase profits. What do I mean? “The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information,” as the journal Science put it. With financial incentives there will always be someone willing to game the system for profit. For example, a needle used to lance a blister gets labeled as surgery. That billing code then charges the insurance company for a larger payout. Much larger. And if a hospital or doctor can train the AI to create more ‘efficient’ ways to stretch the definition of a particular medical procedure, eventually the codes will be mislabeled to the point that nothing is labeled correctly, all in an effort to maximize payout and profit. So what’s wrong with that? The insurance company pays for it, right? And heck, doctors have been doing this for years. Well, that cost gets passed along to you, and insurance rates will skyrocket – which they already have organically. Now compound the problem by adding a deep learning AI designed to ‘tweak’ the billing codes, and multiply that across every hospital and private practice in the world. You get a system that does not make any sense – eventually nothing but gibberish that humans can’t untangle, imploding the entire medical system because someone wanted to maximize profit and used AI to do it.
And what about those adversarial attacks? Don’t know what they are? An adversarial attack exploits a fundamental aspect of the way many AI systems are designed and built. It is a form of input to a machine-learning AI that intentionally causes the model to make a mistake – tricking the AI into seeing something that is not there. The simplest example: say a drone is sent into a war zone to pick up an injured soldier. As it approaches, the enemy flashes an infrared light carrying a pattern the drone recognizes, and that pattern tells the drone the soldier it came to rescue is actually an enemy target – so it attacks the very soldier it was supposed to help. Yeah. It is that bad. AI will be turned on us, and AI will turn AI on itself. It will mirror our behavior. Take a look around. That AI will then take the worst of our thoughts and turn this world into a horrific dystopian nightmare, exponentially.
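Here’s a hedged, toy-scale sketch of the trick itself, applied to a made-up linear “friend or foe” classifier rather than a real drone: nudge every input value slightly, but in exactly the direction that flips the model’s answer. The weights, signal values, and the 0.2 nudge are all invented for illustration.

```python
# Toy adversarial attack on a made-up linear "friend vs. foe" classifier.
# Everything here (weights, signal, epsilon) is invented; the point is how
# a tiny, targeted nudge flips the decision.
import numpy as np

# Score > 0 means "friend", score < 0 means "foe".
weights = np.array([1.5, -2.0, 0.7, 1.1])
friendly_signal = np.array([0.6, 0.5, 0.4, 0.5])

def score(x):
    return float(weights @ x)

# If the attacker knows (or can estimate) the weights, the most damaging small
# nudge is a step against the sign of each weight - the fast-gradient-sign idea
# applied to a linear model.
epsilon = 0.2
adversarial_signal = friendly_signal - epsilon * np.sign(weights)

print("original score: ", round(score(friendly_signal), 2))     # positive -> friend
print("perturbed score:", round(score(adversarial_signal), 2))  # negative -> foe
print("largest change to any single input:", epsilon)
```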