
The (Living) Tribunal?

Darren Reiniger

Since I last wrote about AI and what the future could hold, I want to continue the discussion of resistance and controls: how do we ensure the technology is used for purposes as favourable and positive as possible (yes, for humanity)? For clarity, when I reference AI or technology throughout this blog, I'm referring to a future state of androids that combine artificial intelligence with humanlike robotics. There are so many articles on current AI robots that I'll include one at the bottom of this post.


While I believe most of the resistance to where AI could go stems from fear of change and its impact on one's livelihood, there is also a component to that fear that isn't (solely) economic. Rather, it's based on (1) safety and ethics and (2) the privacy risks of having so much data and capability spread across so many places.


I'll tackle the second aspect first. Over the last decade, it's become increasingly common to read about data breaches and unscrupulous actors (as the term goes) gaining access to private and confidential information. The further advancement of AI will only increase this risk in many ways. Machines and computers will store more information than they do now, and much of it will be synthesized into a relatable format (yes, see interpretive AI) so that anyone who sees it can understand what it says. You won't need a course in SQL or the latest encryption techniques to get to the data. Imagine the increased risk of strangers being able to understand the latest details of your finances, your health, or other personal aspects of your life (home activities, routines). The concerns are real: we've witnessed a surge of scams and breaches over the last decade, and it continues to mount. Is there a solution? More on that in a moment.


The other aspect, safety and ethics, concerns AI and machines (or robots) being trained to provide services as humans have until now. It goes without saying that to provide these services, the machines will need access to data. Some environments are so high-risk that the thought of them being truly compromised makes us shudder; I'm thinking of sectors such as transportation, medical, financial, and food manufacturing, and even infiltration of key government departments and agencies (defense, nuclear power). Rather than playing those scenarios out, I'll start with something many of us can still relate to (though not as much as we could 30 years ago; thanks, ATMs). Think of a bank teller replaced by an android: the moment your card goes in and your PIN is entered, it has the same access to your financial information the teller had. Humans have the ability (some would say weakness) to forget things over time (yes, it comes with age) :). What about robots, where everything they see and hear is stored within them so they can respond adequately? Not only is the data there, but it carries an elevated risk (robots don't necessarily forget). Can a robot be programmed to sense (by vision or sound) the different clicks on a keypad and recognize a PIN? Equally likely, how quickly could a robot test thousands of different PINs to gain access to a bank account? And this is only one scenario. Granted, much of this would require a programmer to build these unscrupulous skills into the robot, but we know it's possible.
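To put rough numbers on that last question, here's a back-of-envelope sketch. This is illustrative arithmetic only: the attempt rate is a pure assumption, not a measurement of any real machine or keypad, and real banking systems lock an account after a handful of failed tries.

```python
combinations = 10 ** 4             # a 4-digit PIN: 0000 through 9999
guesses_per_second = 10            # hypothetical automated attempt rate
minutes = combinations / guesses_per_second / 60
print(f"{combinations:,} PINs at {guesses_per_second}/s: ~{minutes:.0f} minutes")
# ~17 minutes, which is exactly why lockout-after-N-attempts controls exist
```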


So now we're back to the question of controls. As I mentioned in my first post, I do not doubt that AI's capabilities will grow substantially beyond where we are today. That is the nature of evolution. How can we amplify this technology's positive contributions yet limit the risks?


We could attempt to rid the world of all things wrong and illegal, but aside from the lack of agreement on what that constitutes, we also know we are human, and being human comes with the vices and wants that can lead us down dangerous paths. No, humanity isn't going to change just because AI is here doing things that only we once could. If anything, having a "third party" able to perform many of these deeds will enable some people to commit unethical acts more often. They'll view the AI and robots as easy scapegoats, keeping themselves at arm's length from the crime. They'll feel safer.


I go back to the basics of ensuring checks and balances are in place with AI: an "AI code of conduct." With humans, we have our court system, inspections, audits, the court of public opinion, and, last but certainly not least, our own personal belief structure (influenced by those around us) to help govern us. We'll need something similar at the AI level.


Borrowing from Marvel and the MCU (I'm sure people were wondering when the picture tie-in would occur): imagine powerful "tribunal" machines that monitor other machines' activities and can disable (or take offline) any machine showing signals that it is doing something wrong or illegal. These tribunal machines will need incredible capacity (think today's supercomputers x 10^6), relying on satellite or a redundant advanced network, with each one monitoring a million-plus technology systems. They will scan only the most critical sectors yet have an override ability for all technology. A core function, most likely built into all hardware produced, will be one that can't be disabled (i.e., a kill switch). Can it be real-time? Likely not, but it can be reasonably close, minimizing recurring violations of this code of conduct. The human authors behind any such code will still face the usual legal system in their respective country (or an international body).
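To make the pattern concrete, here is a minimal sketch of the monitor-and-kill-switch loop described above. Everything in it (the signal shape, the 0.8 risk threshold, the names) is invented for illustration; no such system or standard exists today.

```python
from dataclasses import dataclass

@dataclass
class ActivitySignal:
    system_id: str
    risk_score: float  # 0.0 (benign) through 1.0 (clear violation)

class TribunalMonitor:
    """Independent watchdog that reviews signals from other machines."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.disabled: set[str] = set()

    def review(self, signal: ActivitySignal) -> None:
        # Near-real-time, not instantaneous: a violation is detected after
        # the fact, then the offending system is taken offline.
        if signal.risk_score >= self.threshold and signal.system_id not in self.disabled:
            self.trip_kill_switch(signal.system_id)

    def trip_kill_switch(self, system_id: str) -> None:
        # Stands in for the non-disableable hardware hook imagined above.
        self.disabled.add(system_id)
        print(f"Kill switch engaged for {system_id}")

monitor = TribunalMonitor()
monitor.review(ActivitySignal("android-042", risk_score=0.93))  # taken offline
monitor.review(ActivitySignal("android-007", risk_score=0.10))  # left alone
```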


There are limitations to this solution. Knowing how much inconsistency exists in our judicial and inspection systems across the globe, how do we think we can ever establish something reasonably consistent to monitor and control AI? That will take a unified global approach unlike anything we've ever seen (so yes, it is a stretch). The proposed tribunal AI system will still often catch events only after they've begun, not to mention there will be "escapes" where some events aren't identified at all. And what happens when the frequency of those escapes (or misses) increases? AI isn't going to revert to a previous iteration just because we say "bad dog," as if it has learned its lesson and won't try again.


I still wonder whether these machines can evolve to create their own rules and guidance, regardless of their programming. That is, can they eventually learn and develop something close to consciousness or a set of core beliefs? I think they will, and it's not certain those core beliefs will always land on the side of 'right.' That will make controls much more challenging, as it would then be the machines breaking safety and ethics laws, not their programmers. It is essential that as AI becomes more all-encompassing across societies, some check-and-balance exists, initially for the developers but eventually for the technology itself.


Far more thought will need to be put into ensuring safety and ethics are part of the advancement of AI worldwide. I'm sure work is already well underway.


Writing this blog has left me with one final question about the future sentience of machines and where evolution will continue taking us (let alone AI). If we are the apex predators of the world we live in, how much longer will that last, and who (or what) will eventually replace humans? I won't live to see the answer, though it is interesting to ponder.


At least we still have the bigger stick and the most powerful minds for now. Let's use the latter more than the former to continue evolving ourselves and our planet in the most productive manner possible.




