Perhaps completely in sync with the tremendous press surrounding the new HBO hit series Westworld, the Obama White House issued a press release on October 11, 2016, entitled “The Future of Artificial Intelligence,” along with a lengthy report, “The National Artificial Intelligence Research and Development Strategic Plan” (PDF, hereinafter the “Strategic Plan”). The White House also issued a third document on the same day entitled “Preparing for the Future of Artificial Intelligence” (PDF, hereinafter the “Action Plan”).
All three documents are chock-full of information regarding the Administration’s vision for artificial intelligence, especially when it comes to both cybersecurity and the economic impacts that AI may or may not have on the US workforce.
The overall comment I have on them is very similar to, and paraphrases, the comment Sir Anthony Hopkins’ character made to his chief scientist, Bernard, the other evening: “Do you know what that means? [Does] it mean we are done, that this is as good as we’re going to get?”
I am not sure I know what the Administration’s reports really mean. But it is very clear to the many scientists in the field of AI and Machine Learning that their technology is growing exponentially in both value and strength [almost daily!] and won’t “wait” around for the next Presidential report or committee to figure it out.
Nor will cyber attackers be waiting around for “committees” and “task forces” to sharpen their pencils; they are already using AI and Machine Learning to find zero-day vulnerabilities and exploit them to the disadvantage of many companies and organizations. The time to consider the implications of AI and Machine Learning for cybersecurity, and their application in helping government and private businesses, is now, not tomorrow.
Anyway, here are some comments and thoughts about what I think our next steps should be.
1. The “Heart” of the Action Plan Is in the Right Place
The Action Plan is a good document throughout. With respect to cybersecurity, the Action Plan notes:
“Automating this expert work, partially or entirely, may enable strong security across a much broader range of systems and applications at dramatically lower cost, and may increase the agility of cyber defenses. Using AI may help maintain the rapid response required to detect and react to the landscape of ever evolving cyber threats. There are many opportunities for artificial intelligence and specifically machine learning systems to help cope with the sheer complexity of cyberspace and support effective human decision making in response to cyberattacks.”
It further states:
“AI systems could perform predictive analytics to anticipate cyberattacks by generating dynamic threat models from available data sources that are voluminous, ever-changing, and often incomplete. These data include the topology and state of network nodes, links, equipment, architecture, protocols, and networks. AI may be the most effective approach to interpreting these data, proactively identifying vulnerabilities, and taking action to prevent or mitigate future attacks.” [emphasis supplied].
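The kind of anomaly-oriented analysis the report describes can be sketched in a few lines. The following is purely a hypothetical illustration of one common technique (unsupervised outlier detection with scikit-learn’s `IsolationForest`), not anything drawn from the White House documents; the flow features and all of the numbers are invented for the example.

```python
# Minimal sketch: flagging anomalous network flows with an unsupervised model,
# in the spirit of the "dynamic threat models" the Action Plan describes.
# All feature choices and values here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flow records: [bytes transferred, packet count, distinct ports]
normal_traffic = rng.normal(loc=[500, 40, 3], scale=[100, 10, 1], size=(500, 3))

# Two flows that look like scanning or exfiltration: huge byte counts,
# many packets, and far more distinct ports than any normal flow.
suspicious = np.array([[50_000.0, 900.0, 120.0],
                       [48_000.0, 850.0, 110.0]])

# Train only on the normal baseline; the model learns what "typical" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# predict() returns +1 for inliers and -1 for anomalies.
flags = model.predict(np.vstack([normal_traffic[:3], suspicious]))
print(flags)  # the last two flows should be flagged as -1
```

Real deployments would of course draw these features from live telemetry and retrain continuously, which is exactly the “voluminous, ever-changing, and often incomplete” data problem the report highlights.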
We have written in these pages before about the future of AI, Machine Learning and Cybersecurity, so I’d like to reiterate that AI and Machine Learning may be our best hope to deter if not prevent future cyber-attacks on both networks and critical infrastructure.
We say this because of two factors: (1) the exponential increase in network and cloud traffic, driven by the growth of our digital economy, and (2) the exponential growth of the largely insecure Internet of Things (IoT), which has recently been used to direct DDoS attacks against several well-known entities, causing real havoc.
Simply put, cyber-attackers know that the growth of our digital economy has created not only great wealth but also great opportunity for them to steal our critical IP or encrypt our network assets for ransom. The final recommendation of the Action Plan is straightforward, and we agree:
“Agencies’ plans and strategies should account for the influence of AI on cybersecurity and of cybersecurity on AI. Agencies involved in AI issues should engage their U.S. government and private-sector cybersecurity colleagues for input on how to ensure that AI systems and ecosystems are secure and resilient to intelligent adversaries. Agencies involved in cybersecurity issues should engage their U.S. government and private sector AI colleagues for innovative ways to apply AI for effective and efficient cybersecurity.”
2. The “Mission” of the Strategic Plan Is Somewhat Muddled and Dated
The mission statement of the Strategic Plan might have been okay when the original Yul Brynner “Westworld” movie came out in 1973. But today that mission is outdated, and it does not recognize the fact that Alphabet, Amazon, IBM, Microsoft and Facebook have already created a partnership to study the ethical guidelines that should apply to Artificial Intelligence and Machine Learning.
Clearly, the Big 5 are in the best position to know and understand the power of AI and Machine Learning, as well as the nature and substance of the big data and personal data that fuel these platforms. They also know and understand that given their larger-than-life status in the world, they would be the first ones to be criticized if, for instance, a self-driving car ran over a group of people.
Unlike the Action Plan, the mission statement of the Strategic Plan fails to recognize that these companies (along with several of the big government agencies) are really the keepers of the AI flame.
Further, we are happy to see the National Institute of Standards and Technology (NIST) involved in all three White House documents. Having created the NIST Cybersecurity Framework, the great scientists at NIST clearly have their eye on the power and future of AI and Machine Learning platforms. Indeed, and in the nick of time, NIST published Special Publication 800-160, entitled “Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems,” which urges manufacturers of IoT devices to build them to be “cyber secure by design” before rushing them to market. We feel that between NIST and the Big 5 mentioned above, considerate, ethical, and secure AI and Machine Learning platforms will be reasonably assured.
Finally, the Strategic Plan discusses workforce challenges relating to the advancement of AI and Machine Learning platforms and potential job displacements and/or reductions. Though we have no doubt of the general truth of this concern, when it comes to AI and cybersecurity, there should be very little worry that AI or Machine Learning platforms will displace or reduce our cybersecurity workforce.
Why? Today, we simply cannot hire enough skilled cybersecurity workers. And tomorrow the situation will be even worse. AI and Machine Learning are needed to supplement the workers we do have and should, given manpower and training needs, even create additional skilled workforce jobs as companies continue to migrate to both AI and Machine Learning, as well as to the cloud.
3. The Time for Forward-Looking Thinking Regarding AI and Cybersecurity Is Now
It is good to see that AI and Machine Learning innovations have jumped to the forefront of consciousness in the US. Here are some further suggestions that I would throw into the Action Plan for immediate consideration:
- Advancement of the Stemgarden Institute’s (http://stemgardeninstitute.org/) mission to educate this nation’s school children in core STEM skills (Science, Technology, Engineering and Math) as well as in cybersecurity.
- Advanced and free cybersecurity training for veterans that leads to a degree. Such programs would allow veterans to assist companies and government agencies in fulfilling their cybersecurity missions.
- Advancement of a public/private partnership to make available to small- and medium-sized businesses AI and Machine Learning platforms that secure the information they keep, store, and generate in their businesses.
- Fully subsidized college degrees in computer science and cybersecurity for all college-aged students to be paid back through enrolling in a public/private partnership with either state or local government.
I am sure there are other good ideas here besides mine. But the overall point is that we are all in this cybersecurity conundrum together, and we all need to work together. AI and Machine Learning platforms can and do co-exist with human defenders to better protect this country and keep it great and prosperous. Further government planning, committees, and consternation are fine, and those efforts should be expected. But the economic and national security of this country depend on the government and private industry working together today to protect our nation’s companies, businesses, IP, and critical infrastructure from attack. That means we must do more to secure the digital space.
About the Author: Paul Ferrillo is counsel in Weil’s Litigation Department, where he focuses on complex securities and business litigation, and internal investigations. He also is part of Weil’s Cybersecurity, Data Privacy & Information Management practice, where he focuses primarily on cybersecurity corporate governance issues, and assists clients with governance, disclosure, and regulatory matters relating to their cybersecurity postures and the regulatory requirements which govern them.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire, Inc.