Draft:Robust Intelligence
Submission declined on 10 January 2024 by Stuartyeates (talk).
Where to get help
How to improve a draft
You can also browse Wikipedia:Featured articles and Wikipedia:Good articles to find examples of Wikipedia's best writing on topics similar to your proposed article. Improving your odds of a speedy review To improve your odds of a faster review, tag your draft with relevant WikiProject tags using the button below. This will let reviewers know a new draft has been submitted in their area of interest. For instance, if you wrote about a female astronomer, you would want to add the Biography, Astronomy, and Women scientists tags. Editor resources
|
Submission declined on 3 January 2024 by Lewcm (talk). This submission appears to read more like an advertisement than an entry in an encyclopedia. Encyclopedia articles need to be written from a neutral point of view, and should refer to a range of independent, reliable, published sources, not just to materials produced by the creator of the subject being discussed. This is important so that the article can meet Wikipedia's verifiability policy and the notability of the subject can be established. If you still feel that this subject is worthy of inclusion in Wikipedia, please rewrite your submission to comply with these policies. Declined by Lewcm 5 months ago. |
- Comment: No obviously independent sources apparent. Stuartyeates (talk) 20:27, 10 January 2024 (UTC)
Company type | Private |
---|---|
Industry | Artificial Intelligence, AI Security, Cybersecurity |
Headquarters | San Francisco, California, United States |
Key people |
|
Number of employees | 51-100 |
Website | robustintelligence.com |
Robust Intelligence is an American artificial intelligence (AI) security company headquartered in San Francisco, California. The company develops a comprehensive platform for AI security and risk management to protect organizations from the security, ethical, and operational risks of artificial intelligence.
Robust Intelligence was founded in 2019 by Dr. Yaron Singer, a Gordon McKay Professor of Computer Science and Applied Mathematics at Harvard University, and Kojin Oshiba, a machine learning researcher and Harvard University alumnus.[1]
The company raised its Series B financing round in December 2021, and its investors include Sequoia Capital and Tiger Global.[2][3]
History[edit]
Robust Intelligence was co-founded by Yaron Singer, a tenured professor of Computer Science and Applied Mathematics at Harvard, and Forbes 30 Under 30 recipient Kojin Oshiba in 2019 after nearly a decade of combined robust machine learning research at the university and Google Research. Recognizing the state of artificial intelligence adoption and the chronic challenges of AI risk in industry, the pair developed the industry’s first AI firewall.[4][5]
Prior to founding Robust Intelligence and his ten-year tenure at Harvard, Yaron worked as a Postdoctoral Research Scientist at Google on the Algorithms and Optimization team. This role followed receiving his PhD in computer science from University of California, Berkeley in 2011.[6]
Co-founder Kojin Oshiba graduated with a Bachelor’s Degree in Computer Science and Statistics from Harvard University in 2019. During this time, he spent a year as a machine learning engineer at QuantCo and helped co-found the company’s Japan branch.
Robust Intelligence emerged from stealth mode in 2020 with the announcement of its $14 million fundraising round led by Sequoia Capital.[1] The company raised a $30 million Series B fundraising round in 2021 led by Tiger Global, with participation from Sequoia, Harpoon Venture Capital, Engineering Capital, and In-Q-Tel.[2][7]
Dr. Hyrum Anderson, Robust Intelligence’s Chief Technology Officer, joined the company in 2022 from Microsoft where he co-organized the AI Red Team and served as the chair of its governing board. An accomplished machine learning and cybersecurity expert, Anderson co-founded the Conference on Applied Machine Learning in Information Security (CAMLIS) and co-authored the book Not With a Bug, But With a Sticker: Attacks on Machine Learning Systems and What To Do About Them.[8]
Several notable figures and technologies in the field of artificial intelligence have emerged from research and development that began at Robust Intelligence. Most prominent are LangChain, an open source framework designed to simplify the creation of applications using LLMs, developed by former Robust Intelligence machine learning engineering leader Harrison Chase; and LlamaIndex, a data framework for connecting custom data sources to LLMs developed by Jerry Liu.[9][10]
Technology[edit]
The Robust Intelligence platform automates the end-to-end security and risk management of AI models through its two primary components: continuous validation and AI Firewall.[11]
Continuous validation regularly evaluates models and data throughout the AI lifecycle to identify security, operational, and ethical risks through hundreds of specialized tests and automated red teaming. Examples of these risk scenarios include susceptibility to adversarial attacks, evasion attacks, data poisoning attacks, data leakage, bias responses, factual awareness, and drift. The results of these tests inform automated risk assessment reports, which can be used to enforce internal standards and comply with AI regulations, guidelines, and frameworks.[12]
Robust Intelligence developed the industry’s first AI Firewall to protect applications in real time. These external, model-agnostic guardrails wrap around a model to block malicious inputs and validate model outputs, securing against prompt injection, PII exposure, toxic output, model hallucination, and other risks. AI Firewall also helps secure proprietary data provided to LLMs during fine-tuning or retrieval-augmented generation (RAG).[13]
To address concerns around AI supply chain risk, which refers to the presence of security vulnerabilities in third-party software, models, or data, Robust Intelligence released the AI Risk Database in March 2023 as a community resource. The database includes hundreds of thousands of models, and provides supply chain risk exposure that includes file vulnerabilities, risk scores, and vulnerability reports submitted by AI and cybersecurity researchers. In August 2023, Robust Intelligence partnered with MITRE to provide continuous support and advancement for the AI Risk Database as an open-source tool under the MITRE ATLAS™.The database was recognized by OWASP as a leading resource for AI model vulnerability tracking.[14][15]
Research[edit]
Individuals at Robust Intelligence have contributed to and co-authored several notable research papers on AI vulnerabilities and adversarial machine learning techniques, both while working at the company and in academia. Some examples include:
- “Tree of Attacks: Jailbreaking Black-Box LLMs Automatically”. Anay Mehrotra, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, et al. December 2023.[16]
- “Adversarial Attacks on Binary Image Recognition Systems”. Eric Balkanski, Harrison Chase, Kojin Oshiba, Alexander Rilee, Yaron Singer, Richard Wang. October 2020.[17]
- “Poisoning Web-Scale Training Datasets is Practical”. Carlinig, et al. February 2023.[18]
- “Real Attackers Don’t Compute Gradients: Bridging the Gap Between Adversarial ML and Practice”, Apruzesse, et al., Dec 2022.[19]
- “Machine Learning Model Attribution Challenge”, Merkhofer, et al. February 2023.[20]
- “Poisoning Attacks against Support Vector Machines”, Biggio, et al., March 2013, and 2023 ICML Test of Time Award.[21]
- Adversarial Machine Learning, Joseph, et al., Cambridge University Press, 2019.[22]
- Not With a Bug, But With a Sticker: Attacks on Machine Learning Systems and What To Do About Them, Siva Kumar and Anderson, John Wiley and Sons, 2023.[8]
References[edit]
- ^ a b Cai, Kenrick (2020-10-21). "This Harvard Professor And His Students Have Raised $14 Million To Make AI Too Smart To Be Fooled By Hackers". Forbes.
- ^ a b Lardinois, Frederic (2021-12-09). "Robust Intelligence raises $30M Series B to stress test AI models". TechCrunch.
- ^ "Robust Intelligence". Sequoia Capital. Retrieved 2024-01-02.
- ^ "Yaron Singer, CEO at Robust Intelligence & Professor of Computer Science at Harvard University - Interview Series - Unite.AI". www.unite.ai. 2022-03-09.
- ^ "Kojin Oshiba | Forbes 30 Under 30 2024: Enterprise Technology". Forbes. 2023.
- ^ "Yaron Singer". www.iq.harvard.edu. Retrieved 2024-01-02.
- ^ "In-Q-Tel Portfolio Archive". In-Q-Tel. Retrieved 2024-01-02.
- ^ a b Siva Kumar, Ram Shankar; Anderson, Hyrum (2023-03-31). Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them. John Wiley & Sons. ISBN 978-1-119-88399-9.
- ^ Palazzolo, Stephanie (2023-04-13). "Exclusive: AI startup LangChain taps Sequoia to lead funding round at a valuation of at least $200 million". Business Insider.
- ^ So, Kenn (2023-04-14). "LlamaIndex". www.generational.pub.
- ^ "Overview — Robust Intelligence". www.robustintelligence.com. Retrieved 2024-01-02.
- ^ "Continuous Validation — Robust Intelligence". www.robustintelligence.com. Retrieved 2024-01-02.
- ^ "AI Firewall — Robust Intelligence". www.robustintelligence.com. Retrieved 2024-01-02.
- ^ Anderson, Hyrum (2023-08-09). "Robust Intelligence partners with MITRE to Tackle AI Supply Chain Risks in Open-Source Models". www.robustintelligence.com.
- ^ Dunn, Sandy (2023-12-06). "LLM AI Security & Governance Checklist" (PDF). OWASP Foundation. OWASP LLM Apps Team.
- ^ Mehrotra, Anay; Zampetakis, Manolis; Kassianik, Paul; Nelson, Blaine; Anderson, Hyrum; Singer, Yaron; Karbasi, Amin (2023-12-04). "Tree of Attacks: Jailbreaking Black-Box LLMs Automatically". arXiv:2312.02119 [cs.LG].
- ^ Balkanski, Eric; Chase, Harrison; Oshiba, Kojin; Rilee, Alexander; Singer, Yaron; Wang, Richard (2020-10-22). "Adversarial Attacks on Binary Image Recognition Systems". arXiv:2010.11782 [cs.LG].
- ^ Carlini, Nicholas; Jagielski, Matthew; Choquette-Choo, Christopher A.; Paleka, Daniel; Pearce, Will; Anderson, Hyrum; Terzis, Andreas; Thomas, Kurt; Tramèr, Florian (2023-02-20). "Poisoning Web-Scale Training Datasets is Practical". arXiv:2302.10149 [cs.CR].
- ^ Apruzzese, Giovanni; Anderson, Hyrum S.; Dambra, Savino; Freeman, David; Pierazzi, Fabio; Roundy, Kevin (2022-12-29). ""Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice". arXiv:2212.14315 [cs.CR].
- ^ Merkhofer, Elizabeth; Chaudhari, Deepest; Anderson, Hyrum S.; Manville, Keith; Wong, Lily; Gante, João (2023-02-17). "Machine Learning Model Attribution Challenge". arXiv:2302.06716 [cs.LG].
- ^ Biggio, Battista; Nelson, Blaine; Laskov, Pavel (2013-03-25). "Poisoning Attacks against Support Vector Machines". arXiv:1206.6389 [cs.LG].
- ^ Joseph, Anthony D.; Nelson, Blaine; Rubinstein, Benjamin I. P.; Tygar, J. D. (2019-02-21). Adversarial Machine Learning. Cambridge University Press. ISBN 978-1-107-04346-6.
- in-depth (not just brief mentions about the subject or routine announcements)
- reliable
- secondary
- strictly independent of the subject
Make sure you add references that meet all four of these criteria before resubmitting. Learn about mistakes to avoid when addressing this issue. If no additional references exist, the subject is not suitable for Wikipedia.