AI: A Shield Against Cybercrime

August 29, 2024

By Saya Ahmed

As we have grown more and more dependent on digital methods of conducting business, and as information has become the new currency, cybercrime has become a pervasive threat. From data breaches to ransomware attacks, malicious actors are constantly evolving their tactics to exploit vulnerabilities. To combat this growing challenge, artificial intelligence (AI) is emerging as a powerful tool.

One of the most significant ways AI can help prevent cybercrime is through advanced threat detection and prevention. Traditional security systems often struggle to keep pace with the rapid evolution of cyber threats. AI-powered algorithms, however, can analyze vast amounts of data in real-time, identifying patterns and anomalies that may indicate a potential attack. By leveraging machine learning, AI can learn from past attacks, adapting to new threats and proactively blocking them before they can cause harm.
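
To make the idea concrete, here is a deliberately minimal sketch of baseline-driven anomaly detection. All data, features, and thresholds below are invented for illustration; real AI-powered systems learn far richer behavioral models, but the core intuition of flagging deviations from a learned baseline is the same:

```python
import statistics

# Hypothetical baseline: bytes transferred per network session on a quiet day.
baseline = [4800, 5100, 4950, 5300, 4700, 5050, 4900, 5200, 5000, 4850]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value, threshold=3.0):
    """Flag a session whose byte count deviates more than
    `threshold` standard deviations from the learned baseline."""
    return abs(value - mean) / stdev > threshold

# A typical session versus a suspiciously large, exfiltration-sized transfer.
print(is_anomalous(5100))    # within the baseline range
print(is_anomalous(900000))  # far outside the baseline range
```

Machine-learning approaches such as isolation forests or autoencoders generalize this idea to many features at once, learning what "normal" looks like across an entire network and scoring deviations continuously.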

In 2020, a large multinational bank (which shall remain anonymous) was under constant attack from a sophisticated cybercrime group. Despite its best efforts, the bank’s traditional security systems were overwhelmed by the volume and complexity of the attacks.

The bank then deployed an AI-powered security solution that used machine learning algorithms to analyze vast amounts of network data. The AI system quickly identified patterns in the attackers’ behavior that were not discernible to human analysts. It detected unusual traffic flows, anomalous login attempts, and suspicious data exfiltration attempts.

Based on the AI’s insights, the bank’s security team was able to isolate the infected systems and prevent the attackers from gaining access to sensitive customer data. The AI system’s early detection and rapid response averted a potentially catastrophic data breach that could have had severe financial and reputational consequences for the bank.

Another area where AI can make a significant impact is in identifying and mitigating phishing attacks. Phishing emails, which often contain malicious links or attachments, remain a common tactic used by cybercriminals. AI-powered systems can analyze the content, sender, and other characteristics of emails, flagging suspicious messages for further investigation. Additionally, AI can help detect and prevent social engineering attacks, where attackers manipulate individuals to divulge sensitive information.
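
As an illustration only, a toy rule-based scorer captures the kinds of signals such systems weigh. Every phrase, domain suffix, and point value here is hypothetical; production filters combine many more signals with trained models rather than hand-written rules:

```python
import re

# Hypothetical heuristic signals often cited in phishing triage.
SUSPICIOUS_PHRASES = ["verify your account", "urgent action required", "password expired"]
SUSPICIOUS_TLDS = (".zip", ".xyz", ".top")

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Return a crude risk score; higher means more suspicious."""
    score = 0
    text = (subject + " " + body).lower()
    # One point per urgency/credential-harvesting phrase found.
    score += sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    # Sender domains on abuse-prone TLDs add weight.
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain.endswith(SUSPICIOUS_TLDS):
        score += 2
    # Links pointing at a bare IP address instead of a hostname add weight.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2
    return score

print(phishing_score("it-support@secure-login.xyz",
                     "Urgent action required",
                     "Please verify your account at http://192.168.4.7/login"))
```

An AI-based system effectively learns thousands of such weights from labeled examples instead of having them coded by hand, which is what lets it adapt as attackers change tactics.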

Moreover, AI can play a crucial role in securing Internet of Things (IoT) devices. As the number of IoT devices connected to the internet continues to grow, so does the risk of cyberattacks targeting them. AI can be used to monitor IoT networks for unusual activity, identifying potential vulnerabilities and taking appropriate action to protect these devices.

However, AI is not a silver bullet. While it can significantly enhance cybersecurity efforts, it is not infallible. Human oversight and intervention remain essential to ensure that AI systems are effective and are not being exploited by malicious actors. Additionally, as AI technology continues to evolve, it is crucial to address ethical concerns and ensure that it is used responsibly.

In conclusion, AI offers a promising solution to the growing threat of cybercrime. By enabling advanced threat detection, mitigating phishing attacks, and securing IoT devices, AI can help organizations protect their valuable data and systems. As AI technology continues to mature, it is likely to play an even more critical role in safeguarding the digital world.

8 Tips for Businesses to Achieve Compliance and Avoid Fines Under the CPRA’s Data Minimization Requirements

August 16, 2024

By Daniel B. Garrie, Bradford Newman, Jonathan Tam

Organizations that prioritize data minimization and stay up to date with changes in privacy laws and regulations will be well-positioned to meet the privacy challenges of the future.

The majority of CPRA amendments took effect on Jan. 1, 2023, and introduced new data minimization obligations into the CCPA. As a result, the CCPA now requires a business’ collection, use, retention, and sharing of a California resident’s personal information to be “reasonably necessary and proportionate to achieve the purposes for which the personal information was collected or processed, or for another disclosed purpose that is compatible with the context in which the personal information was collected, and not further processed in a manner that is incompatible with those purposes.” (Cal. Civ. Code § 1798.100(c)). Businesses that fail to comply with the CCPA could face litigation that is damaging to the organization’s finances and reputation. Moreover, non-compliance can also lead to fines of up to $2,500 per violation or $7,500 for violations that are intentional or involve children, with each impacted consumer potentially giving rise to a separate violation.

Financial Institutions Guide to Cybersecurity and Operational Resilience

August 14, 2024

By Saya Ahmed

The financial industry operates in a complex and dynamic landscape, characterized by increasing digitalization, regulatory scrutiny, and the ever-present threat of cyberattacks. To safeguard customer assets, maintain market confidence, and ensure business continuity, financial institutions must adopt a holistic approach to cybersecurity and operational resilience.

Operational resilience, the ability of a financial institution to absorb, adapt to, and recover from adverse events, is paramount. Cybersecurity is an integral part of this, but operational resilience encompasses a broader range of risks, including market fluctuations, economic downturns, and regulatory changes. A holistic view allows institutions to identify interdependencies and develop robust strategies to mitigate risks across the organization.

Financial institutions handle sensitive customer data, which makes them prime targets for cybercriminals. A strong cybersecurity posture requires collaboration between IT, risk management, compliance, and other functions. This includes implementing advanced threat detection and prevention technologies, conducting regular security assessments, and providing comprehensive employee training.

Moreover, financial institutions must be prepared to respond effectively to cyber incidents. Incident response plans should be regularly tested and updated to reflect evolving threats. This requires close collaboration between IT, legal, and communications teams to contain the damage, protect customer information, and restore operations.

Risk management is fundamental to operational resilience. Financial institutions must conduct thorough risk assessments to identify potential threats and vulnerabilities, prioritize mitigation efforts, and allocate resources accordingly. A holistic approach involves considering not only cyber risks but also operational, market, credit, and liquidity risks. This comprehensive view enables institutions to develop well-rounded strategies that address multiple threats simultaneously.

Business continuity planning is essential for ensuring the continued delivery of critical services in the face of disruptions. Financial institutions must have robust plans in place to maintain essential operations, protect customer assets, and comply with regulatory requirements. Cybersecurity should be an integral part of business continuity planning to ensure a coordinated response to cyberattacks.

Law and Forensics’ expertise has provided invaluable support to financial institutions in building a strong cybersecurity and operational resilience framework. Legal experts have navigated complex regulatory landscapes, conducted investigations, and managed legal and reputational risks. Forensics specialists have investigated cyber incidents, recovered lost data, and provided evidence for legal proceedings.

In conclusion, a holistic approach to cybersecurity and operational resilience is imperative for financial institutions to thrive in today’s challenging environment. By recognizing the interconnectedness of various risks, conducting thorough risk assessments, developing comprehensive plans, and fostering a culture of resilience, financial institutions can build a strong foundation for long-term success.

Neom: A Techtopia?

Privacy and Cyber Concerns in Saudi Arabia’s Mega-City

July 29, 2024

By Saya Ahmed

Rising from the Saudi Arabian desert sands, Neom promises to be a futuristic metropolis, a beacon of technological innovation and sustainable living. Yet, beneath the gleaming vision lies a shadow: the potential for a society built on pervasive surveillance. This article delves into the cyber and privacy concerns surrounding the Neom project, raising questions about the balance between technological advancement and individual freedoms.

A City Built on Data:

Neom envisions being a data-driven city, with every aspect – from traffic flow to energy consumption – monitored and optimized through a network of sensors and connected devices. This “internet of things” approach offers undeniable benefits, but it also raises red flags. The vast amount of personal data collected – from facial recognition scans to health records – prompts difficult questions about how it will be stored, used, and potentially stolen.

A crucial question is who will have access to this vast trove of personal information. Neom’s governance structure remains opaque, with details about data ownership and usage rights unclear. Will data be centralized under government control, or will private companies have access? The lack of transparency fuels anxieties about potential breaches or unauthorized use. Neom’s reliance on advanced technologies, including facial recognition and AI-powered monitoring systems, creates cause for concern in the case of a major hack or breach.

Cybersecurity Threats:

A city as technologically advanced as Neom promises to be will be a prime target for cyberattacks. Hackers could disrupt critical infrastructure, steal sensitive data, or even launch cyber terrorism attacks. The interconnected nature of the city’s systems could create a cascading effect, causing widespread damage if compromised. Neom’s developers need to prioritize robust cybersecurity measures and ensure constant vigilance against cyber threats.

Neom needs to address these privacy and cyber concerns. The development organizers must prioritize transparency regarding data collection, usage, and security measures. A robust legal framework protecting personal data and ensuring accountability for misuse is also crucial. Ultimately, the success of Neom hinges on public trust, which depends on balancing technological advancement with cybersecurity and implementing strict data protection regulations. Neom’s ambitious vision necessitates a robust foundation in cyber and privacy protection. To safeguard its digital infrastructure and the sensitive data of its residents and businesses, integrating the expertise of Law & Forensics is paramount. By leveraging these disciplines, Neom can establish a comprehensive legal framework, develop proactive cybersecurity measures, and ensure swift and effective responses to potential breaches. This strategic approach will not only protect Neom’s reputation but also foster a secure environment essential for attracting global talent and investment.

Neom has the potential to be a marvel of innovation, but it faces the challenge of balancing progress with accountability. The project’s success depends on addressing privacy and cyber security concerns head-on. Only by ensuring transparency, robust data protection, and public trust can Neom become a true “techtopia”.

Cybersecurity in Multidistrict Litigation

ALM

June 17, 2024

By Daniel B. Garrie, Michael Mann and Leo M. Gordon

MDLs can pose unique challenges for cybersecurity litigators, as MDLs often involve large volumes of data consolidated from disparate sources. This article examines some key cybersecurity considerations for attorneys who are part of an MDL.

 

Cybersecurity is becoming more important for the legal industry as more and more lawsuits involve large volumes of sensitive data. This is particularly true for multidistrict litigations (MDLs), which have become increasingly common in recent years. As of Dec. 31, 2023, an estimated 457,000 civil actions were pending on MDL dockets throughout the United States, representing approximately 67% of all pending civil litigation (computed from statistical reports of the Judicial Panel on Multidistrict Litigation and the U.S. District Courts—Civil Statistics Tables for the Federal Judiciary).

What Makes MDLs Unique

MDLs are litigations in which multiple lawsuits filed in various jurisdictions are consolidated into a single case. MDLs are meant to streamline the litigation process for multiple cases arising from the same or similar events. MDLs commonly involve cases in which a single large-scale entity’s actions affect many people located in various parts of the country. This is commonly seen in cases involving defective products, unsafe drugs, intellectual property infringement, oil spills, employment practices and securities fraud.

The decision to consolidate or coordinate pre-trial proceedings in disparate cases in an MDL is made by a national panel known as the Judicial Panel on Multidistrict Litigation. Once an MDL is created, steering committees are established for the parties (plaintiffs or defendants) to the litigation, and lead counsel is chosen for each group of parties. Cases are referred for consolidation or coordination in an MDL for the purposes of pre-trial proceedings, such as discovery and case management (which often includes deciding Daubert, nondispositive, and dispositive motions), and possibly settlement discussions.

JAMS Releases New Rules for AI Disputes

ALM

April 23, 2024

By Rhys Dipshan

On Tuesday, alternative dispute resolution service provider JAMS announced new rules around disputes involving artificial intelligence. These rules cover a range of issues, including the protection of proprietary training data and AI models, as well as the knowledge needed to arbitrate disputes concerning AI software.

In a news release, JAMS noted that the rules “refine and clarify procedures for cases involving AI systems” and help “equip legal professionals and parties engaged in dispute resolution with clear guidelines and procedures that address the unique challenges presented by AI, such as questions of liability, algorithmic transparency, and ethical considerations.”

In addition, the rules set forth a definition of AI, specifically defining the term as “a machine-based system capable of completing tasks that would otherwise require cognition.” Throughout the legal industry, defining AI has been tricky, with many judges differing on how they describe the technology and what it entails.

In an email, JAMS President Kimberly Taylor told Legaltech News that the impetus behind the new rules was an anticipation of “new litigation arising from the technology. We believe that resolving AI-driven disputes via arbitration will require tailored rules that account for the complexity of the technology and that introduce other novel factual and evidentiary issues.”

Evolving AI and arbitration legal practices

July 24, 2024

By Daniel B. Garrie and Ryan Abbott

The JAMS Artificial Intelligence Dispute Resolution Rules (AI Rules) are a crucial update of arbitration processes for modern technology. These rules streamline the resolution process and reduce the time and cost associated with resolving disputes. The impact of artificial intelligence (AI) on society is evident, yet the extent of its influence remains unclear. What is clear, however, is that AI-driven disputes will only increase in the coming years. These disputes will involve a wide range of conflicts, from data privacy breaches, intellectual property infringement, unauthorized synthetic content, and trade secret misappropriation involving large language models (LLMs), to breach of contract. As it stands, alternative dispute resolution (ADR) today is largely ill-equipped to handle these types of disputes; in recognition of this, I, along with Dr. Ryan Abbott, worked closely with JAMS to develop the JAMS Artificial Intelligence Dispute Resolution Rules (AI Rules). See “JAMS Rules Governing Disputes Involving Artificial Intelligence Systems, effective April 15, 2024,” www.jamsadr.com/rules-clauses/artificial-intelligence-disputes.

At a high level, the AI Rules establish that, unless otherwise agreed by the parties, JAMS will propose arbitrator candidates with AI knowledge (Rule 15(b)), mandate a protective order by default to secure sensitive information and stringent data handling during disputes (Rule 16.1(a)), and limit expert testimony to written reports and directed oral responses to maintain focus and confidentiality throughout the arbitration process (Rule 16.1(b)). The net effect of the AI Rules is that they streamline the resolution process and reduce the time and cost associated with resolving AI disputes.

Arbitrator Selection

Rule 15(b) of the AI Rules provides that “JAMS shall propose, subject to availability, only panelists approved by JAMS for evaluating disputes involving technical subject matter with appropriate background and experience. JAMS shall also provide each party with a brief description of the background and experience of each Arbitrator candidate.” Id. at 15(b). This prerequisite spares the parties from expending substantial resources educating the arbitrator on the technical aspects underpinning the dispute, and it helps ensure that the arbitrator will be capable of adjudicating the matter appropriately. Consider, for example, a dispute over a funding agreement in which the main issue is the alleged misrepresentation of a company’s machine learning algorithms’ capabilities and performance metrics, which were crucial in securing the funding. The arbitrator should possess knowledge of AI to adequately understand the technical nuances of the case.

From Niche to Universal: The Broadened Application of NIST Cybersecurity Framework 2.0

ALM

July 2, 2024

By Daniel B. Garrie, Esq., Yoav Griver

The National Institute of Standards and Technology (NIST) Cybersecurity Framework was created to provide a structured approach to managing cybersecurity risks and improving overall security measures. It serves as a guide for organizations to identify, protect, detect, respond to, and recover from cyberthreats effectively. The NIST recently unveiled the much-anticipated version 2.0 of its landmark Cybersecurity Framework. This update, as detailed in the NIST’s announcement, is designed to be more inclusive, extending its applicability across all sectors and industries, thereby reinforcing the importance of cybersecurity in the modern digital age. The expansion and refinement of the framework underscore the growing recognition of cybersecurity as a critical component of organizational integrity, regardless of the industry. This article explores the implications of the NIST Cybersecurity Framework 2.0 for organizations and elucidates why third-party cyber audits are instrumental in ensuring compliance and enhancing cybersecurity posture.

Understanding NIST Cybersecurity Framework 2.0

The NIST Cybersecurity Framework 2.0 is designed to be universally applicable, extending its reach beyond critical infrastructure sectors to encompass all industries. This inclusive approach is a response to the universal challenge of cybersecurity threats, which do not discriminate by sector. The framework’s expanded applicability means that organizations across various sectors, including those not traditionally considered part of critical infrastructure, such as education and retail, are now encouraged to adopt its guidelines to bolster their cybersecurity defenses. Moreover, the framework has been updated to offer enhanced flexibility, allowing organizations to tailor their cybersecurity strategies more effectively to their specific needs, risks, and contexts. This adaptability is crucial in a landscape where cyberthreats are constantly evolving, and one-size-fits-all solutions are often inadequate.

To read the full article, go to ALM

Lessons for CISOs from the SolarWinds Breach and SEC Enforcement

ALM

May 2024

By Daniel B. Garrie, Esq., David Cass, and Jennifer Deutsch

In an era where digital threats loom large, the responsibilities of Chief Information Security Officers (CISOs) have expanded beyond traditional IT security to encompass a broader governance, risk management, and compliance role. The infamous SolarWinds Corp. attack, which compromised numerous public and private organizations globally, illustrates the complex cybersecurity landscape CISOs navigate. The subsequent legal and regulatory responses, including a complaint by the U.S. Securities and Exchange Commission (SEC), underscore the critical role of CISOs in not only safeguarding digital assets but also ensuring compliance with evolving cybersecurity disclosure requirements. This article examines the SolarWinds incident and the SEC’s actions to derive essential governance lessons for CISOs.  

In 2020, SolarWinds disclosed that it had been subject to a cyberattack, commonly referred to as “SUNBURST.” SUNBURST is believed to have been conducted by Russian state-sponsored hackers and affected over 18,000 customers, including government agencies and Fortune 500 companies.[i] Attackers compromised the infrastructure of SolarWinds, a leading provider of IT management software, to distribute malicious updates to the company’s Orion software.

In response to the breach, on October 30, 2023, the SEC sued SolarWinds and its CISO, Timothy G. Brown, in connection with the SEC Division of Enforcement’s investigation of the cyberattack.[ii] The SEC alleges that from October 2018, when SolarWinds went public, to January 2021, SolarWinds and Brown “defrauded SolarWinds’ investors” by overstating SolarWinds’ cybersecurity practices and understating or failing to disclose known risks.[iii] In its filings with the SEC, SolarWinds allegedly misled investors by disclosing only generic and hypothetical risks at a time when SolarWinds and Brown knew of specific deficiencies in SolarWinds’ cybersecurity practices as well as the increasingly elevated risks the company faced.[iv] Recently, the SEC filed an amended complaint that lays out the same claims it made against the company last fall, only in greater detail.[v]

To read the full article, go to ALM

Using AI to Predict Outcomes in Class Action Litigation

ALM

May 3, 2024

By Daniel B. Garrie, Esq. and Michael Mann

In evaluating whether to take on a potential class action case, attorneys have to consider many things. How many other people have been harmed in the same way as the prospective plaintiff? How likely is it that their claims will succeed? How likely is the court to certify the class? Have other lawsuits asserting the same claims already been filed? It can be a challenging analysis to undertake even before getting involved in the actual case. The development of legal artificial intelligence (AI) tools in recent years is starting to have an impact on this type of analysis for class actions. This article explores how using AI can impact class action lawsuits and change the legal landscape.

But first, what does AI mean in this context? The term AI within the legal technology field refers, at a basic level, to software designed to perform fairly routine language-related tasks, such as reading court rulings, very quickly over a large data set. It generally involves machine learning, which enables the software to improve its performance according to human direction and feedback. Legal AI tools use natural language processing (NLP) software that can understand written or spoken commands, enabling lawyers to easily give direction and feedback to the AI without needing to use computer programming.

One use of AI in the legal field is to predict court rulings on potential litigation issues. AI tools can analyze extremely large volumes of court rulings to determine the decisions reached by the judge in relation to the facts of each case. This analysis can be used to identify trends in fact patterns corresponding with favorable or unfavorable rulings as well as other types of trends such as those pertaining to jurisdiction, specific judges, specific types of plaintiffs, specific defendants, etc.
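
The trend analysis described above can be sketched with a deliberately tiny example: aggregating class-certification outcomes by jurisdiction. The courts, counts, and outcomes below are invented purely for illustration; a real AI tool would extract such features from thousands of rulings using NLP rather than a hand-built list:

```python
from collections import defaultdict

# Toy dataset of (jurisdiction, outcome) pairs standing in for
# features an AI tool might extract from a large corpus of rulings.
rulings = [
    ("N.D. Cal.", "certified"), ("N.D. Cal.", "certified"),
    ("N.D. Cal.", "denied"),
    ("S.D.N.Y.", "denied"), ("S.D.N.Y.", "denied"), ("S.D.N.Y.", "certified"),
]

def certification_rate(data):
    """Aggregate class-certification outcomes per jurisdiction."""
    totals, wins = defaultdict(int), defaultdict(int)
    for court, outcome in data:
        totals[court] += 1
        if outcome == "certified":
            wins[court] += 1
    return {court: wins[court] / totals[court] for court in totals}

print(certification_rate(rulings))
```

The same aggregation pattern extends to judges, plaintiff types, or fact-pattern features, which is how such tools surface the trends attorneys use to assess a prospective case.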

To read the full article, go to ALM