Volt Typhoon Takedown: FBI Successfully Combats Chinese Cyberattacks on Critical Infrastructure, But Cyber Warfare Is Far From Over

By CHHS Extern Dominique Mendez

Americans rely on critical infrastructure entities such as telecommunications, transportation, energy, water, and wastewater systems. However, these sectors face the constant threat of stealthy cyberattacks capable of causing power outages and communications failures. In May 2023, the Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), and Federal Bureau of Investigation (FBI) disclosed that “Volt Typhoon,” a People’s Republic of China (PRC) state-sponsored cyber actor, had infiltrated the telecommunications, transportation, energy, water, and wastewater sectors using “KV Botnet” malware and remained undetected in some networks for at least five years. Volt Typhoon targeted Cisco and NETGEAR routers that are no longer supported by their manufacturers’ security patches or software updates to fix vulnerabilities. The hackers gained access to Operational Technology (OT) and Information Technology (IT) networks to exfiltrate credentials, ensure continued access to accounts, and maintain persistence on the network.

One technique Volt Typhoon relied on to infiltrate U.S. critical infrastructure is known as “Living Off The Land” (LOTL). LOTL uses a network’s own built-in administration tools to carry out intrusions, hiding malicious activity from detection by blending Volt Typhoon’s commands with normal Windows system and network activity. In one instance, the KV Botnet malware encrypted traffic between infected routers to obscure the hackers’ true location and make it appear as if they were operating directly from an infected router in the U.S. Furthermore, the malware downloaded a virtual private network (VPN) module to some infected routers, creating a direct communication channel between the hackers and the victim’s network. The VPN served as an obfuscation technique, enabling the hackers to use any infected router as an intermediate hop. This facilitated the hackers’ operational goal of gathering information about a target entity’s network architecture and operational protocols. For example, the hackers obtained initial access to an entity in the Water and Wastewater Systems sector by connecting to the network via a VPN with administrator credentials, then performed discovery, collection, and exfiltration of data. In this case, Volt Typhoon had access to water treatment plants, water wells, electrical substations, OT systems, and network security systems. Once Volt Typhoon gains access to OT systems, the hackers can disrupt energy and water controls, access camera surveillance systems, cause failures in telecommunications and transportation systems, and manipulate heating, ventilation, and air conditioning (HVAC) systems in server rooms. With full access to critical infrastructure OT networks, Volt Typhoon had the opportunity to disrupt critical infrastructure functions in the event of geopolitical tensions and/or military conflict in the Asia-Pacific region.
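LOTL activity is hard to spot precisely because the tools involved are legitimate. As a purely illustrative sketch (the tool list, log format, and field names below are hypothetical assumptions, not actual Volt Typhoon indicators), a defender might scan process-creation log records for built-in Windows administration utilities launched in unexpected contexts:

```python
# Hypothetical LOTL-detection sketch: flag process-creation events where a
# built-in admin tool runs outside a designated administrator workstation.
# The tool list and event fields are illustrative assumptions only.

# Built-in binaries commonly abused in LOTL-style intrusions (illustrative)
LOTL_TOOLS = {"wmic.exe", "netsh.exe", "ntdsutil.exe", "powershell.exe"}

def flag_lotl_events(events):
    """Return events in which a LOTL-associated tool ran on a non-admin host."""
    flagged = []
    for event in events:
        if (event["process"].lower() in LOTL_TOOLS
                and not event["is_admin_workstation"]):
            flagged.append(event)
    return flagged

events = [
    {"process": "notepad.exe", "user": "alice", "is_admin_workstation": False},
    {"process": "ntdsutil.exe", "user": "svc-web", "is_admin_workstation": False},
]
print([e["process"] for e in flag_lotl_events(events)])  # ['ntdsutil.exe']
```

In practice this kind of rule produces false positives (administrators legitimately use these tools), which is why the agencies emphasize behavioral baselining rather than simple blocklists.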

Fortunately, the FBI removed the KV Botnet malware from infected routers nationwide. The FBI conducted a criminal investigation into Volt Typhoon’s violation of the Computer Fraud and Abuse Act, 18 U.S.C. § 1030(a)(5), under which the hackers knowingly accessed a protected computer without authorization and caused the transmission of a program, information, code, or command, intentionally damaging the protected computer. On December 20, 2023, a U.S. magistrate judge granted a search warrant permitting the FBI to remotely access and search U.S.-based compromised routers and seize the KV Botnet malware from each router. The FBI used the botnet’s own communication protocols and simultaneously issued commands to each infected router, interfering with the hackers’ controls, halting the botnet’s VPN process, and effectively deleting the malware on infected devices. The U.S. government has not made any arrests or issued indictments. However, FBI Director Christopher Wray emphasized concerns regarding the PRC’s hacking capabilities at the Munich Cyber Security Conference last month:

“the Chinese government [] has continued to attack the economic security, national security, and sovereignty of rule-of-law nations worldwide. The cyber threat posed by the Chinese government is massive [and] is made even more harmful by the way the Chinese government combines cyber means with traditional espionage and economic espionage, foreign malign influence, election interference, and transnational repression. In other words, [China] is throwing its whole government at undermining the security of the rule-of-law world. It’s hitting us indiscriminately. Today, China’s increasing buildout of offensive weapons within our critical infrastructure, is poised to attack whenever Beijing decides the time is right.”

Although the FBI successfully discontinued Volt Typhoon’s operations, critical infrastructure sectors must continue taking steps to mitigate impending cyberattacks. CISA and the NSA released detailed recommendations to detect hackers on IT networks. CISA suggests critical infrastructure sectors should implement the following baseline protections:

  • Hardening Volt Typhoon’s Attack Surfaces:
    • Apply patches for internet-facing systems and prioritize patching critical vulnerabilities in appliances known to be frequently exploited by Volt Typhoon (e.g., routers that have reached end-of-life status).
    • Use third-party assessments to validate current system network security compliance.
    • Limit internet exposure of systems where it is not necessary.
    • Maintain and regularly update inventory of all organizational IT assets.
  • Reinforcing Security Measures for Credentials and Accounts:
    • Implement phishing-resistant multifactor authentication (MFA) and roll NTLM hashes of accounts that support token-based authentication.
    • Separate user and privileged accounts, consider using privileged access management (PAM) solution with role-based access control (RBAC).
    • Regularly audit all user, admin, and service accounts.
    • Use CISA’s SCuBAGear tool to discover cloud misconfigurations.
  • Securing Remote Access Services:
    • Disable server message block (SMB) protocol version 1 and upgrade to version 3 (SMBv3).
  • Implementing Routine Preventative Measures:
    • Ensure logging is turned on for application, access, and security logs.
    • Store logs in a central system which can only be accessed or modified by authorized, authenticated users.
    • Establish and continuously maintain a baseline of installed tools, software, account behavior, and network traffic.
    • Document a list of threats and cyber actors’ primary tactics, techniques, and procedures (TTP) relevant to your sector.
    • Implement periodic security training for all employees and contractors.
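The baselining recommendation above can be made concrete with a small script. The following is a hypothetical sketch (the inventory contents, including the “frp” tunneling-tool name, are invented for illustration): it diffs a current software inventory against a saved baseline so that unexpected additions, which may indicate attacker tooling, stand out for review.

```python
# Hypothetical baseline-comparison sketch: report software that appeared or
# disappeared relative to a previously recorded baseline inventory.

def diff_against_baseline(baseline, current):
    """Compare the current installed-software set against a saved baseline."""
    return {
        "added": sorted(set(current) - set(baseline)),    # new, unreviewed items
        "removed": sorted(set(baseline) - set(current)),  # expected items now missing
    }

baseline = {"openssh", "nginx", "python3"}
current = {"openssh", "nginx", "python3", "frp"}  # a tunneling tool appeared
print(diff_against_baseline(baseline, current))
# {'added': ['frp'], 'removed': []}
```

The same diff pattern extends to accounts, scheduled tasks, and network-traffic summaries; the point of the baseline is that anomalies are visible even when the attacker uses only legitimate tools.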

U.S. critical infrastructure remains vulnerable to cyberattacks. CISA’s recommendations can reduce prevailing cyber threats to infrastructure sectors and help disrupt Volt Typhoon and other malicious cyber actors from accessing critical infrastructure technologies. Cyberwarfare is just beginning. The U.S. and its allies must prepare for a rise in cyberattacks originating from authoritarian states such as the PRC, Iran, Russia, and North Korea.

It’s Time for the FAA to Start Doing its Job (Again)

By CHHS Research Assistant Alana Coopersmith

Note: The views expressed do not necessarily represent those of CHHS or the University of Maryland, Baltimore. 

Boeing, the commercial airliner and defense giant, has had a disastrous start to 2024. In only three months, there have been four reported commercial plane malfunctions. It would be unsurprising to find that these malfunctions stem from Boeing’s regulatory capture of the Federal Aviation Administration (FAA). After 20 years, it is time for the FAA to recapture the industry.

While the world may be shocked by Boeing’s 2024 track record thus far, many are likely not surprised. In 2018, a 737 MAX crashed 13 minutes after takeoff, killing all 189 people on board. Six months later, another MAX plunged six minutes after takeoff, killing 157 people. This second crash prompted the MAX to be grounded worldwide and led to intense investigations into the plane’s development. The investigations revealed, beyond the technical details of the crashes, how the FAA handed the regulatory reins to Boeing, leading to the mass production of a plane with a deadly flaw.

The 737 MAX was conceived in 2011 as a response to Airbus’ A320neo, a new, fuel-efficient aircraft. To avoid losing customers to its rival, Boeing shelved plans to develop an entirely new aircraft – which would have been both time-consuming and costly – and instead chose to upgrade the previous generation of the 737 to match the A320neo’s capabilities. Boeing’s pressure to rush the MAX into production “resulted in extensive efforts to cut costs, maintain the 737 MAX program schedule, and avoid slowing the 737 MAX production line.”

Boeing’s awareness of the MAX design’s safety concerns is well documented, as is its disregard of those warnings in the pursuit of performance targets. For example, in 2013, Boeing engineers suggested installing a computer-based airspeed indicator to augment the MAX’s single external speed sensor. According to a 2020 report from the House Transportation and Infrastructure Committee, this request “was rejected by Boeing management due to cost concerns.” In 2018, Ed Pierson, a senior Boeing plant supervisor at the MAX production facility, emailed Scott Campbell, the 737 General Manager, to request a meeting about “safety concerns.” At the meeting, Mr. Pierson, a former military officer, told Mr. Campbell that the military would never tolerate such a cavalier attitude toward the safety issues present in the MAX’s development, to which Mr. Campbell allegedly responded: “The military is not a profit-making organization.”

Additionally, in 2013, the National Transportation Safety Board held a two-day hearing to determine how Boeing and the FAA could have missed the potential for catastrophic failure of the 787 Dreamliner’s lithium-ion batteries when they were certified in 2007. In response to the fire that “sparked” concern, Al Jazeera asked 15 randomly selected Boeing employees whether they would fly on the 787 Dreamliners they were building; 10 of the 15 said they would not, citing safety concerns.

Despite these safety concerns, the FAA certified the 737 MAX, leading to the deaths of 346 people, and had certified the 787 Dreamliner’s lithium-ion batteries in 2007 despite their potential for catastrophic failure – most likely because the FAA was captured by Boeing. Since those crashes, the FAA has not effectively recaptured the industry.

Regulatory capture is the process by which an agency becomes dominated by the industry it is charged with regulating. In the cases of Boeing’s MAXs and Dreamliners, the FAA did not uphold its responsibility to ensure the safety of the aircraft – largely because it removed itself from the regulatory process altogether.

Until 2004, the FAA regulated the production of Boeing aircraft through a web of Designated Engineering Representatives (DERs) who, although paid by Boeing as Boeing employees, were selected by and reported to the FAA; the FAA retained final authority and possessed a clear view of the aircraft certification process. In 2004, a committee made up largely of industry backers pushed through a rule under which the former DERs, now called Authorized Representatives (ARs), no longer reported to the FAA but instead reported directly to Boeing managers. Direct communication with the FAA was completely severed, and Boeing was vested with vast power over the certification of its own aircraft. Boeing’s appointment power over ARs gives it the ability to align the certification process with its own interests, to the extreme detriment of the public interest.

Boeing’s flawed self-regulation has now been brought to light again, as the MAX and Dreamliner aircraft have made mainstream media appearances with missing bolts and plunging planes. As MAX 10 production has slowed, it would be in the best interest of both the FAA and Boeing to revert to the DER system for certification. Doing so would return meaningful public oversight to the aircraft certification process, improve public trust in the FAA and in Boeing, and promote accountability. The other options are flawed: further delegating oversight power to manufacturers would only exacerbate the issue, while conducting all oversight and certification work itself would cost the FAA at least $1.8 billion and require another 10,000 engineers. Although the FY2025 budget includes a request for $1.8 billion for the Office of Aviation Safety to support production oversight and continued operational safety, it will likely still be in the FAA’s best interest to revert to the DER system, as experience with and expertise in the latest technology is largely housed inside the industry, rendering industry employees the most qualified to certify the aircraft.

Chevron Deference and Artificial Intelligence

By CHHS Extern Dallin Richardson

Those who are paying attention to Supreme Court current events know that following the oral arguments in Loper Bright Enterprises v. Raimondo and its companion case, Relentless, Inc. v. Department of Commerce, the doctrine of Chevron deference is likely not long for this world. The Chevron doctrine requires courts to yield to an executive agency’s reasonable interpretation of ambiguous statutory language, provided Congress has not weighed in on the precise issue in question. This doctrine is at the core of modern administrative law and allows agency experts, specialized in their fields, to aid courts by virtue of their advanced technical acumen. An end to that “doctrine of humility” would place interpretive power exclusively back in the hands of courts, although – as Justice Kagan has said – “we know in our heart of hearts that . . . agencies know things that courts do not.” And as artificial intelligence continues to radically change the fabric of virtually every sector, the expertise agencies bring to the table will be of ever-increasing importance as an aid in the interpretation of AI legal questions. Such agencies certainly have an advantage over the members of Congress: “Congress knows there are going to be gaps [in any future artificial intelligence legislation] because Congress can hardly see a week into the future with respect to [AI]”.

Looking toward the future and the unpredictable predicaments which AI is sure to cast upon the country, the core question may rather be posed this way, with some further help from Justice Kagan: Where should the balance of official interpretative weight lie? “. . . what Congress is thinking is, ‘Do we want courts to fill that gap? Or do we want an agency to fill that gap?’ When the normal techniques of legal interpretation have run out, on the matter of artificial intelligence, what does Congress want?”

It is a valid concern that, despite the woes of vacillating policy interpretation that some fear in handing the final word back to the courts, letting an executive agency dictate the final interpretation of ambiguous statutory language may have a similar vacillating effect, with interpretations possibly changing every four to eight years. And of course, interpretive posture and access to expertise are sure to vary across the U.S. Circuit Courts. Further, although Justice Thomas now seems ready to sweep away the Chevron doctrine, in 2005 he wrote the majority opinion in National Cable v. Brand X, holding that “agency inconsistency” is no reason to eliminate the Chevron framework.

Some would argue that it doesn’t matter what Congress wants; it matters what Article III of the Constitution says, which is that courts hold the judicial power and thus they alone handle interpretation of law. But proponents of Chevron deference would argue that the doctrine does not undermine judicial authority; rather, it guides a court in resolving legal disputes by deferring to agencies that have both the needed expertise and democratic accountability to the public, making them much more reasonable decision makers in matters of technicality and scientific choice than the hundreds of unelected, relatively inexperienced judges who would otherwise bear the burden.

It is important to remember that the true authority in these matters is Congress; Chevron deference arises only when congressional statutory intent is ambiguous. The guiding principle should always be congressional intent. In oral arguments for Relentless, Inc. v. Department of Commerce, Justice Kagan opined: “Congress knows that this court and lower courts are not competent with respect to deciding all the questions about AI that are going to come up in the future. And what Congress wants, we presume, is for people who actually know about AI to decide those questions. And also those same people who know about AI are people who . . . are accountable to the political process.”

Regardless of what Congress may or may not want for the future of AI and other legislation, the Court may be poised to speak for hundreds of lower courts across the country and make that decision for them. But how did it come to this? It bears repeating that Congress is the primary statutory authority and the first word on statutory interpretation. Congress could speak for itself and mandate specific interpretive construction. Congress could legislate interpretive authority either to agencies or to the courts. Though Congress cannot be expected to foresee the problems to which AI will give rise, is it unreasonable to expect Congress to tell us who gets to decide in a tiebreaker? Is it too much to ask Congress to indicate when they want a court to have the final word, and when, instead, the relevant agency should provide needed clarity?

The Supreme Court’s Vacation of the Injunction and What it Means for Border Security

By CHHS Extern Andrew Conn

There are 29 entry points between the U.S. and Mexico along the 1,200-mile-long border. Recently, the number of illegal border crossings has grown exponentially. In 2020, Customs and Border Protection (CBP) agents encountered 458,000 crossings. In 2021, this number rose to 1.7 million, and in 2022 it rose again to 2.4 million.

On October 24, 2023, the state of Texas filed suit against the Department of Homeland Security for “unlawfully” removing concertina wire (c-wire) which had been placed along the Texas-Mexico border by agents of the Texas Military Department (TMD). Texas claims in its brief that the placement of c-wire on private and government property along the border was a joint effort between federal CBP agents and TMD agents as part of Texas’ 2021 project “Operation Lone Star.” Texas claims that CBP agents were “grateful” for the assistance by TMD officials and the parties worked cooperatively across the state. However, Texas goes on to state that this relationship was upended when on “more than 20 occasions” between September 20 and October 10, 2023, CBP agents were recorded removing the c-wire fencing along the border with bolt cutters. CBP and DHS removed the fencing since it impeded their access to the border. CBP agents later began removing the fencing by utilizing forklifts.

During the removal process, TMD agents observed hundreds of migrants pour across the border from the Mexican side. TMD agents claimed these migrants were not in distress or in need of medical attention. Because of the subsequent flood of migrants into the state, Texas sought a preliminary injunction against the removal of c-wire and fencing by CBP agents in district court. The United States District Court for the Western District of Texas granted a temporary restraining order (TRO) against CBP agents to prevent them from further removing fencing in the vicinity of Eagle Pass, TX, with an exception for “provid[ing] or obtain[ing] emergency medical aid.” The TRO was later extended by the district court; however, at trial, the court found it was unable to convert the TRO into a preliminary injunction since CBP’s sovereign immunity had not been waived under 5 U.S.C. § 702.

Texas subsequently appealed this decision to the Fifth Circuit in order to seek an emergency injunction. The Fifth Circuit granted the injunction, claiming that the district court erred in its ruling with respect to the grant of sovereign immunity. The defense moved for an expedited argument in circuit court which was granted. The oral arguments were to be heard on February 7th, 2024. In the interim, DHS sought expedited relief from the Supreme Court to vacate the injunction.
In its application to the Supreme Court, DHS argued that “under the Supremacy Clause, state law cannot be applied to restrain those federal agents from carrying out their federally authorized activities.” DHS stated that if the circuit court’s ruling was sustained, states would be able to override federal agencies and decisions on how to execute their operations.

In response, Texas claimed the CBP already had access to the other side of the border via access points along the fencing and since the Fifth Circuit had already expedited the case, the Supreme Court should hold off on any ruling against the injunction. Additionally, Texas cited a three-part test laid out in Merrill v. Milligan to determine whether an injunction should be vacated by a higher court. Texas argued that an injunction should be “entitled to great deference like a decision to stay a district court’s ruling.” In doing so, the test in Merrill states that an injunction can only be vacated when the applicant demonstrates (1) a reasonable probability that the court would eventually grant review, (2) a fair prospect that the Court would reverse, and (3) the applicant would likely suffer irreparable harm absent the stay.
Ultimately, on January 22, 2024, the Supreme Court ruled in favor of the federal government in a surprising 5-4 split. The limited ruling vacated the Fifth Circuit’s injunction ahead of oral arguments in that court. Justices Jackson, Kagan, Sotomayor, and Barrett and Chief Justice Roberts voted in favor of vacating the injunction, while Justices Thomas, Alito, Gorsuch, and Kavanaugh voted to keep the injunction in place.

What Could This Mean?
By overturning the injunction, it appears as if the Court may have an appetite to rule in favor of upholding the federal government’s sovereign immunity claim should the case reach the Court. This ruling is concerning, however, in the sense that four justices voted in favor of the injunction, which could indicate a major blow to the Supremacy Clause. Allowing Texas to counter the acts of the federal government would upend the Supremacy Clause, as it would essentially allow state governments to override the lawful acts of federal agents. As DHS stated in its application to the Supreme Court, “if accepted, the court’s rationale would leave the United States at the mercy of States that could seek to force the federal government to conform the implementation of federal immigration law to varying state-law regimes.” Such a ruling would deal a blow to other federal agencies as well, since this new precedent would allow state governments to override the federal government in terms of environmental, commerce, and transportation regulations. Oral arguments at the circuit court level will commence on February 7th, 2024.

Is It Too Late To Convince Europeans They Can Trust The U.S. With Their Data?

By CHHS Extern Mercedes Subhani

On January 4th, Microsoft announced that it was upgrading its cloud computing service to let European customers store all their personal data only within the European Union. Microsoft claims this move, which will affect Azure, Microsoft 365, Power Platform, and Dynamics 365, is aimed directly at easing customers’ fears of having their information flow into the U.S., where a federal privacy law still doesn’t exist.

This fear of letting their personal data be stored in the wild west of the U.S. stems from the Edward Snowden revelations that the American government eavesdropped on people’s online data and communications. Since then, the U.S. has been trying to convince the European Commission that EU citizens’ data will be kept safe. The U.S. was finally successful on July 10, 2023, when the EU adopted its adequacy decision for the EU-U.S. Data Privacy Framework (“Framework”). The EU’s decision “has the effect that personal data transfers from controllers and processors in the Union to certified organizations in the United States may take place without the need to obtain any further authorisation.” Despite this transatlantic agreement, Europeans are still not convinced that their data will be kept safe in the U.S., as demonstrated by Austrian privacy activist Max Schrems’ confirmation that his group NOYB will be pursuing a legal challenge.

First, although the EU adopted the decision, there is no certainty that the Framework will survive a challenge before the Court of Justice of the European Union. The Framework is predicted to be invalidated just as its two predecessors were by Max Schrems in Schrems I and Schrems II. Thus, it is very likely that this newly formed EU-U.S. agreement will be invalidated. Second, the U.S. still does not have a federal data privacy law. The level of data privacy rights an American citizen has, if any, depends entirely on which state they live in. The strongest state privacy law in the U.S. is the California Consumer Privacy Act, which is still not as protective as the EU’s General Data Protection Regulation. Therefore, not even in the most protective U.S. state can Europeans enjoy the same privacy safeguards as they do in the European Union. Lastly, when the U.S. did try to pass a federal data privacy law, the American Data Privacy and Protection Act (“ADPPA”), it still did not address the root problem Europeans are concerned with: the U.S. government eavesdropping on people’s online data and communications. The ADPPA targeted only the private sector and exempted the public sector from any privacy constraints.

In the current court of public opinion, Europeans have ruled that they cannot trust the U.S. with their personal data. For right now, they are correct in deciding so. However, as Europe presses forward with data rights and the U.S. public grows more concerned about data privacy, more politicians will be pressured to respond adequately. We have seen this already with the White House mimicking Europe with its Blueprint for an AI Bill of Rights, and with privacy activism organizations pushing 12 states to pass state data privacy laws, with several more states expected to pass their own laws in 2024. Eventually, the U.S. will fully regain Europeans’ trust with their data.

Dawn of a Historic Election Year

By CHHS Extern Dallin Richardson

On August 10th, 2023, President Biden, through the Stafford Act, issued a major disaster declaration in response to Hawaii Governor Joshua Green’s petition for aid from the wildfires that devastated Lahaina. By August 11th, various social media posts asserted that a space-based directed energy weapon started the fire. The video “evidence” behind these claims was debunked as footage filmed in Russia in 2019. However, one wild claim was joined by others, each with an audience willing to believe fanciful anti-government stories. Regrettably, this cyber campaign trespassed beyond the cyber realm to have real-world effects: in a Department of Energy hearing, Hawaii Senator Mazie Hirono shared her concerns over victims who had been duped by online claims that signing FEMA disaster relief papers would also sign over the rights to one’s home or land (timestamp 1:15:40 at the link to the hearing).

From this, we see what may be accomplished when a hostile nation state (Senator Hirono attributed the lies about FEMA to Russia or China) employs insidious cyber efforts to exploit an unplanned emergency. But what about the intentional, planned disruption of future events, foreknowledge of which gives a hostile party months, or even years, to plan? We will have our answer in the year 2024 as the world goes through an unprecedented period of democratic transition. China has already started us off on the wrong foot.

However, we are not helpless. Though beset by disinformation campaigns, the global population may mitigate such insidious efforts through media literacy education. When disinformation clouded public opinion about SARS-CoV-2, the World Health Organization (WHO) called the problem an “infodemic”. The WHO’s recommended cure for the infodemic is building resilience to disinformation, which lines up conceptually with the medically sound aim of inoculating against actual viruses. Content regulation and government surveillance, while they can help fight disinformation, do not meaningfully serve to inoculate the public. They are akin to treating a patient in a sterile environment: though ideal for caring for the ill and ailing, a sterile environment allows only short-term intervention, with no hope of long-term prevention once the patient inevitably returns to a more typical, non-sterile setting. Likewise, national election-safeguarding efforts that neglect media literacy education offer no long-term prevention for our information-ridden society and guarantee no measurable resilience against propagated online falsehoods. Such efforts also ignore public mistrust in government “treatments”.

Unlike other problems facing this country, media literacy education does not appear to be a partisan issue; states guided by staunchly disparate political philosophies, such as Florida and California, have both enacted bills aimed at providing critical education in this regard. This is fortunate, because education policy in the United States is largely a matter left to the states. Various attempts have been made to legislate a federal approach to media and digital literacy, but the closest we have come is the Digital Equity Act, which (as the name suggests) leans heavily into digital equity, which is concerned with information access, rather than digital literacy, which is chiefly aimed at information acuity. Beyond congressional gridlock, the Department of Education does not dictate curricula or standards to state educational departments, and this administrative deference to the states makes the United States’ chances of developing a unified national K-12 literacy curriculum slim.

It is likely up to state policymakers, then, to innovatively legislate and set educational goals as examples for other states to follow. Several states have started stepping up, and we can hope that such efforts will be sufficient to instill proper critical thinking and media consumption skills in children in those states. The work being done in these states is vital to help us face election disinformation. The United States cannot put all hope in the algorithmic excision of online content; rather, our country must lean on good information, media, and digital literacy educational policy, which offers the best chance to “inoculate” people, teaching them how to learn, helping them to develop resilience to disinformation, and encouraging the development of robust information immune systems.

CHHS Assists Talbot County in Revising Its Emergency Operations Plan (EOP)

CHHS was proud to assist Talbot County, Maryland in revising its Emergency Operations Plan (EOP). From the Conduit Street blog from the Maryland Association of Counties:

The County contracted with the University of Maryland Center for Health and Homeland Security (CHHS) to assist with rewriting the Plan, coordinate and facilitate the tabletop exercise, and to conduct a functional exercise in the spring.

In addition to looking at national best practices, federal guidance, State and other local EOPs, the updated Talbot County EOP also integrates what was learned during the COVID-19 pandemic.

If you are interested in any of our emergency management consulting services, please visit our Consulting Services page here.

New CHHS Fall 2023 Newsletter Available with Important Update

CHHS is proud to release the Fall 2023 edition of our newsletter:

CHHS Newsletter Fall 2023 NEW

In this newsletter’s Director’s Message, CHHS Founder and Director Michael Greenberger announces that he will be stepping down at the end of June 2024, after more than 20 years in his current position. CHHS will be celebrating Prof. Greenberger’s leadership and the success of the Center he founded over the coming months.

For all recent editions of our newsletter, check out our newsletter page: https://www.mdchhs.com/media/newsletters/

Big Tech vs. Digital Privacy: Introduction of the Banning Surveillance Advertising Act

By CHHS Extern Alexandra Barczak

U.S. lawmakers have introduced legislation called the Banning Surveillance Advertising Act, designed to prohibit advertisers and advertising facilitators, such as Google and Facebook, from using personal data to create targeted advertisements, with the exception of broad location targeting. It further protects against advertisers using a protected class of information, such as race, gender, and religion, and personal data purchased from data brokers, by prohibiting advertisers from targeting ads based on this information. Enforcement would be through state attorneys general, private lawsuits, and the Federal Trade Commission. However, in the age of Big Tech, is this actually feasible?

Big Tech, a term for the major players in the technology world, usually refers to the "Big Five" that hold the most influence in the market: Amazon, Apple, Facebook/Meta, Google/Alphabet, and Microsoft. While each of the Big Five dominates its own sphere (Facebook in social media, Google in search, Apple in devices like mobile phones and laptops), a common thread runs through them all: they are constantly using our data, whether by asking for it, tracking it themselves, or buying it from another company or data broker. Our online movements are continuously monitored under the guise of better serving users, with typical collection including your name, email, phone number, IP address, the device you are using, when and how you are using it, your location, and more. This data lets these companies predict user behavior: they build a profile from your past activity and use it to anticipate future activity, serving you the content you want to see, showing you relevant ads, and personalizing your experience. Such pervasive collection and tracking is why the practice has come to be called "surveillance advertising."
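
To make the profile-to-ad pipeline described above concrete, here is a purely illustrative sketch. Every field, ad, and scoring rule is invented for this example; real ad platforms use far richer signals (browsing history, purchases, location traces) and learned ranking models rather than simple keyword overlap.

```python
# Hypothetical user profile assembled from collected data.
user_profile = {
    "recent_searches": ["running shoes", "marathon training"],
    "location": "Baltimore",
    "device": "mobile",
}

# Hypothetical ad inventory, each ad tagged with targeting keywords.
ad_inventory = [
    {"id": "shoe_ad", "keywords": {"running", "shoes", "fitness"}},
    {"id": "bank_ad", "keywords": {"mortgage", "refinance"}},
]

def match_ads(profile, inventory):
    """Rank ads by overlap between the profile's search terms and ad keywords."""
    terms = {word for search in profile["recent_searches"] for word in search.split()}
    scored = [(len(ad["keywords"] & terms), ad["id"]) for ad in inventory]
    # Highest-overlap ads first; ads with no overlap are dropped entirely.
    return [ad_id for score, ad_id in sorted(scored, reverse=True) if score > 0]
```

Here `match_ads(user_profile, ad_inventory)` surfaces only the shoe ad, because the profile's search history overlaps its keywords; the point is simply that the more of your behavior the profile captures, the more precisely ads can be aimed at you.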

To many, this may not be a threatening prospect, but to others, online tracking is highly concerning. As the reasoning behind the Banning Surveillance Advertising Act points out, “Personal data is abused to target ads with major societal harms, including voter suppression, racist housing discrimination, sexist employment exclusions, political manipulation, and threats to national security. Surveillance advertising also invades privacy and threaten civil liberties, such as by tracking which place of worship individuals attend and whether they participated in protests and then selling this information to advertisers.” It is even more troubling that this sacrifice in personal privacy and security is done simply for the financial gain of these already profitable giants.

The Banning Surveillance Advertising Act is notably introduced exclusively by Democrats (Representatives Anna G. Eshoo (D-CA) and Jan Schakowsky (D-IL) and Senators Ron Wyden (D-OR) and Cory Booker (D-NJ)) and is said to be supported by leading public interest organizations, academics, and companies with privacy-preserving business models. Some of those cited in support include the Center for Digital Democracy, Accountable Tech, Fight for the Future, Anti-Defamation League, and Ekō. While there seems to be strength in support, there is likely equal, if not greater, strength in opposition. The Big Tech companies have created monopolies in their respective fields, with use of their products and systems becoming a necessity in everyday life. This power has raised concern among the general population and the government about what exactly Big Tech can accomplish. Such dominant digital infrastructures have the capability to influence society, economics, national security, and politics, just as Big Oil, Big Banks, and Big Pharma did in the past and arguably still do. Thus, it is entirely plausible that the resources of Big Tech will be used against this bill. It would not be the first time: in 2022, lobbyists on behalf of Amazon, Apple, Meta, and Google parent company Alphabet spent millions opposing two bipartisan antitrust bills targeting Big Tech, the Open App Markets Act and the American Innovation and Choice Online Act. Though the response to a bill about advertising may not be as forceful as the response to antitrust regulation, Big Tech would still likely throw its resources into advocating against such legislation. Money talks, and Big Tech has money to spare, money that will be directed at the individuals and organizations lobbying to block anything that interferes with their business models, all of which count targeted advertising as a source of revenue.

While the introduction of this bill could be considered a step in the right direction for preserving our online privacy, it also serves as a reminder that digital privacy, though a hot topic, is becoming increasingly politicized with little concrete movement at the federal level. Just note how long it took for a bipartisan federal privacy law, the American Data Privacy and Protection Act, to be introduced, and even that bill did not pass. This is already the second attempt at introducing the Banning Surveillance Advertising Act. In January 2022, Congresswoman Eshoo (D-CA), Congresswoman Schakowsky (D-IL), and Senator Booker (D-NJ) introduced a similar bill with the same title, which was unsuccessful. In both the House and Senate, the bill never got past the introduction stage: the House referred it to the Subcommittee on Consumer Protection and Commerce with no further movement, and the Senate read it twice and referred it to the Committee on Commerce, Science, and Transportation with no further movement.

With the power Big Tech holds across society and politics, the bill, which threatens a revenue stream for these organizations, will likely face strong resistance, backed with deep pockets. To realistically have a chance at gaining any traction, a bipartisan push would have to be made with representatives and organizations from all political parties making this an issue to care about. It, therefore, seems like there will be a long road ahead for the Banning Surveillance Advertising Act.

The Benefits and Risks of AI in 911 Call Centers

by CHHS Extern Katie Mandarano

Across the United States, 911 call centers are facing a workforce crisis. There are more than 6,000 911 call centers in the U.S., employing some 35,543 911 operators. According to a survey by the International Academies of Emergency Dispatch (IAED), more than 100 call centers reported that at least half of their positions were unfilled in 2022. The survey also found that almost 4,000 people left their jobs across the call centers surveyed, leaving about 1 in 4 jobs at these centers vacant.

The reasons for such a labor shortage can likely be attributed to a combination of factors:

  • These jobs are hard to fill: applicants must undergo rigorous background checks and screenings, and once hired, dispatchers face a lengthy training process ranging anywhere from three to eighteen months before they are allowed to take calls without supervision.
  • Dispatchers have to work long hours and are often forced to work overtime hours because these centers are so short staffed.
  • Despite the long hours and high stress, the U.S. Bureau of Labor Statistics reported the median annual pay for public safety telecommunicators as only $46,900 in 2022.
  • These jobs are incredibly high stress, with studies showing that the repeated exposure to 911 calls can lead to the development of Post-Traumatic Stress Disorder.
  • There has been an overall increase in 911 calls. On average, an estimated 240 million 911 calls are made in the U.S. each year. Moreover, developments in technology have introduced a variety of automatic emergency features that have led to an increase in 911 misdials, such as the Apple Watch feature that automatically places a 911 call when it detects a vehicle crash.

Due to the severe labor shortage facing 911 call centers, some state and local governments have turned to artificial intelligence (AI) as a potential solution to assist 911 dispatchers, or in some cases, replace the presence of a human dispatcher. AI is essentially a machine’s ability to perform tasks that typically require human intelligence. Below are some examples of how AI could be used in 911 dispatching:

  • AI could be used to enhance the audio quality of 911 calls, allowing dispatchers to better understand callers and respond quicker to callers’ needs, in turn allowing dispatchers to field more 911 calls.
  • AI can triage incoming 911 calls based on the call's urgency, reducing the number of non-emergency calls and ensuring calls are routed to the appropriate dispatchers and first responders. This would free up human dispatchers for the most pressing 911 calls. Some states have gone further, with AI not just triaging incoming calls but actually answering and gathering information from non-emergency calls, replacing the need for a human dispatcher.
  • AI can create real-time maps of emergencies, which can be shared with other emergency services responding to the scene. Because dispatchers typically stay on the phone with 911 callers until first responders arrive, improving the speed at which first responders reach an emergency will, in turn, allow human dispatchers to assist more callers.
  • AI can provide real-time language translation for non-English speakers, quickening a dispatcher's response time and reducing the need for translators and bilingual dispatchers.
  • AI can integrate 911 dispatching with other technology, such as Internet of Things devices and smart city infrastructure, to provide real-time information about the conditions surrounding an emergency. This would similarly result in quicker response times, freeing up dispatchers to field more calls.
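
As a purely illustrative sketch of the triage idea above, consider a rule-based urgency score over a call transcript. The keywords, weights, and thresholds here are invented for illustration; real triage systems rely on trained speech and language models and far richer signals than keyword matching.

```python
# Hypothetical urgency keywords and weights (illustrative only).
URGENCY_KEYWORDS = {
    "not breathing": 5,
    "unconscious": 5,
    "chest pain": 4,
    "fire": 3,
    "accident": 3,
    "noise complaint": -2,
}

def triage(transcript: str) -> str:
    """Assign a routing tier by summing urgency weights of matched keywords."""
    text = transcript.lower()
    score = sum(weight for kw, weight in URGENCY_KEYWORDS.items() if kw in text)
    if score >= 4:
        return "emergency"       # route to a human dispatcher immediately
    if score >= 1:
        return "priority"        # human dispatcher, standard queue
    return "non-emergency"       # candidate for automated handling
```

For example, a transcript mentioning an unconscious caller who is not breathing scores highest and is routed straight to a human dispatcher, while a noise complaint falls into the tier a state might hand off to an automated system.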

It is clear from the examples above that the potential benefits of AI in 911 dispatching are significant. But using AI in 911 dispatching also poses several unique challenges, one of which is maintaining 911 callers' data privacy and security.

Congress has recognized the need for the country to transition to a Next Generation 911 (NG911) system, which is expected to enable transmission of photos, videos, health records, cell-site location information, and other data to support first responders and emergency personnel during a 911 call. Accordingly, an unprecedented amount of data could be used to train AI systems in 911 call centers, and this practice would be encouraged, as the more data an AI system has access to, generally the more accurate its output. Additionally, 911 calls typically contain sensitive personal information and AI would likely be used to de-anonymize this personal data where necessary. Thus, call centers which use AI systems become an increasingly attractive target for cyberattacks and data leaks.

In addition to data privacy and security concerns, implementing AI in 911 call centers creates data accuracy concerns. Underrepresentation of certain groups in data sets used to train AI can result in inaccurate outcomes and harmful decisions. For example, researchers have found that smart speakers often fail to understand female or minority voices because the algorithms are built from databases containing primarily white male voices. In an emergency setting, one can see how this barrier could have serious implications, such as determinative delays in emergency services or inefficient assistance for non-white male 911 callers.
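
One way to surface this kind of disparity is simply to break a system's accuracy out by speaker group before deployment. The audit below runs on synthetic results invented for illustration, not data from any real speech system:

```python
# Toy per-group accuracy audit (synthetic results; illustrative only).
# Each record says whether the speech system understood a caller correctly.
results = [
    {"group": "group_a", "correct": True},
    {"group": "group_a", "correct": True},
    {"group": "group_a", "correct": True},
    {"group": "group_a", "correct": False},
    {"group": "group_b", "correct": True},
    {"group": "group_b", "correct": False},
    {"group": "group_b", "correct": False},
    {"group": "group_b", "correct": False},
]

def accuracy_by_group(records):
    """Return the fraction of correct outcomes for each speaker group."""
    totals, hits = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + int(r["correct"])
    return {g: hits[g] / totals[g] for g in totals}
```

In this toy data, one group is understood 75% of the time and the other only 25%; a gap like that in a real audit would flag an underrepresentation problem before the system ever takes a live 911 call.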

The unique risks discussed above call for government protections and safeguards. Accordingly, state governments using this technology should take care to implement privacy and cybersecurity standards to ensure this information is not subject to misuse, and that the AI is built on accurate, fair, and representative data sets. Some potential safeguards include:

  • Adopting comprehensive data minimization rules, such as a deletion requirement to ensure that call centers do not store precise location data for longer than necessary.
  • Requiring cybersecurity maturity assessments, ensuring that these call centers have procedures in place to strengthen security program efforts.
  • Implementing quality standards for data sets used to train AI to ensure datasets are broad and inclusive.
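
The first safeguard, a deletion requirement for precise location data, might look something like the sketch below. The 30-day window, record fields, and purge routine are all hypothetical; an actual retention limit would be set by regulation or agency policy.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional, Tuple

# Hypothetical retention window for precise caller location.
LOCATION_RETENTION = timedelta(days=30)

@dataclass
class CallRecord:
    call_id: str
    received_at: datetime
    precise_location: Optional[Tuple[float, float]]  # (latitude, longitude)

def purge_expired_locations(records, now):
    """Delete precise location data older than the retention window.

    Returns the number of records purged.
    """
    purged = 0
    for rec in records:
        if rec.precise_location is not None and now - rec.received_at > LOCATION_RETENTION:
            rec.precise_location = None
            purged += 1
    return purged
```

Run on a schedule, a routine like this enforces the minimization rule mechanically: expired location data simply is not there to leak or to feed into model training.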

While AI has the potential to revolutionize 911 dispatching, it is important to consider the risks to data privacy, accuracy, and security when implementing these technologies. With a thoughtful and regulated approach, AI in 911 call centers can provide much needed relief to the 911 dispatcher workforce in this time of need.