Big Tech vs. Digital Privacy: Introduction of the Banning Surveillance Advertising Act

By CHHS Extern Alexandra Barczak

U.S. lawmakers have introduced legislation called the Banning Surveillance Advertising Act, designed to prohibit advertisers and advertising facilitators, such as Google and Facebook, from using personal data to create targeted advertisements, with the exception of broad location targeting. It further prohibits advertisers from targeting ads based on protected class information, such as race, gender, and religion, or on personal data purchased from data brokers. Enforcement would be through state attorneys general, private lawsuits, and the Federal Trade Commission. However, in the age of Big Tech, is this actually feasible?

Big Tech, a term for the major players in the technology industry, usually refers to the “Big Five” that hold the most influence in the market: Amazon, Apple, Facebook/Meta, Google/Alphabet, and Microsoft. While each of the Big Five dominates its own sphere, such as Facebook in social media, Google in search, and Apple in communication devices like mobile phones and laptops, there is a common thread among them all: they are constantly using our data, whether by asking for it, tracking it on their own, or buying it from another company or data broker. Our online movements are continuously monitored under the guise of better serving users, with typical collection including information such as your name, email, phone number, IP address, the device you are using, when you are using it, what you are doing while on it, your location, and more. This data allows these companies to better predict user behavior: they build a profile from your past movements to anticipate future ones, serving you the content you want to see, showing you relevant ads, personalizing your experience, and so on. Such pervasive collection and tracking is what has earned the label “surveillance.”

To many, this may not be a threatening prospect, but to others, online tracking is highly concerning. As the reasoning behind the Banning Surveillance Advertising Act points out, “Personal data is abused to target ads with major societal harms, including voter suppression, racist housing discrimination, sexist employment exclusions, political manipulation, and threats to national security. Surveillance advertising also invades privacy and threaten civil liberties, such as by tracking which place of worship individuals attend and whether they participated in protests and then selling this information to advertisers.” It is even more troubling that this sacrifice in personal privacy and security is done simply for the financial gain of these already profitable giants.

The Banning Surveillance Advertising Act was notably introduced exclusively by Democrats (Representatives Anna G. Eshoo (D-CA) and Jan Schakowsky (D-IL) and Senators Ron Wyden (D-OR) and Cory Booker (D-NJ)) and is said to be supported by leading public interest organizations, academics, and companies with privacy-preserving business models. Some of those cited in support include the Center for Digital Democracy, Accountable Tech, Fight for the Future, the Anti-Defamation League, and Ekō. While there seems to be strength in support, there is likely equal, if not greater, strength in opposition. Big Tech companies have created monopolies in their respective fields, with use of their products and systems becoming a necessity in everyday life. This power has raised concern among the general population and the government about what exactly Big Tech can accomplish. Such dominant digital infrastructures have the capability to influence societies, economies, national security, and politics, just as Big Oil, Big Banks, and Big Pharma did in the past and arguably still do. Thus, it is entirely plausible that the resources of Big Tech will be deployed against this bill. It would not be the first time. In 2022, lobbyists on behalf of Amazon, Apple, Meta, and Google parent company Alphabet spent millions opposing two bipartisan antitrust bills targeting Big Tech, the Open App Markets Act and the American Innovation and Choice Online Act. Though the response to a bill about advertising may not be as forceful as the response to antitrust regulation, Big Tech would still likely direct its resources toward advocating against such legislation. Money talks, and Big Tech has money to spare: money that will be aimed at the individuals and organizations that will lobby to block anything interfering with business models which all count targeted advertising as a source of revenue.

While the introduction of this bill could be considered a step in the right direction for preserving our online privacy, it also serves as a reminder that digital privacy, though a hot topic, is becoming increasingly politicized with little concrete movement at the federal level. Just note how long it took for a bipartisan federal privacy bill to be introduced, and that bill, the American Data Privacy and Protection Act, still did not pass. This is already the second attempt at the Banning Surveillance Advertising Act. In January 2022, Congresswoman Eshoo (D-CA), Congresswoman Schakowsky (D-IL), and Senator Booker (D-NJ) introduced a similar bill with the same title, which was unsuccessful. In both the House and the Senate, the bill never got past the introduction stage: the House referred it to the Subcommittee on Consumer Protection and Commerce with no further movement, and the Senate read it twice and referred it to the Committee on Commerce, Science, and Transportation with no further movement.

With the power Big Tech holds across society and politics, the bill, which threatens a revenue stream for these organizations, will likely face strong resistance backed by deep pockets. To realistically have a chance at gaining traction, a bipartisan push would have to be made, with representatives and organizations from across the political spectrum treating this as an issue worth caring about. There is, therefore, likely a long road ahead for the Banning Surveillance Advertising Act.

The Benefits and Risks of AI in 911 Call Centers

by CHHS Extern Katie Mandarano

Across the United States, 911 call centers are facing a workforce crisis. There are more than 6,000 call centers nationwide, with over 35,543 911 operators currently employed. According to a survey by the International Academies of Emergency Dispatch (IAED), more than 100 call centers reported that at least half of their positions were unfilled in 2022, and almost 4,000 people left their jobs across the call centers surveyed. Overall, about 1 in 4 jobs at these call centers remains vacant.

The reasons for such a labor shortage can likely be attributed to a combination of factors:

  • These jobs are hard to fill: applicants must undergo rigorous background checks and screenings, and once hired, dispatchers face a lengthy training process, ranging anywhere from three to eighteen months, before they are allowed to take calls without supervision.
  • Dispatchers work long hours and are often forced into overtime because these centers are so short-staffed.
  • Despite the long hours and high stress, the U.S. Bureau of Labor Statistics reported the median annual pay for public safety telecommunicators as only $46,900 in 2022.
  • These jobs are incredibly high-stress, with studies showing that repeated exposure to 911 calls can lead to the development of Post-Traumatic Stress Disorder.
  • There has been an overall increase in 911 calls. On average, an estimated 240 million 911 calls are made in the U.S. each year. Moreover, with developments in technology, a variety of emergency features have led to an increase in 911 misdials, such as the Apple Watch feature that automatically places a 911 call if it detects a vehicle crash.

Due to the severe labor shortage facing 911 call centers, some state and local governments have turned to artificial intelligence (AI) as a potential solution to assist 911 dispatchers, or in some cases, replace the presence of a human dispatcher. AI is essentially a machine’s ability to perform tasks that typically require human intelligence. Below are some examples of how AI could be used in 911 dispatching:

  • AI could be used to enhance the audio quality of 911 calls, allowing dispatchers to better understand callers and respond more quickly to their needs, in turn allowing dispatchers to field more 911 calls.
  • AI can triage incoming 911 calls based on a call’s urgency, reducing the number of non-emergency calls and ensuring calls are routed to the appropriate dispatchers and first responders (a minimal illustrative sketch follows this list). This would free up human dispatchers for the most pressing 911 calls. Moreover, some states do not just have AI triaging incoming calls, but actually answering and gathering information from non-emergency calls, replacing the need for a human dispatcher.
  • AI can create real-time maps of emergencies, which can be shared with other emergency services responding to the scene. Because dispatchers typically stay on the phone with 911 callers until first responders arrive, improving the speed at which first responders reach an emergency will, in turn, allow human dispatchers to assist more callers.
  • AI can provide real-time language translation for non-English speakers, quickening a dispatcher’s response time as well as reducing the need for translators and for dispatchers who speak languages other than English.
  • AI can integrate 911 dispatching with other technology, such as Internet of Things devices and smart city infrastructure, to provide real-time information about the conditions surrounding an emergency. This would similarly result in quicker response times, freeing up dispatchers to field more calls.
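
To make the triage idea above concrete, here is a minimal sketch of how an automated system might route transcribed calls. It is illustrative only: the keywords, categories, and routing labels are assumptions invented for this example, not a description of any deployed dispatch product, and real systems rely on trained models and far richer signals than keywords.

    # Minimal illustrative sketch of rule-based 911 call triage.
    # All keywords and routing labels are hypothetical assumptions;
    # deployed systems use trained models and richer signals.
    URGENT_KEYWORDS = {"fire", "gun", "not breathing", "unconscious", "crash"}
    NON_EMERGENCY_KEYWORDS = {"noise complaint", "parking", "power outage"}

    def triage(transcript: str) -> str:
        """Return a routing decision for a transcribed call."""
        text = transcript.lower()
        if any(kw in text for kw in URGENT_KEYWORDS):
            return "human_dispatcher_priority"   # most pressing calls first
        if any(kw in text for kw in NON_EMERGENCY_KEYWORDS):
            return "automated_intake"            # frees a human dispatcher
        return "human_dispatcher"                # default to a human when unsure

    print(triage("There is a crash and the driver is unconscious"))  # human_dispatcher_priority
    print(triage("I'd like to report a parking violation"))          # automated_intake

The key design choice here, defaulting to a human dispatcher whenever the system is unsure, reflects the safety posture any such tool would need in an emergency setting.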

It is clear from the few examples listed above that the potential benefits of AI in 911 dispatching are significant. But using AI in 911 dispatching also poses several unique challenges. One of these challenges is maintaining 911 callers’ data privacy and security.

Congress has recognized the need for the country to transition to a Next Generation 911 (NG911) system, which is expected to enable transmission of photos, videos, health records, cell-site location information, and other data to support first responders and emergency personnel during a 911 call. Accordingly, an unprecedented amount of data could be used to train AI systems in 911 call centers, and this practice would be encouraged, as the more data an AI system has access to, generally the more accurate its output. Additionally, 911 calls typically contain sensitive personal information, and AI would likely be used to de-anonymize this personal data where necessary. Thus, call centers that use AI systems become an increasingly attractive target for cyberattacks and data leaks.

In addition to data privacy and security concerns, implementing AI in 911 call centers creates data accuracy concerns. Underrepresentation of certain groups in the data sets used to train AI can result in inaccurate outcomes and harmful decisions. For example, researchers have found that smart speakers often fail to understand female or minority voices because the algorithms are built from databases containing primarily white male voices. In an emergency setting, one can see how this barrier could have serious implications, such as critical delays in emergency services or ineffective assistance for callers who are not white men.
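
As a sketch of how this risk could be checked before deployment, the following compares a speech recognizer’s accuracy across speaker groups on a labeled test set and flags large gaps. The records, group labels, and the ten-point threshold are hypothetical assumptions for illustration, not real benchmark data.

    # Hypothetical sketch: surface demographic accuracy gaps in a
    # speech recognition test set before deployment. The records and
    # the 10-percentage-point threshold are illustrative assumptions.
    from collections import defaultdict

    # (speaker_group, transcription_correct) pairs from a labeled test set
    results = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        correct[group] += ok

    accuracy = {g: correct[g] / totals[g] for g in totals}
    best = max(accuracy.values())
    for group, acc in sorted(accuracy.items()):
        gap = best - acc
        if gap > 0.10:  # flag gaps larger than 10 percentage points
            print(f"WARNING: {group} accuracy {acc:.0%} trails the best group by {gap:.0%}")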

The unique risks discussed above require government protection and safeguards. Accordingly, state governments using this technology should take care to implement privacy and cybersecurity standards to ensure this information is not subject to misuse, and that the AI is built using accurate, fair, and representative data sets. Some potential assurance measures include:

  • Adopting comprehensive data minimization rules, such as a deletion requirement to ensure that call centers do not store precise location data for longer than necessary (see the sketch after this list).
  • Requiring cybersecurity maturity assessments, ensuring that these call centers have procedures in place to strengthen security program efforts.
  • Implementing quality standards for data sets used to train AI to ensure datasets are broad and inclusive.
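
As a sketch of what the data minimization bullet above could mean in practice, the following purges precise caller location once an assumed 30-day retention window expires. The field names and the window itself are hypothetical assumptions; the actual retention period would be set by policy or regulation.

    # Illustrative sketch of a data minimization rule: drop precise
    # caller location once a retention window expires, while keeping
    # the rest of the call record. The 30-day window and field names
    # are hypothetical assumptions.
    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=30)  # assumed policy window

    def purge_stale_locations(call_records: list[dict]) -> int:
        """Blank precise location on records older than RETENTION; return count purged."""
        cutoff = datetime.now(timezone.utc) - RETENTION
        purged = 0
        for record in call_records:
            if record["received_at"] < cutoff and record.get("precise_location"):
                record["precise_location"] = None  # keep the call log, drop the location
                purged += 1
        return purged

A deletion job like this, run on a schedule, is one concrete way to make “no longer than necessary” auditable rather than aspirational.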

While AI has the potential to revolutionize 911 dispatching, it is important to consider the risks to data privacy, accuracy, and security when implementing these technologies. With a thoughtful and regulated approach, AI in 911 call centers can provide much needed relief to the 911 dispatcher workforce in this time of need.

Is the Threat of Lawsuits Against Powerful AI Tools Like ChatGPT a Good Thing?

By CHHS Extern Daniela Eppler

Since the launch of ChatGPT in November 2022, public interest in artificial intelligence (AI) has been heightened. There has been excitement surrounding the capabilities of powerful AI tools and their ability to contribute to innovations across industries, such as enhancing telemedicine, improving customer service, and predicting passenger demand to optimize public transportation schedules. This excitement has been coupled with concerns about potential threats to cybersecurity and national security, particularly related to reliance on AI-generated information and data privacy risks. The undeniable potential of ChatGPT and AI tools like it in fields like scientific research, business, and intelligence analysis has sparked interest around the world. Although the power and novelty of these tools may be overwhelming and alarming for some, they are the future and hold the key to unlocking previously unobtainable innovations.

Despite this excitement, a California law firm recently filed a class-action lawsuit against OpenAI over its use of people’s data to train the chatbot ChatGPT. The lawsuit claims that OpenAI violated the rights of millions of internet users by using their publicly available internet data without consent to create the bot and to generate large profits. Specifically, while OpenAI is projected to reach $1 billion in revenue by the end of 2023, the lawsuit claims that ChatGPT relies on the consumption of “billions of words never signed off on” by the owners. Companies like Google, Facebook, and Microsoft have taken similar approaches to train their own AI models, but OpenAI is the only company currently facing legal action. Larger technology companies, like Facebook, have faced recent lawsuits over deceiving users about their ability to control the privacy of personal information shared with the company. However, the question remains whether people should be concerned about AI companies using publicly available data to develop powerful tools and generate large profits. AI developers have argued that their use of data from the internet should fall under the “fair use” exception in copyright law, and the class action lawsuit will largely center on whether that use meets the requirements for “fair use.”

As we await the outcome of this lawsuit, it is important to consider the implications of restricting AI developers’ access to publicly available internet data. Despite the disruptive nature of tools like ChatGPT, it is difficult to deny the pathways for advancement they have unlocked across industries: the acceleration of drug discovery and early disease detection in healthcare, improved fraud detection and faster, more accurate trade execution in finance, and supply chain optimization and product defect detection in manufacturing, to name a few. AI tools directly rely on ingesting huge volumes of complex data, and without access to the volume and diversity of publicly available internet data, their capabilities would likely be curtailed. Although copyright and privacy issues are important, it is essential to consider the implications of stifling the development of AI tools like ChatGPT and impeding the development of similar tools in the future.

Extreme Heat Should be Included as a Major Disaster Under the Stafford Act

By CHHS Extern Brittany Hunsaker 

We hear about extreme heat, particularly in Arizona, every summer. In Phoenix, the average high temperature in July is 106 degrees. The term “extreme heat” refers to a period of high heat in which temperatures reach above 90 degrees for at least two days. Such temperatures can lead to heat disorders and can especially harm older adults, young children, and those with underlying health concerns. The heat can be far worse in urban areas, where cities face the “urban heat island effect”: heat is stored in asphalt and concrete and continues to drive up temperatures throughout the night.

Arizona officials want to add extreme heat to the Federal Emergency Management Agency’s (FEMA) declared disasters list. The list currently includes sixteen types of declared disasters, such as hurricanes, typhoons, tropical storms, and fires. Adding extreme heat to the list would allow a national emergency to be declared, unlocking federal assistance. The funding could provide resources such as pop-up shelters, cooling centers, and additional outreach to vulnerable residents, preventing avoidable serious harm and death. The addition of extreme heat to the list of natural disasters was supported unanimously at the U.S. Conference of Mayors earlier this month. As Phoenix Mayor Kate Gallego stated, “heat causes more deaths each year than most other natural hazards combined.” In addition to an estimated 702 heat deaths each year, extreme heat is estimated to cause 67,512 emergency visits and 9,235 hospitalizations annually. Mayor Gallego addressed this issue during her annual state of the city address on April 12, 2023.

Two months later, on June 5, 2023, Representative Ruben Gallego (AZ-03) introduced legislation to amend FEMA’s list of eligible disasters to declare extreme heat a major disaster. The bill, known as the Extreme Heat Emergency Act, would take effect as early as January 2024, if passed. FEMA spokesperson David Passey stated that the assistance would become available once the need exceeded what state and local resources could handle. The bill, which is still in the early stages of the legislative process, has been referred to the House Transportation and Infrastructure Subcommittee on Economic Development, Public Buildings, and Emergency Management.

This is not the first time Arizona officials have introduced legislation to combat extreme heat. On April 28, 2023, Representative Gallego, along with Senator Sherrod Brown (D-OH) and Representative Bonnie Watson Coleman (NJ-12), introduced the Excess Urban Heat Mitigation Act of 2023, which would create a grant program through the U.S. Department of Housing and Urban Development to provide funding that addresses excess urban heat and heat islands. It would provide $30 million per year between 2023 and 2030 to curb the effects of excess heat through cool pavements, cool roofs, bus stop covers, cooling centers, and local heat mitigation education efforts. The bill has been referred to the Committee on Banking, Housing, and Urban Affairs. Rep. Gallego stated, “In urban areas, the effects of these rising temperatures is compounded by a lack of shade and miles of heat-absorbing concrete. And too often, it is our lower-income communities that are disproportionately impacted by this extreme urban heat. That is why I am proud to introduce this bill to address this deadly issue, keep Phoenix cooler, and ensure the hardest hit communities are prioritized.”

With temperatures continuing to increase, federal support for areas like Phoenix is vital to protect individuals from the catastrophic effects of extreme heat. And if governments are able to allocate more funding toward mitigating or preventing the dangers of extreme heat, less will be needed to fund relief and recovery efforts.

Ransomware Remains a Top Global Economic Threat

By CHHS Extern Barbara Key

In the 2023 National Cybersecurity Strategy, the White House stated, “Together with our allies and partners, the U.S. will disrupt and dismantle threat actors by addressing the ransomware threat through a comprehensive Federal approach, in step with our international partners.” To that end, a recent Joint Advisory by the U.S. Cybersecurity and Infrastructure Security Agency and its U.S. and international partners highlights the threat posed by ransomware threat actors using LockBit, which functions as a Ransomware-as-a-Service model in which affiliates are recruited to conduct ransomware attacks using LockBit ransomware tools and infrastructure. Although ransomware impacts all sectors, the FBI warned that the federal government is particularly concerned about its impact on government and other critical infrastructure networks because these attacks can delay a police or fire department’s response to an emergency or prevent a hospital from accessing lifesaving equipment.

In addition to the impacts of ransomware on government and other critical infrastructure, Verizon recently released its 16th annual Data Breach Investigations Report (2023 DBIR), which highlighted the soaring costs of ransomware, with 95% of incidents that experienced a loss costing between $1 and $2.25 million.

Despite these warnings, in late May, Clop, a Russian ransomware gang, executed a sprawling hacking campaign targeting major U.S. universities and state governments, giving victims until June 14 to discuss a ransom before the gang would start publishing data from organizations it claims to have hacked. Furthermore, CNN reported that on June 15 several U.S. federal government agencies were hit in a global cyberattack by the same Russian criminals. While no ransom demands have been made of federal agencies, this hacking campaign, which exfiltrates sensitive employee information, mounts pressure on federal officials who pledged to put a dent in the scourge of ransomware attacks that have crippled schools, hospitals, and local governments across the U.S.

With damage related to cybercrime projected to hit $10 trillion annually by 2025, the White House declared, in a June 27 memorandum to the heads of executive departments and agencies, “Ransomware is a threat to national security, public safety, and economic prosperity.” The Administration also restated its commitment to mounting disruption campaigns, which use tools of national power to make malicious actors incapable of threatening U.S. security, safety, and the economy, and other efforts so targeted that they render ransomware no longer profitable. The stakes, however, extend beyond profitability. The 2018 Nuclear Posture Review claimed that an enemy cyberattack on U.S. nuclear command, control, and communications (NC3) facilities would constitute a “non-nuclear strategic attack” of sufficient magnitude to justify the use of nuclear weapons in response. On that basis, according to the Arms Control Association, new hybrid ransomware attacks such as RedEnergy, which blends stealthy data theft with encryption designed to cause extensive harm and establish complete control over its targets, “could lead to a major conflict and possibly nuclear war.”

Could Canadian Wildfire Smoke on the East Coast Spark More Legislation to Combat Climate Change?

By CHHS Extern Daniela Eppler

This summer, the East Coast has experienced two periods of heavy smoke originating from wildfires in the Quebec region of Canada. The smoke has swept across large cities like Washington D.C., New York, and Philadelphia, leading to dangerously poor air quality, with unhealthy air being reported everywhere from Minnesota and Indiana through the Mid-Atlantic and Southern regions of the United States. During this time, nearly 300 air quality monitoring stations reported all-time high air pollution figures. Notably, in early June, New York City reached a peak value of 460 on the air quality index, a hazardous level that poses significant risk, particularly for vulnerable populations such as children, senior citizens, pregnant women, and people with heart or respiratory issues, though the detrimental health impacts extend to all segments of the population. While these types of wildfires can occur naturally, climate change can exacerbate their frequency, duration, and size. Although the West Coast is no stranger to wildfires and their growing intensity over recent years, this is the first time the East Coast is experiencing the effects of climate change in this manner.

Over the past few years, the West Coast has experienced some of its most destructive wildfire seasons, posing serious health risks to the public. In 2020, California recorded its largest fire season ever, and the pollution from these wildfires likely offset decades of progress in improving air quality. More than half of the counties in California experienced their worst air pollution since satellite measurements began in 1998. Exposure to poor air quality can have serious long-term consequences for people’s health: in certain counties, had the particulate concentration from the 2020 wildfires been sustained, the average life span of residents would have been shortened by 1.7 years. Fine particulate matter released into the air by wildfires is a significant threat to public health because it can get deep into the lungs and a person’s bloodstream and cause heart and respiratory problems. Although the Clean Air Act has helped reduce fine particulate pollution by 66.9% since 1970, wildfire smoke remains responsible for about half of the fine particle pollution created on the West Coast. As wildfire seasons lengthen and become more extreme in the Western United States, the East Coast is now facing some of the devastating effects of climate change driven wildfires for the first time.

Increased emissions of greenhouse gases trap excess heat in the atmosphere and cause the climate to warm. A warmer climate leads to more frequent wildfires because it dries out vegetation, providing more fuel for fires. Despite the historic Inflation Reduction Act of 2022, which committed $369 billion to climate and clean energy investments to reduce pollutants like greenhouse gases, there is more work to be done. One of the biggest challenges is tightening regulations on the oil and gas industries within states, as they are among the biggest polluters in the nation. Power plants that use coal and/or gas contribute significantly to greenhouse gas emissions, accounting for ninety percent of the top 50 polluters and emitting twenty-seven percent of all greenhouse gases produced from electricity nationwide.

As more constituents begin to feel the effects of climate change on the East Coast and the disruptive nature of increasingly frequent natural events like wildfires, the extent to which Congress will experience heightened pressure to pass additional legislation to combat climate change remains uncertain. However, although the East Coast has not experienced the same magnitude of consequences from wildfires that the West Coast has in recent years, people’s lives have been disrupted by the recent periods of smoke. A natural consequence of that disruption is that people are paying more attention to wildfires, what is causing them, and potentially, what can be done so that they do not continue to disrupt our summer plans. The duration of people’s interest in wildfires is difficult to determine. However, if frequent periods of poor air quality caused by wildfire smoke persist, it is likely that wildfires will continue to be a topic of interest and conversation for the average East Coast resident, increasing awareness around climate change issues and the impacts that will be felt by this generation.

Should The U.S. Ban TikTok? Shifting Implications In Today’s Quest For National Security And Digital Privacy.

By CHHS Extern Peter Scheffel

National security and privacy concerns have grown with the advent of the internet and the subsequent shift to online-embedded life. Today, a functioning member of society conducts arguably all of life’s necessary connections digitally, through hundreds of miles of fiber-optic cable and continent-spanning servers. To the average U.S. citizen, finances, education, entertainment, consumer habits, and much more are closely linked with the digital world.

Recently, the social media app TikTok has been under discussion in Congress due to privacy and national security concerns directed at TikTok’s connections to China and the geopolitical tensions currently escalating between the U.S. and China. TikTok is owned by ByteDance, a Chinese company based in Beijing, and allegations assert that TikTok and ByteDance are functionally the same company, sharing chat applications, data analysis mechanisms, and managerial contact. Given the growth in geopolitical tension between the U.S. and China, questions have been raised as to how much data TikTok collects and to what extent Chinese authorities have channels through which to access TikTok’s U.S. data. While such privacy and national security concerns have merit, especially considering the lack of transparency around what TikTok data China can access and the risk that China could be manipulating the algorithm for U.S. users, an outright ban of TikTok in the U.S. would be a mistake.

Given the speed with which legislation to potentially ban TikTok has been drawn up, what concerns are being discussed? Privacy and national security concerns grew further in early March of 2023, after Senator Josh Hawley sent a letter to Treasury Secretary Janet Yellen seeking review of whistleblower allegations that had been brought to him. These allegations included Chinese Communist Party (CCP) members being able to toggle between U.S. and Chinese TikTok data with ease, despite the supposed separation between U.S. and Chinese TikTok. This includes alleged access to U.S. citizen user data and, most concerning, personal location data. Other allegations detailed China-based employees being able to access U.S. data with simple managerial approval, and that TikTok maintains near-constant contact with ByteDance, its China-based parent company. Other possible abuses the CCP could engage in using U.S. TikTok user data include increased cyberespionage and hacking efforts built on the accessed data, especially given the data’s potential to benefit social engineering attacks (cyberattacks chiefly built around how people think and behave, manipulating human error to gain access), which are often remarkably successful when prior intelligence has been gathered on the target. There is also the risk of psychological manipulation via algorithmic bias: TikTok has not shared how its algorithm functions, leading to allegations that the algorithm is biased toward reinforcing or influencing negative content for America’s population.

Government cybersecurity officials share the growing concern surrounding TikTok, with a focus on national security. Rob Joyce, who leads the U.S. National Security Agency’s cybersecurity division, has likened TikTok to a “Trojan Horse” that carries long-term security concerns years into the future, given the questions surrounding CCP access to and manipulation of data on the 150 million U.S. TikTok users. In 2022, four ByteDance employees were discovered to have accessed U.S. reporters’ data, which has led to Department of Justice investigations into journalist surveillance by ByteDance. Though the employees were later fired, the fact that such surveillance took place has increased the pressure for Congress to act by banning TikTok or forcing its sale in the United States. TikTok’s CEO Shou Zi Chew did recently testify before Congress, but his testimony only further escalated the concern and the feeling in Congress that something ought to be done. On one occasion, Chew replied to a question about whether ByteDance, TikTok’s parent company, had spied on Americans at the behest of the CCP by stating, “I don’t think ‘spying’ is the right way to describe it.” Chew was also unable to definitively commit that TikTok would not sell data, or to say whether any data is currently being sold, responding, “I can get back to you on the details.” Asked whether TikTok would commit to not targeting people under the age of 17, Chew replied, “It’s something we can look into and get back to you.”

Counterarguments include that TikTok is no different from other large social media apps in terms of the “risk” of data being used or sold. While there may be some truth in this argument, it does not differentiate between U.S.-based companies and TikTok. Of particular concern are the Chinese laws that enable the CCP to compel China-based companies, like ByteDance, to share data. And even without the legal authority, given the rising competitive technological atmosphere between China and the U.S., it would not be surprising for the CCP to use coercion against ByteDance to access TikTok user data. This highlights the main difference the comparison argument misses: China is a rival in many respects due to ongoing economic and national security tensions. Thus, unlike Instagram or Facebook, TikTok’s data, whether or not purposefully, carries a higher risk of problematic use simply due to its potential connection to the CCP. China is currently more adversary than ally in terms of national security. These national security concerns have already led to action, with the U.S. banning TikTok on government devices. But while this covers national security to an extent, further calls to ban the app entirely are a mistake for reasons beyond the argument that it collects data just as U.S. social media companies do.

To pursue an outright ban would be ill-advised. Even assuming such a ban would overcome the legal obstacles that doomed the Trump Administration’s attempts, it would set a concerning new precedent for government control over technology and over what information, speech, and means of communication Americans may access in the name of “privacy” as defined and controlled by Congress. Banning TikTok is a drastic response to concerns that, while real, lack sufficient corroboration to warrant such an extreme remedy. Principles matter in times of tension and peace, and the United States ought not mimic China’s policy of banning foreign apps (YouTube, WhatsApp) in order to punish and compete against China. Some have argued that this is precisely what needs to happen: sacrificing the idea of an open internet (as China has) in order to control and punish rival countries and to control what and how a populace communicates. But that is not a strong enough answer to overcome the dangers of broadening the U.S. government’s ability to control what technology its citizens may communicate with. The proposal to ban TikTok, even if never carried out, is a clear reminder of how a hasty legislative response to combat foreign powers can quicken the adoption of the very ideals such legislation was drawn up to defeat. As James Madison penned in a 1798 letter: “Perhaps it is a universal truth that the loss of liberty at home is to be charged to provisions agst. [against] danger real or pretended from abroad.”

Healthcare’s Cybersecurity Problem: Why the Industry Has Fallen Behind on Preparedness

By CHHS Extern Peter Scheffel

The healthcare industry is facing a dismal outlook in terms of cybersecurity in 2023 and beyond. A new report from Proofpoint and Cybersecurity at MIT showed that cybersecurity remains much lower on the priority list of healthcare boards than in other sectors. According to the report, only 61% of healthcare boardrooms discuss cybersecurity at least monthly, and only 64% of healthcare boards reported that they had invested sufficiently in cybersecurity. This is in stark contrast to the 75% of boards in all other sectors that discuss cybersecurity at least monthly, and the 76% across all sectors that are satisfied with their cybersecurity investment. The outlook is equally grim: a smaller percentage of healthcare board participants (77%) expect to see their cybersecurity budgets increase in the next 12 months, compared to 87% of all other study participants. In addition, the healthcare industry has fallen behind in utilizing dark web intelligence, with only 57% of healthcare chief information security officers incorporating it into their strategies. The dark web acts as an exchange for malware, ransomware, and stolen information, among other illegal activities, and collecting information on this activity as part of pre-attack planning can help companies prevent cyber intrusions before they take place.

It is troubling, then, to consistently find such an important industry struggling to counteract the myriad cybersecurity threats it constantly faces. Looking ahead, it is critical that the healthcare industry improve its cybersecurity measures and grow in capability and response.

The first question one must ask in assessing the state of healthcare cybersecurity is why the industry is so susceptible to attack in the first place. The answer is directly connected to healthcare’s importance in everyday life and the valuable information the industry handles. For one, healthcare organizations engage with enormous amounts of personal and private data. For criminals engaged in malicious cyber activity, healthcare organizations also offer higher chances of compliance with, for example, ransomware (a form of malicious software that prevents access to computer files, systems, or networks until a ransom is paid), given the consequences for patients, necessary medication, and the devices on which medical practitioners rely to provide care if an attack continues unobstructed. And as with most criminal enterprises, attackers gravitate toward easy targets. Healthcare appears to be one because healthcare organizations continue to lag behind in funding, planning, and promoting cybersecurity measures.

Over the last few years, several circumstances have exacerbated this problem. The first has been the continuous uptick in ransomware attacks on U.S. hospitals, which have doubled since 2016. This inevitably overwhelms already unprepared organizations, only worsening the present healthcare cybersecurity problem. Second, malicious hacking groups are becoming bolder and more consistent in their attacks on the healthcare sector. Indeed, healthcare was the most targeted sector for cyberattacks as early as 2021, and groups such as KillNet, a pro-Russia hacktivist collective, continue to increase distributed denial-of-service (DDoS) attacks on healthcare organizations (DDoS attacks send too many connection requests to a server, overloading it and slowing down or freezing systems). According to Microsoft’s Azure Network Security, DDoS attacks from KillNet rose from 10-20 daily attacks in November 2022 to 40-60 daily attacks as of February 2023. Targets varied and included hospitals, health insurance companies, pharmaceutical companies, and other general medical health services. Third is COVID-19. The healthcare industry at large has had to operate with the difficulties brought on by the pandemic, which compounded pre-existing pressures and widened opportunities for exploitation. Hospitals and the healthcare industry unsurprisingly experienced higher cybercrime activity during the height of the pandemic, with ransomware and phishing attacks (attempts to trick users into, for example, clicking a bad link that downloads malware) being the dominant methods of infiltration by malicious actors. Even medical equipment within the healthcare industry has been subject to cyberattacks and can act as an easy entry point into the healthcare system.

Several recent responses do highlight a desire for broad improvement in healthcare cybersecurity. For example, the Food and Drug Administration (FDA) released updated guidance on cybersecurity measures for medical devices the week of March 26, 2023. The FDA now recommends that medical device manufacturers submit a plan that identifies and addresses cybersecurity vulnerabilities discovered after a device reaches the market. The FDA is also asking manufacturers to implement measures that provide reasonable assurance that the medical device and its systems are secure from cyber vulnerabilities, and to release patches both for discovered critical vulnerabilities and for routine maintenance. Lastly, the FDA is requesting that newly manufactured medical devices include a software bill of materials (a record of the components used to develop the applicable software and its relationships in the supply chain).

On March 8, 2023, the Department of Health and Human Services also released a cybersecurity framework implementation guide to help bolster cybersecurity efforts within the healthcare sector. In summary, the framework aims to implement the 2018 National Institute of Standards and Technology (NIST) Framework for Improving Critical Infrastructure Cybersecurity by filling gaps in identified risk practices, providing risk management principles and best practices, and promoting application of a comprehensive, industry-specific cyber risk management structure. This includes a seven-step implementation process and five high-level functions.

Lastly, in March of 2023, President Biden signed a provision into law as part of government funding legislation. This provision, based on legislation drafted by United States Senators Gary Peters (D-MI) and Rob Portman (R-OH), will require critical infrastructure owners and operators to report substantial cyberattacks to the Cybersecurity and Infrastructure Security Agency (CISA) within 72 hours, and to report within 24 hours after a ransomware payment is made. Under the provision, CISA is also required to create a program capable of warning relevant organizations of ransomware-exploitable vulnerabilities, and CISA is given authority to institute a joint ransomware task force to coordinate these efforts with industry. Organizations that fail to report are subject to subpoena and may be referred to the Department of Justice if the subpoena is ignored. This legislation may promote better communication and awareness of vulnerabilities in the healthcare sector and serves as a starting point for increased coordination.
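
As a rough illustration of the two reporting windows described above, the sketch below computes notification deadlines from incident timestamps. The function and field names are hypothetical, and the statute’s actual triggering conditions and covered entities are more nuanced than this.

    # Hypothetical sketch of the reporting windows described above:
    # substantial cyberattacks reported within 72 hours, ransomware
    # payments within 24 hours. Names are illustrative assumptions.
    from datetime import datetime, timedelta

    def report_deadlines(incident_at: datetime, paid_at: datetime | None = None) -> dict:
        """Return the latest times a covered entity could notify CISA."""
        deadlines = {"incident_report": incident_at + timedelta(hours=72)}
        if paid_at is not None:
            deadlines["ransom_payment_report"] = paid_at + timedelta(hours=24)
        return deadlines

    d = report_deadlines(datetime(2023, 3, 1, 9, 0), paid_at=datetime(2023, 3, 2, 18, 30))
    print(d["incident_report"])        # 2023-03-04 09:00:00
    print(d["ransom_payment_report"])  # 2023-03-03 18:30:00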

Despite potential improvements from government policy, the most recognizable change will have to begin inside the healthcare sector and its organizational boards. Core issues continue to hinder cybersecurity progress in healthcare, including misplaced confidence within the sector. Unfortunately, only 50% of healthcare boards believe their organization is at risk of a substantial cyberattack in the next 12 months, and only 43% believe their organizations are unprepared to deal with a targeted attack (compared with 65% and 47%, respectively, for all other sectors). Other ongoing issues include healthcare board directors’ lack of cybersecurity expertise and poor communication with the organization’s CISO.

Improvement must be pursued with greater urgency because lack of preparedness affects patient wellbeing. Of the critical infrastructure sectors, healthcare arguably has the closest connection to life-and-death outcomes in the near term after a cyberattack. Testing and procedure delays affect people’s lives. The healthcare sector therefore has a continuing responsibility to improve its cybersecurity. Knowledge and planning serve as the first step toward industry-wide improvement, if the industry is to ‘wake up’ and engage in stronger cybersecurity before the consequences escalate further.

Drowning Out a Call to Action: Harmonizing Cybersecurity Regulations in the Wake of the National Cybersecurity Strategy

By CHHS Extern Kimberly Gainey

The oft-discussed and much anticipated National Cybersecurity Strategy addresses cybersecurity regulatory harmonization with two paragraphs entitled “Harmonize and Streamline New and Existing Regulation.” The Strategy instructs regulators, when feasible, “to harmonize not only regulations and rules, but also assessments and audits of regulated entities.” However, it diverges from a Presidential advisory committee recommendation by designating the Office of the National Cyber Director (ONCD), coordinating with the Office of Management and Budget, to lead these efforts, rather than a newly created office within the Cybersecurity and Infrastructure Security Agency (CISA). It is perhaps unsurprising that ONCD received the nod, given its involvement in drafting the National Cybersecurity Strategy and the role of the National Cyber Director “as a principal advisor” to the President on cybersecurity policy and strategy.

However, one wonders about the wisdom of this decision on the heels of the retirement of National Cyber Director Chris Inglis last month, which left big shoes to fill for his former deputy, now Acting National Cyber Director Kemba Eneas Walden. Serving as the first National Cyber Director, Inglis brought decades of federal government experience from a variety of positions at the National Security Agency and the Department of Defense. Praising him as a “tremendous leader” and “the best person for the job,” CISA Chief of Staff Kiersten Todt remarked on Inglis’ establishment of ONCD and his unifying influence: in “a short period of time, he established an office and a reputation and this ability to unify in many ways, this interagency process and he has been such a tremendous partner to CISA.”

In contrast to the Strategy’s two paragraphs, another option is presented in a 27-page report produced after years of study by the committee charged with providing “the best possible industry advice” to the President to assure “the availability and reliability of telecommunications services,” along with other national security and emergency preparedness challenges; that report issued several recommendations to ensure internet resilience. The clear theme, per Politico, is “to coax, cajole and needle agencies toward consistent, or ‘harmonized,’ cybersecurity regulations.” The President’s National Security Telecommunications Advisory Committee (NSTAC) released the draft report in anticipation of a meeting where it approved the Strategy for Increasing Trust in the Information and Communications Technology (ICT) and Services Ecosystem, voting to send it to the President for consideration, per the Washington Post. The report represents the culmination of a multi-phase study on “Enhancing Internet Resilience in 2021 and Beyond.” After a series of significant cybersecurity incidents, the White House tasked the NSTAC with three crucial cybersecurity topics “foundational” to national security and emergency preparedness. Prior phases developed recommendations on those three topics: 1) Software Assurance in the Information and Communications Technology and Services Supply Chain (November 2021); 2) Zero Trust and Trusted Identity Management (February 2022); and 3) Information Technology and Operational Technology Convergence (August 2022). Building from those earlier recommendations, the draft report contains a veritable treasure trove of information. For those unable to delve into the 27-page report and appendices, here are the main nuggets.

Attracting immediate attention, several recommendations encourage harmonizing cybersecurity regulations and requirements. First up is a recommendation that CISA establish an Office of Cybersecurity Regulatory Harmonization (OCRH) to “institutionalize and expand upon existing harmonization efforts” of existing government forums, which lack “the required combination of mission, expertise, and resources that can address the scale of the challenge.” To elucidate the need for the OCRH, the report highlights two government entities attempting to address this issue, at least in part. The first is the intergovernmental Cyber Incident Reporting Council, established to “coordinate, deconflict, and harmonize federal incident reporting requirements, including those issued through regulations”; note that its mission is limited to incident reporting. The second is the federal interagency Cybersecurity Forum for Independent and Executive Branch Regulators, critiqued for lacking the dedicated staffing necessary to develop expertise across sectors, as most participating officials juggle Forum participation on top of various responsibilities at their home agencies. This may explain why the Forum has held only one meeting since it was relaunched in February 2022 under Federal Communications Commission (FCC) leadership, after several years of inactivity.

Resolving this dearth of dedicated staff and resources by forming and funding the OCRH “would create an institutionalized source of in-depth cybersecurity regulatory expertise across sectors that does not currently exist within the federal government.” Other OCRH responsibilities would include creating resources that regulators can use to develop cybersecurity requirements that leverage consensus standards where possible, and providing regulators with technical assistance during rulemaking. These responsibilities respond to “[a] recurring challenge that . . . even though most regulations cite consensus standards as the basis for their requirements, variations in implementations across regulators often result in divergent requirements. Developing regulatory resources that provide common language that could be used across sectors could address the challenge.” OCRH’s first task would involve coordinating with the National Institute of Standards and Technology “to publish a public report that catalogs existing cybersecurity requirements across sectors, analyzes how they align or diverge from consensus standards down to the control level, and identifies opportunities to drive harmonization.”

Unlike the National Cybersecurity Strategy, the NSTAC report explains why CISA was selected to house the OCRH: “primarily because the responsibilities are well aligned with CISA’s role as National Coordinator for critical infrastructure security and resilience, which includes ensuring a unified approach to cyber risk management . . . .” The OCRH also aligns with CISA’s preference to remain non-regulatory, as it “would act only in an advisory capacity in support of other federal government regulators.” CISA Director Jen Easterly describes regulation as “one tool” for federal officials, “not a panacea,” and she has previously disavowed it in favor of partnerships: “I am certainly not a proponent of regulation, because we’re a voluntary agency.”

Two other recommendations in the NSTAC report involve regulatory harmonization, specifically creating policies and processes to encourage 1) harmonization of regulations and 2) harmonization of federal government cybersecurity requirements and development of consensus standards, listing suggested Presidential actions for federal agencies. The NSTAC report’s remaining recommendations are to advance the adoption of Post-Quantum Cryptography and to further work done in earlier phases by creating and improving transparent procurement language that encourages vendor security best practices, enhancing CISA’s Continuous Diagnostics and Mitigation Program, and maximizing automation and reuse of evidence in federal compliance with the Federal Information Security Management Act.

Response to the NSTAC report, before it was drowned out by the deluge of coverage around the National Cybersecurity Strategy, was positive. United States House of Representatives Committee on Homeland Security Chairman Mark Green welcomed the emphasis on regulatory harmonization, citing “duplicative and burdensome regulatory obligations, most of which stem from the White House push for cross-sector mandates.” Chairman Green expressed enthusiasm for “pursuing strong oversight over this Administration’s scattershot cybersecurity regulations this Congress and . . . working with CISA to ensure the red tape doesn’t strangle industry,” referring to “Cyber Incident Reporting for Critical Infrastructure Act rulemaking.” However, American Banker reported that while many share the goal of regulatory harmonization, many barriers may hinder its achievement, including the lack of authority to harmonize a variety of state-level requirements. Further, expanding CISA’s role to include advising on regulations may change political dynamics, negatively affecting the support CISA currently receives in conjunction with its “primarily operational role in improving cybersecurity across all levels of government and providing resources that private enterprises can use to improve their own cybersecurity.”

Pursuant to the National Cybersecurity Strategy, the Acting National Cyber Director will likely encounter similar difficulties and faces the daunting prospect of leading regulatory harmonization on a national and international scale. The Strategy calls for the pursuit, when necessary, of “cross-border regulatory harmonization to prevent cybersecurity requirements from impeding digital trade flows.” Despite consensus around the value of regulatory harmonization, the path toward realization remains murky.

Norfolk Southern Train Derailment: Where Inaction in Crisis Created Further Crisis

By CHHS Extern Rebecca Wells

On February 3, 2023, thirty-eight cars of a Norfolk Southern train derailed in East Palestine, Ohio, at least ten of which contained hazardous and combustible liquids, creating concern over the potential for health and environmental crises. These chemicals included the colorless and flammable butyl acrylate and vinyl chloride, which are typically used in the industrial production of polymers.

The derailment caused a multi-day fire in the area. Residents within a 2-mile zone were required to leave under a mandatory evacuation order due to concern over the toxicity and flammability of the cargo. An estimated 3,500 fish have been killed by the chemical release. Cleanup crews are continuing to excavate a “grossly contaminated” 1,000-foot area around the train tracks. Visible plumes of contaminants floated down waterways feeding the Ohio River, which courses through or borders Illinois, Indiana, Kentucky, Ohio, Pennsylvania, and West Virginia and supplies drinking water to over 5 million Americans. At the time of publishing, those contaminants appear to have been contained and have not polluted the Ohio River itself.

The evacuation order was lifted on Wednesday, February 8, and since then residents have reported burning sensations in their eyes, animals falling ill, and a strong odor lingering in town.

Immediately following the derailment, U.S. Senator Sherrod Brown (D-OH) sent letters to the Ohio state government and federal government asking for an emergency declaration. Emergency declarations allow governments to act faster and provide more funding than they would otherwise be able to do. At the federal level, the president may make an emergency declaration under the Stafford Act, which provides two routes for receiving federal support: (1) emergency declarations and (2) major disaster declarations. Emergency declarations trigger aid that protects property, public health, and safety. The objective of these funds is to lessen or avert the threat of an incident becoming a catastrophic event; because of this, emergency declarations can be made before the event in question occurs. In contrast, major disaster declarations are issued after catastrophes occur and give broader authority for federal agencies to provide supplemental assistance to help communities recover from the event. Requests must be submitted by Governors, and the decision to approve a request rests solely with the President.

Despite requests for emergency declarations and swift action, the residents of East Palestine have been met with inaction and mixed messages. On the public health and environmental front, while Ohio Health Director Bruce Vanderhoff urges residents of East Palestine to drink bottled water, messaging from other state and federal agencies has been inconsistent. On February 14, the Environmental Protection Agency (EPA) released a statement finding that “air monitoring has not detected any levels of health concern in the community that are attributed to the train derailment.” However, just a few days prior, the EPA had sent a letter outlining Norfolk Southern’s potential liability. In that letter, the agency found that more chemicals had been dumped in the river than initial evaluations detected, including vinyl chloride, ethylene glycol monobutyl ether, ethylhexyl acrylate, isobutylene, and butyl acrylate. Notably, the EPA has set the Maximum Contaminant Level Goal (MCLG) for vinyl chloride at zero, meaning the only level at which there is no known or expected health risk is none at all.

Messaging on the transportation front has been lacking as well. After initially being largely absent from the conversation, Transportation Secretary Pete Buttigieg has since reflected that he could have “spoken sooner” about the derailment and its devastating impacts on human and environmental health.

On February 21, the EPA ordered Norfolk Southern to clean up the toxic spill, warning of fines and potential liability. On Thursday, February 23, the National Transportation Safety Board (NTSB) released its preliminary report, twenty days after the derailment. NTSB Chair Jennifer Homendy called the derailment “100% preventable,” citing a failure to detect an overheating car as a cause of the derailment, without assigning full liability to Norfolk Southern. The NTSB’s report marks a transition from identifying the crisis to recovering from it.

In addition to being preventable, the derailment was, unfortunately, foreseeable. While the federal government has urged Norfolk Southern to act and change its behavior, the government itself played a role in creating this disaster. During the Trump administration, several guidelines for railways were relaxed, including inspection and brake requirements. The most recent investigations into the derailment indicate that a wheel bearing overheated undetected by sensors and broke after the crew engaged the brakes. In the months preceding the derailment, rail workers across the country went on strike, demanding safer working conditions. President Biden signed a bill in December 2022 making strikes by rail workers illegal.

Several lessons can be learned from this tragedy. Good emergency planning requires that the needs and perspectives of all impacted persons be considered, not just those of administrators.

The derailment in East Palestine also demonstrates how difficult disaster recovery can be when appropriate preventative measures are not taken. Since the East Palestine derailment, a second Norfolk Southern train has derailed in Ohio, and a third derailed in Alabama just hours before the C.E.O. of Norfolk Southern testified before Congress regarding his company’s liability and safety protocols. A failure to act proactively has created a scenario in which such derailments are to be expected until Congress acts to strengthen safety requirements for railroad and railway companies.