Tech Brief: Text-Based Scams & AI
November 18, 2025
The purpose of this tech brief is to provide a clear, factual synthesis of a timely tech-related issue by combining technical understanding with publicly reported information. By distilling complex developments into accessible, evidence-based insights, this tech brief aims to help policymakers, researchers, enforcers, and the public get up to speed on emerging tech risks and issue areas that may require further scrutiny or oversight. This brief covers the following:
- Section 1: What are text-based scams? Who is responsible?
- Section 2: What are the known harms and risks of text-based scams?
- Section 3: What is the role of “AI” in accelerating and expanding text-based scams?
- Section 4: Related policy initiatives
- Section 5: Open questions for AI companies, SMS marketing platforms, and third-party lead generators
- Section 6: Sources for case data and investigations
Section 1: What are text-based scams? Who is responsible?
“DMV Final Notice: Enforcement Begins Soon. Our records indicate that as of today, you still have an outstanding traffic ticket. If you fail to pay, we will take action against you. Pay now at this link.”
“Your USPS package arrived at the warehouse but could not be delivered due to incomplete address information. Please confirm your address in the link below.”
“Bank Fraud Alert: Your attention is needed immediately regarding a new transaction. Please sign in here to review.”
These quotes are drawn from just a few examples of real text message scams that deceptively lure individuals into handing over their sensitive information to scammers–most often for financial exploitation. These text-based scams–sometimes called “smishing” scams–are on the rise and continually evolving.
Scammers (who are often foreign actors) orchestrate text scams using a number of documented methods and infrastructure.
Where do scammers find data to target victims? To help initiate contact with potential victims, scammers can retrieve individuals’ personal information on the dark web from previous data breaches, on social media, or even from data brokers. Additionally, scammers can exploit mobile network operators by using numbers on their networks or by intercepting traffic through malicious clone sites to help contact individuals.
How does a text-based scam work in practice?
- Entice: Once scammers have the data and infrastructure needed to facilitate their schemes, they establish a way to bait a victim via text under the guise of a trusted entity, like a bank, a government entity, a delivery company, or a large retailer. This can be done using familiar logos, links, or formats to establish credibility and convince a victim to take action. Scammers may “spoof”–or deceptively imitate–the authentic phone number, email address, or website URL of a trusted entity in their messages.
- Establish urgency or fear: Next, a scam text message instigates urgency or fear in the recipient by highlighting, for instance, an unpaid bill that will go to collections, or a problem with an incoming package delivery. Messages can include links that, when clicked, open a fake web page or email message, or dial a number. Alternatively, messages can include text that asks the recipient to actively visit a specific URL, or contact a specific email address or phone number. The web pages or further communications that result can ask the recipient to provide sensitive information, like credit card information, or personal information like date of birth, physical address, and email address. (A minimal, illustrative sketch of how such links can be checked against the impersonated brand appears after this list.)
- Exploit: After a victim inputs their sensitive personal or financial information into a malicious web page or communicates it via email or phone to scammers, this data can be later used to exploit them in some way–for example, via identity theft or money transfers.
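To make the link-based lure above more concrete, the following is a minimal, hypothetical sketch (in Python) of the kind of check a spam filter or consumer protection tool might run: does a link in a message actually point to a domain belonging to the brand the message claims to come from? The brand list, domains, and function name are illustrative assumptions, not any carrier’s or vendor’s actual implementation.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official domains for a few commonly impersonated brands.
# A real filter would rely on a maintained registry, not this illustrative stub.
OFFICIAL_DOMAINS = {
    "usps": {"usps.com"},
    "irs": {"irs.gov"},
    "dmv": {"dmv.ca.gov"},
}

def link_matches_claimed_sender(message_url: str, claimed_brand: str) -> bool:
    """Return True only if the link's host is an official domain (or subdomain) for the claimed brand."""
    host = urlparse(message_url).hostname or ""
    official = OFFICIAL_DOMAINS.get(claimed_brand.lower(), set())
    return any(host == d or host.endswith("." + d) for d in official)

# A smishing link that imitates USPS but points elsewhere fails the check:
print(link_matches_claimed_sender("https://usps-delivery-confirm.top/track", "USPS"))       # False
# A link on the real USPS domain passes:
print(link_matches_claimed_sender("https://tools.usps.com/go/TrackConfirmAction", "USPS"))  # True
```

In practice, spoofed links often use look-alike domains (as in the first example), which is why a simple “does the host belong to the claimed sender” check catches many of the lures described above.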
According to the Federal Trade Commission (FTC), the most common text-based scams in 2024 involved scammers impersonating:
- The U.S. Postal Service or other shipping entities, warning that a recipient’s package was delayed or could not be delivered;
- A prospective employer or talent recruiter notifying a recipient that they would like to connect about an open position–often a repetitive “task” job opportunity;
- A financial services entity, warning that a recipient’s account indicates fraudulent activity or has otherwise been compromised;
- A state transportation agency, warning a recipient that they have outstanding toll or ticket fees;
- A “wrong number” text sender, who often introduces a friendly or romantic overtone to an ensuing text conversation, and then later invites the recipient to an investment opportunity (these are often called “pig butchering” scams).
Who is responsible for text-based scams?
Text-based scams involve multiple layers of actors. Scammers can use various digital and online services, as well as messaging hardware, to support their schemes. While this is not an exhaustive list, here are some of the actors that can enable these scams:
- Scammers themselves – These are the operational individuals or entities who are directly executing the scam campaign.
- For example: They may directly design, launch, and manage targets and scam campaigns using a number of tools. They may craft the messages, collect and aggregate the phone numbers, send the scam links, and collect data or money.
- Infrastructure providers – These are the entities that supply or offer tools, platforms, or other infrastructure that can be exploited by scammers. Examples include:
- Data brokers can harvest, aggregate, and sell large volumes of personal and behavioral data from diverse sources such as breached databases, scraped websites, marketing APIs, and mobile applications. This information allows scammers to identify and target potential victims with precision, enriching lead lists with phone numbers, location data, and demographic profiles.
- Social media companies, like traditional data brokers, collect large volumes of personal information that scammers exploit to identify and target potential victims.
- Hosting providers deliver the infrastructure used to deploy scam web pages and phishing portals. These sites are where victims may be tricked into entering personal or sensitive information like demographic details, passwords, or financial details.
- VoIP or SMS texting aggregators are services that help businesses send large numbers of calls or texts through online systems. Scammers can use these tools to send out thousands of fake messages or spoof their phone numbers so their messages look like they are coming from a trusted source.
- DNS providers and domain registrars let people register and manage website domain names. Scammers can use these services to create fake websites that appear legitimate, tricking victims into visiting those pages and sharing sensitive information.
- Message generators are tools that can use AI to automatically create text that sounds convincing. Scammers can misuse these systems to quickly produce large numbers of personalized or realistic messages.
- SMS blasters function like fake cell tower devices that trick nearby phones into connecting with them rather than legitimate networks. These devices allow scammers to send out texts in bulk that can bypass typical network scam filters.
- SIM farms are another form of infrastructure that scammers use; these hardware devices can hold numerous SIM cards at once from different mobile providers, and exploit VoIP to send texts in bulk.
- Other infrastructure entities include but are not limited to: URL shorteners, cloud and web hosting, hacked accounts, and disposable phone numbers. Similarly, scammers can use physical burner phones or SIMs, or prepaid phones, to obtain cheap, temporary phone numbers for use in their schemes.
- Messaging platform operators – These are the operators who manage messaging or social platforms.
- Examples include: WhatsApp, Telegram, SMS carriers, Google Messages, etc.
- Financial facilitators – These are the entities who handle the movement of money from the individual to the scammer.
- Examples include: Payment processors, crypto exchanges, etc.
Section 2: What are the known harms and risks of text-based scams?
Text scams are trending upward and costing Americans hundreds of millions of dollars in losses each year. According to the FTC, Americans lost $470 million in 2024 from scams initiated via text messages, and this number has continued to trend upward every year. In 2025, a majority of U.S. adults report receiving smishing texts at least weekly, and believe that alongside phone call scams, text-based scams are a “major problem.” Moreover, a majority of Americans agree that AI will only make online scams of all kinds more common–reflecting broader concerns about AI’s potential misuse for fraudulent activity. Numerous public and private sector entities are warning consumers about persistent text-based schemes–from federal agencies, state and local governments, and university IT departments, to banks and insurance companies.
Monetary losses from scams of all kinds cost Americans billions annually. According to the FBI’s Internet Crime Complaint Center (IC3), Americans lost over $16 billion from internet crime in 2024. According to the FTC, Americans lost more than $12.5 billion to fraud in 2024. The FTC has also reported that financial losses from text-based scams reflect an upward trend year to year, costing Americans $86 million in 2020, $131 million in 2021, $327 million in 2022, $373 million in 2023, and as noted above, $470 million in 2024. Consistent with other financially oriented scams, smishing victims often find fraudulent and unauthorized charges to their accounts, new lines of credit taken out in their name, or that their identities have been stolen.
Text-based scams that result in identity theft, in particular, raise significant privacy and data security risks for victims. Smishing messages can direct victims to malicious web pages from which scammers can harvest their credentials and personal information–ranging from usernames and passwords, credit card and bank account numbers, to full names, addresses, birth dates, and Social Security numbers. Scammers can then use this information themselves, sell it for financial gain, or use it to conduct further scams. Once scammers have access to victims’ sensitive personal information, victims face significant privacy harms as additional illicit actors gain access to their data, often after it is made available on the dark web. Sophisticated smishing actors, in particular, can systematically amass data on hundreds of millions of Americans over time–leaving these individuals’ data exposed to illicit use over the long term.
Text-based scams can also lead victims to download malicious software, or malware, onto their devices. This malware can include, for example, spyware that surreptitiously collects data from a victim’s device, or ransomware that encrypts a victim’s device data until a ransom is paid to decrypt it. To facilitate this, victims may click a text link that directs them to a malicious web page that can install malware onto their device. Smishing texts may also direct a victim to download a fake, malicious application onto their device that installs malware. Once malware is installed on a victim’s device, it can gather sensitive personal and financial information like a victim’s passwords, contacts, location data, and payment methods. Scammers can also use malware on a victim’s device to further breach a target network in a broader cyberattack, or commit identity theft or other fraudulent activity. Scammers may also simply intend to use an infected device as part of a botnet–a network of compromised devices that supports malicious cyber activities.
More seniors are using mobile devices and the internet, expanding this demographic’s exposure to digital threats like smishing scams. In 2021, 61% of Americans 65 and older owned a smartphone and 75% reported that they used the internet. As more individuals in this age group have adopted the key technologies that scammers exploit, financial losses from scams affecting them have steadily risen. The FTC reported that from 2020 to 2024, the number of reports by older adults claiming losses of $10,000 or more from scams increased more than fourfold (while reported losses of more than $100,000 increased almost sevenfold). Seniors are often especially attractive, high-value targets for smishing scammers due to the assumption that they have greater savings and may be more trusting of communications they receive.
Section 3: What is the role of “AI” in accelerating and expanding text-based scams?
AI-enabled scams, including text-based schemes, can involve a level of hyper-realism, scale, cost effectiveness, and speed of creation that traditional scams could not achieve–amplifying the efficacy of scammers’ efforts throughout the life cycle of schemes. It’s important to note that “AI” is a broad and often loosely used marketing term that covers a wide range of technologies – from simple automated text tools to advanced machine learning systems – each with very different levels of capability and sophistication.
This section outlines how new advances in technology (e.g., AI) can impact each “phase” of a scam, and how “AI” can be used to accelerate and expand text-based scams.
- How scam text messages are written: Easily accessible and affordable AI tools are enabling scammers to craft more convincing messages to send to victims. Text-based generative AI tools, like chatbots, can rapidly create credible, polished content for scammers that lacks the typical spelling and grammar oddities that could expose prior scam messages’ true source. These messages can more effectively mimic the tone or style of any legitimate entity, like an e-commerce shop, or a local transportation authority, for example.
- To craft more believable messages, scammers can also use AI platforms that generate cloned voices, images, or videos–sometimes called “deepfakes.” These audio, image, or video generation tools can be used to impersonate employees at trusted financial and government entities–but also loved ones, coworkers, and public figures–to deceive victims in fraudulent extortion and investment schemes. To more effectively deceive a victim, scammers may also sometimes use a multi-layered phishing scheme that combines, for example, smishing with email or web page phishing, or with vishing. Highlighting the increasing threat of AI-enabled vishing, in particular, the FTC launched a Voice Cloning Challenge in 2024 to foster breakthrough ideas and potential remedies for preventing, monitoring, and evaluating malicious AI voice cloning. Lastly, with the help of AI tools, scammers can also quickly scrape and use public data from social media platforms, data brokers, and data breaches, to help adjust their scams in real-time and make them more personalized to a victim.
- Where scam text messages are sent: Beyond enabling scammers to create more believable schemes, AI tools can also generate content automatically at significant scale and speed for distribution, enabling scammers to reach more individuals in less time and at lower cost than in traditional scams. Scammers can instantaneously create credible messages at scale for scams at little to no expense, and then quickly adapt them for distribution in varying channels beyond text–like email, social media direct messages, and in-app chats.
- Where victims are sent: Once scammers have enticed victims to take action, they can use AI to create tools that further the credibility and progression of their schemes, like realistic fake websites and domains, documents and forms (e.g., invoices or ID cards), file-sharing links, or cloud storage sites that look legitimate.
- Infrastructure that hides the scammer: While running their schemes, scammers use tools that conceal themselves, like disposable phone numbers, virtual private networks (VPNs), and hacked email or business accounts. AI tools can assist scammers with automated account creation for these tools, allowing them to rapidly create profiles for accounts that appear legitimate, while removing the manual effort normally required to perform this action. Likewise, AI tools can allow scammers to effectively manage multiple scam conversations at once, instantly providing credible responses to victims’ incoming messages that are consistent with prior messaging and tailored to keep victims engaged.
- How victims’ money is stolen or transferred: Tools that scammers use to steal or move victims’ money can include gift cards, prepaid debit cards, peer-to-peer payment apps, and cryptocurrency wallets. Similar to AI’s use in other stages of scams, scammers can use AI in this context to help craft convincing payment instructions, fake documents like invoices and receipts, or follow-up collection messages that prompt victims to hand over their money and help obscure where it goes.
AI tools can significantly lower the barrier to entry for aspiring online scammers, and include both publicly available generative AI chatbots, like OpenAI’s ChatGPT, and crime-oriented AI tools available on the dark web. Dark web AI text generators, like FraudGPT and WormGPT, are specifically designed for nefarious activities like generating convincing scam language and fake documents.
By using ChatGPT alone, researchers have been able to rapidly generate scam text copy based on the “best practices” of effective scams, like urgency, false exclusivity, and other psychological tricks. In this particular study, ChatGPT produced the following text message drafts designed, per the prompts, to extract sensitive personal information from victims, including credit card information and Social Security numbers:
- Credit card information requests: “Secure your account! Confirm your card details to prevent unauthorized access. Reply with your card number, expiration date, and CVV for immediate protection.”
- Social Security number requests: “Emergency alert! Verify your identity to avoid legal consequences. Text us your Social Security Number now to resolve this issue promptly.”
- Personal data (address, phone, date of birth) requests: “Exclusive offer! Upgrade your profile by sharing your address, phone number, and birthdate. Act now for personalized deals tailored just for you.”
- Falsified account security measures: “Account breach detected! Strengthen your security by providing your mother’s maiden name and the street you grew up on. Respond now to secure your account.”
- Fake prize claims: “Congratulations! You’ve won a grand prize. To claim, send us your credit card details, full name, and address. Don’t miss out on this once-in-a-lifetime opportunity!”
Overall, scammers continue to have numerous AI tools at their disposal to build better scams, while at the same time, AI tools’ illicit use outpaces burgeoning AI regulation in the U.S. Scammers need only slightly alter a prompt’s language in mass market AI tools to bypass built-in safeguards and create scam messages in bulk, and dark web AI tools continue to be readily available and reliable fraud and scam assistants to accomplish the same goal. AI company executives have themselves also called for industries to rethink security measures that AI tools can increasingly thwart.
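To illustrate why only a slight change in a prompt’s wording can defeat shallow safeguards, here is a deliberately simplified, hypothetical sketch of a keyword-based prompt filter. Real AI providers rely on far more sophisticated, classifier-based moderation; the pattern list and function below are illustrative assumptions only.

```python
import re

# Deliberately naive keyword filter. It is NOT how any real AI provider's safeguards
# work; it only shows why shallow pattern matching is easy to sidestep by rewording.
BLOCKED_PATTERNS = [
    r"\bphishing\b",
    r"\bscam (text|message)\b",
    r"\bsteal (credit card|social security)\b",
]

def naive_safeguard_blocks(prompt: str) -> bool:
    """Return True if the prompt matches one of the blocked patterns."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in BLOCKED_PATTERNS)

# A blunt request is caught...
print(naive_safeguard_blocks("Write a phishing text that steals credit card numbers"))  # True
# ...but a lightly reworded request for essentially the same output is not.
print(naive_safeguard_blocks("Write an urgent delivery notice asking the reader to confirm card details"))  # False
```

The same gap exists, in subtler form, in production-grade safeguards: as noted above, rephrasing a request in benign-sounding terms is often enough to obtain scam-ready text.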
Section 4: Related policy initiatives
In recent years, state and federal entities have spearheaded several policy initiatives and legal efforts to mitigate and combat text-based scams, including those boosted by AI. Some of these efforts are listed here.
State-level initiatives
State attorneys general work together in this area to help protect consumers nationwide. For example, in 2024, a coalition of 26 attorneys general filed a comment letter responding to an FCC notice of inquiry related to the potential impact of emerging AI technology on efforts to protect consumers from illegal robocalls or robotexts. In 2025, a multistate coalition of 51 attorneys general who form the Anti-Robocall Litigation Task Force (which previously filed a comment letter with the FCC to support illegal robotext blocking) launched Operation Robocall Roundup to take legal action against companies responsible for high volumes of fraudulent and illegal robocall traffic routed into the U.S.
California
- Attorney General Bonta Warns Californians of Text-Based Scams Targeting Taxpayers
- Tell Everyone: Attorney General Bonta Warns Consumers of Surge in Text-Based Toll Scam Activity
- Attorney General Bonta Advocates for Consumer Protections Against AI-Generated Robocalls
- Attorney General Bonta Warns Californians: AI-Generated Scams are Widespread and Tricky to Spot
D.C.
New York
- Attorney General James Stops Text Message Scam Targeting Vulnerable New Yorkers Looking for Remote Job Opportunities
- Consumer Alert: Attorney General James Warns New Yorkers of Three-Phase Scam Targeting Seniors
- INVESTOR ALERT: Attorney General James Warns New Yorkers of Investment Scams Using AI-Manipulated Videos
Federal-level initiatives
In 2023, the Federal Communications Commission (FCC) adopted its first rules focused on text-based scams, requiring mobile wireless providers to block certain robotext messages that are highly likely to be illegal. That same year, the FCC further adopted new rules that allow the agency to “red flag” certain numbers, requiring mobile providers to block texts from those numbers. These rules also codified that Do-Not-Call registry protections–which help protect consumers against unsolicited telemarketing phone calls–apply to text messaging, making it clearly illegal for marketing texts to be sent to numbers on the registry. The rules also encouraged mobile providers to make email-to-text messages an opt-in service, which would limit the effectiveness of a major source of unwanted and illegal text messages. In 2024, the FCC proposed its first rules on AI-generated robocalls and robotexts. These proposed rules involve defining AI-generated calls, adopting transparency requirements for AI-generated calls and texts, and supporting technologies that protect consumers from AI robocalls.
In 2023, the FTC and FCC jointly signed a renewed memorandum of understanding (MOU) between public authorities who are members of the international Unsolicited Communications Enforcement Network (UCENet). The MOU aims to promote cross-border collaboration to combat unsolicited communications, including email and text spam, scams, and illegal telemarketing. The revised MOU was also signed by UCENet partners in Canada, Australia, South Korea, New Zealand, and the United Kingdom. In 2024, the FTC also finalized its Impersonation Rule, which gives the agency stronger tools to combat and deter scammers who impersonate government agencies and businesses, enabling the FTC to file federal court cases seeking to return money to injured consumers and obtain civil penalties against rule violators.
Section 5: Open questions for AI companies, SMS marketing platforms, and third-party lead generators
As we outlined in Section 1, under “Who is responsible for text-based scams?”, there are many types of companies whose infrastructure or services could be misused to facilitate large-scale text scams, including generative AI text generators, third-party lead generators, data brokers, SMS marketing platforms, and other messaging services. For the purpose of this section, we will focus specifically on AI platforms used to generate text, images, and video; SMS marketing platforms; and third-party lead generators.
Researchers have demonstrated that not only dark web tools, but also commercially available AI tools, can be used to effectuate scams, in part by generating text, images, and video. Here are some key questions that could be posed to firms like OpenAI (maker of ChatGPT), Anthropic (maker of Claude), or Google (maker of Gemini).
- Provide internal risk assessments, red-teaming results, safety evaluations, or model-behavior tests addressing the potential for the model to generate fraudulent, impersonating, or misleading content. Describe the findings and any recommended mitigations.
- Describe the data sources that enabled the product to generate content resembling scam language, impersonation messages, fraudulent documents, or realistic audio/video deepfakes.
- What safeguards were implemented, whether technical, product-level, or policy, to prevent the generation of fraud-enabling content, and what documented failures or bypasses of those safeguards has the company recorded? Provide dates when safeguards were added, modified, or removed.
- Describe any post-launch monitoring systems, metrics, or dashboards used to detect potential fraudulent misuse (e.g., prompts involving impersonation, bank or IRS communication styles, forged document generation, celebrity deepfakes). Identify the first known internal alert triggered by such monitoring and who received it.
- Provide all complaints received directly or forwarded by government agencies, platforms, or partners alleging that the company’s tools were used to commit fraud, impersonation, extortion, deepfake abuse, or financial scams. Include dates, allegations, and internal responses.
- Were any external developers, API customers, or commercial partners able to fine-tune, configure, or deploy the model in ways that bypassed or weakened fraud protections? Describe all oversight mechanisms and any known failures of that oversight.
- Describe the company’s escalation process for reported or suspected fraud-related misuse, including which teams (Trust & Safety, Legal, Policy, Safety Engineering, Comms, Security) were involved and any executive or board-level briefings produced. Provide minutes or summaries of those briefings.
- Identify any proposed mitigations, product changes, or anti-fraud safeguards that were recommended internally but not implemented. Indicate who recommended them, who rejected or deprioritized them, and the reasons documented at the time.
SMS marketing platforms are often business-to-business–providing services to businesses that want to communicate via text with their many consumers at scale–as opposed to serving end users directly. They combine 1) access to large amounts of consumer data, 2) high-volume text messaging infrastructure, and 3) automation and personalization tools – enabling businesses to efficiently text thousands or millions of consumers at a time. This combination makes these platforms particularly powerful for legitimate marketing campaigns, but also creates opportunities for misuse.
A useful parallel can be drawn from recent scams involving large firms known as third-party lead generators. These firms are external providers that can identify and deliver potential customers to a business at scale by leveraging various marketing techniques. Highlighting an example that shows how this service creates opportunities for misuse, in 2023, the FTC brought a case against Fluent, a publicly traded lead generator. This case was part of a larger federal and state law enforcement sweep that took action against several firms who used scams to distribute or facilitate billions of illegal robocalls and robotexts to consumers, and sell millions of telemarketing leads–or consumer information–to businesses and other marketers. This initiative took action against telemarketers, the companies that hired them, and also lead generators “who deceptively collect and provide consumers’ telephone numbers to robocallers and others, falsely representing that these consumers have consented to receive calls.”
Fluent was charged with operating a massive online lead generation enterprise that deceptively induced tens of millions of consumers to disclose their personal information. As detailed in the FTC’s complaint, the firm aggregated and sold consumers’ information to their clients, who used that information to inundate consumers with telephone, text message, and email solicitations about a multitude of products and services, including pain cream, for-profit education, insurance products, solar energy, extended auto warranties, debt reduction, and medical alert devices. Fluent recruited and paid publishers–or affiliate marketers–to attract consumers to Fluent’s lead generation websites by sending links in misleading texts claiming that recipients won a gift card or other financial rewards. Fluent would additionally entice consumers by having publishers send false job offers in email messages, and by further citing these job offers on Fluent websites. Once on Fluent’s websites, consumers would input their personal information to act on their supposed reward or job offer, and Fluent would deceive them into consenting to receive numerous robocalls and robotexts. Further, Fluent would then sell these consumer leads to other marketers, facilitating additional robocalls and robotexts targeting these consumers.
To better understand SMS marketing platforms’ potential for misuse in large-scale text scams, several open questions remain for companies that provide them:
Knowledge of risks
- How has the company evaluated, and how does it currently evaluate, the risk that its SMS marketing tool(s) could be exploited for scams or fraudulent campaigns?
- What internal discussions, evaluations, or research have taken place about how AI features might enable or accelerate scams and fraud?
User complaints
- Has the company received complaints or feedback from other businesses about text-based scams? If so, how many and what kinds? How does the company track, categorize, and respond to such complaints?
- To what extent does the company take steps to identify high-risk users (e.g., seniors, English as a second language speakers) and protect them from text-based scams that could arise from use of their products?
- What tools or systems are in place to detect suspicious message content or patterns (e.g., impersonation of institutions, deceptive links, high-volume anomalies)? (A minimal illustrative sketch of such heuristics appears after this list.)
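As a concrete illustration of the heuristics referenced in the question above, here is a minimal, hypothetical sketch of rule-based message screening. The keyword lists, flag names, and “high-risk” TLDs are assumptions for illustration; real carrier and platform filters combine many more signals, such as sender reputation, sending-volume anomalies, and URL reputation feeds.

```python
import re

# Hypothetical heuristic flags for screening outbound campaign messages.
IMPERSONATION_TERMS = re.compile(r"\b(usps|irs|dmv|fedex|bank fraud alert)\b", re.I)
URGENCY_TERMS = re.compile(r"\b(final notice|act now|immediately|suspended|within 24 hours)\b", re.I)
SUSPICIOUS_LINK = re.compile(r"https?://\S*\.(top|xyz|icu|cfd)\b", re.I)

def screen_message(text: str) -> list[str]:
    """Return the list of heuristic flags raised by a single message."""
    flags = []
    if IMPERSONATION_TERMS.search(text):
        flags.append("impersonates_known_institution")
    if URGENCY_TERMS.search(text):
        flags.append("urgency_or_threat_language")
    if SUSPICIOUS_LINK.search(text):
        flags.append("link_on_high_risk_tld")
    return flags

message = ("DMV Final Notice: Enforcement Begins Soon. Pay now at "
           "https://dmv-pay-notice.top/ticket to avoid action against you.")
print(screen_message(message))
# ['impersonates_known_institution', 'urgency_or_threat_language', 'link_on_high_risk_tld']
```

Any single flag is weak evidence on its own; a platform would typically weigh several such signals together, and across an entire campaign, before throttling or blocking a sender.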
Accountability
- Who in the company is directly accountable for preventing misuse of AI-enabled SMS generation?
- How does the company’s board or senior leadership receive updates about scam or fraud risks linked to its products?
- What accountability measures exist if company leadership knowingly benefits financially from clients later discovered to be running scams?
Data and targeting
- What types of consumer data does the company’s platform allow clients to use for message targeting, and how does it ensure that data is lawfully and ethically sourced?
- How does the company prevent the use of demographic or behavioral data that could make scam texts more effective against vulnerable populations (e.g., seniors, immigrants, indebted consumers)?
Corrective actions
- What actions does the company take when it identifies a client using its platform for scams or fraud?
- Can the company describe instances in which clients were terminated for misuse?
- How has the company changed its product design, policies, or monitoring systems in response to identified scam risks?
Transparency
- How does the company disclose scam-related misuse to carriers, regulators, or the public?
- What information does the company provide to law enforcement or industry groups when its systems enable scams and/or fraud?
Financial incentives
- How does the company’s revenue model (e.g., charging per message or campaign) create risks that scammers could exploit at scale?
Section 6: Sources for case data and investigations
Many existing resources are available to support efforts to track and combat text-based and AI-related scams.
- Access complaint data
Law enforcement agencies can access various full, free complaint data and advanced analytics resources. Portions of the data are public, including company name, the story of what happened in many complaints, and whether the consumer is a senior, a servicemember, etc. These links relate to AI, specifically:
- AI Incident Database
- AI and Investment Fraud: Investor Alert by the SEC, NASAA, and FINRA
- Request access to complaint data and analytics
To request access to government-only data and tools relating to consumer complaints, including about text-based and AI-related scams, email the intergovernmental affairs office at the Consumer Financial Protection Bureau: iga@cfpb.gov
- Research and case studies
Academic work on scams is posted in places like SSRN and Google Scholar. Also, university law review articles often include detailed case histories and thoughtful resources in footnotes. Researchers and scholars are often happy to speak with enforcers.
- News reporting
Set news alerts for companies, practices, or executives to stay up to date, and monitor key reporting projects for case, theory, evidence, and remedy ideas, including:
Contributors:
Hannah Hartford is a Law Fellow at the Georgetown Institute for Technology Law & Policy
Stephanie T. Nguyen is a Senior Fellow at the Georgetown Institute for Technology Law & Policy, Former Chief Technologist at the Federal Trade Commission
Erie K. Meyer is a Senior Fellow at the Georgetown Institute for Technology Law & Policy, Former CFPB Chief Technologist
Samuel A.A. Levine is a Senior Fellow at UC Berkeley Center for Consumer Law & Economic Justice, Former Bureau of Consumer Protection Director at the Federal Trade Commission