We just finished our second week of the semester and I’m already exhausted, partly because I just submitted to the University of Tennessee Journal of Business Law the first draft of a 123-page law review article, with over 600 footnotes, on a future-proof framework for AI regulation. I should have stuck with my original topic of legal ethics and AI.

But alas, who knew so much would happen in 2023? I certainly didn’t, even though I spent the entire year speaking on AI to lawyers, businesspeople, and government officials. So I decided to change my topic in late November as it became clearer that the EU would finally take action on the EU AI Act and that the Brussels effect would likely take hold, requiring other governments and all the big players in the tech space to take notice and sharpen their own agendas.

But I’m one of the lucky ones because although I’m not a techie, I’m a former chief privacy officer, and I spend a lot of time thinking about things like data protection and cybersecurity, especially as they relate to AI. And I recently assumed the role of GC of an AI startup.

As I reflect on the current contentious world environment, I cannot help but note the impact that electronic communication has on maintaining quality personal and professional relationships.  Although it sometimes may seem that business law professors are less impacted by domestic and global events, our work’s engagement with broader economic, social, and political issues and our individual intersectionalities can keep us in the throes of it all.  As someone who cares deeply about (and believes in the power of) human relationships and interpersonal communication (leading me to co-design and co-teach a small-group communication course for our leadership curriculum), I offer some food for thought here.

We all enjoy free speech.  And I respect that right deeply.  I bear a tattoo (an open “speech bubble” on my right scapula) as a symbol of that belief.

I also believe in the careful, considerate exercise of that important right.  I have written a bit about this before, in another blog space, arguing for well-considered communication.  My conclusion in that post?

Just because a person can say something in the exercise of their rights to free speech does not mean that the person should say something.

I’m a law professor, the general counsel of a medtech company, a podcaster, and a consultant who designs and delivers courses on a variety of topics. I think about and use generative AI daily, and it’s really helped boost my productivity. Apparently, I’m unusual among lawyers. According to a Wolters Kluwer Future Ready Lawyer report that surveyed 700 legal professionals in the US and EU, only 15% of lawyers are using generative AI right now, but 73% expect to use it next year. Of those surveyed, 43% see it as an opportunity, 25% see it as a threat, and 26% see it as both.

If you’re planning to be part of the 73% and you practice in the US, here are some ethical implications, with citations to selected ABA Model Rules. A few weeks ago, I posted here about business implications that you and your clients should consider.

  • How can you stay up-to-date with the latest advancements in AI technology and best practices, ensuring that you continue to adapt and evolve as a legal professional in an increasingly technology-driven world? Rule 1.1 (Competence)
  • How can AI tools be used effectively and ethically to enhance your practice, whether in legal research,

Last week I had the pleasure of joining my fellow bloggers at the UT Connecting the Threads Conference on the legal issues related to generative AI (GAI) that lawyers need to understand for their clients and their own law practice. Here are some of the questions I posed to the audience and some recommendations for clients. I’ll write about ethical issues for lawyers in a separate post. In the meantime, if you’re using OpenAI’s tools or any other GAI, I strongly recommend that you read the terms of use. You may be surprised by certain clauses, including the indemnification provisions.

I started by asking the audience members to consider which legal areas are most affected by GAI. Although there are many, I’ll focus on data privacy and employment law in this post.

Data Privacy and Cybersecurity

Are the AI tools and technologies you use compliant with relevant data protection and privacy regulations, such as GDPR and CCPA? Are they leaving you open to a cyberattack?

This topic also came up today at a conference at NCCU when I served as a panelist on cybersecurity preparedness for lawyers.

Why is this important?

ChatGPT was banned in Italy for a time.

I am excited to highlight the recent posting by Matteo Gatti of his draft paper entitled Corporate Governing: Promises and Risks of Corporations as Socio-Economic Reformers.  I got a preview of this work at the National Business Law Scholars Conference back in June.  The title of the paper is both descriptive and clever, as the abstract below reveals.

Corporations are involved in public affairs: racial equity, women’s rights, LGBTQIA rights, climate efforts are just a few examples of an increasingly long list of areas in which corporations are active and vocal. One phenomenon is well-known: corporations promote, contrast, or finetune governmental initiatives through political messaging. In addition, corporations perform quasi-governmental functions when the actual government cannot (because of its dysfunction) or does not want to (because of its political credo) perform such functions. Economists, legal scholars, and policymakers are split as to whether corporations should take this role.

This Paper contributes to the literature in several ways. First, it maps various areas of reform by corporations in the socio-economic sphere. Then, it provides legal and policy frameworks for corporate governing by analyzing the underlying conducts under our current laws and by evaluating its multifaceted normative merits: Is there a

Greetings from SEALS, where I’ve just left a packed room of law professors grappling with some thorny issues related to ChatGPT4, Claude 2, Copilot, and other forms of generative AI. I don’t have answers to the questions below and some are well above my pay grade, but I am taking them into account as I prepare to teach courses in transactional skills; compliance, corporate governance, and sustainability; and ethics and technology this Fall.

In no particular order, here are some of the questions/points raised during the three-hour session. I’ll have more thoughts on using AI in the classroom in a future post.

  1. AI detectors that schools rely on have high false positives for nonnative speakers and neurodivergent students and they are easy to evade. How can you reliably ensure that students aren’t using AI tools such as ChatGPT if you’ve prohibited it?
  2. If we allow the use of AI in classrooms, how do we change how we assess students?
  3. If our goal is to teach the mastery of legal skills, what are the legal skills we should teach related to the use of AI? How will our students learn critical thinking skills if they can

Depending on who you talk to, you get some pretty extreme perspectives on generative AI. In a former life, I used to have oversight of the lobbying and PAC money for a multinational company. As we all know, companies never ask to be regulated. So when an industry begs for regulation, you know something is up. 

Two weeks ago, I presented the keynote speech to the alumni of AESE, Portugal’s oldest business school, on the topic of my research on business, human rights, and technology with a special focus on AI. If you’re attending Connecting the Threads in October, you’ll hear some of what I discussed.

I may have overprepared, but given the C-Suite audience, that’s better than the alternative. For me that meant spending almost 100 hours reading books, articles, and white papers and watching videos by data scientists, lawyers, ethicists, government officials, CEOs, and software engineers.

Because I wanted the audience to really think about their role in our future, I spent quite a bit of time on the doom and gloom scenarios, which the Portuguese press highlighted. I cited the talk by the creators of the Social Dilemma, who warned about the dangers of social

Corporate governance has become a bit of an alphabet soup over the years: CSR,* DEIB,** and ESG*** (among other initialisms) are all part of the current practical lexicon for those of us working with businesses.  As a day celebrating the emancipation of the last enslaved Black Americans, Juneteenth connects with so many of those acronyms in one way or another.  Businesses have been noticing.

For example, Hassina Obaidy’s June 13, 2023 article, Juneteenth in the Workplace: Why your company should celebrate, posted on the website of workplace training and compliance provider Emtrain, offers one perspective on Juneteenth and CSR.

Black Lives Matter has taught both individuals and companies what allyship can really look like. We’ve also learned that the passing of time is not enough to make real change. Companies need to support employees that come from demographics that have historically been marginalized through company policies, workplace culture, and corporate social responsibility (CSR). Giving employees a day off to celebrate Juneteenth and engage with their communities in a productive way is one step leaders can take to move the needle on CSR.

Similarly, in an article entitled Seven thoughtful ideas for observing Juneteenth in the workplace, Christina Bibby at

I’m excited to announce this new position. It’s particularly timely as just this morning, I had breakfast with venture capitalists, founders, and others in the tech ecosystem nurtured and propelled by the founders of Emerge Americas. This is a great time to be in Miami. Here are the details.

The University of Miami School of Law seeks to appoint an Inaugural Law & Technology Resident Fellow.  

This will be an exciting opportunity as the Fellow will join a vibrant community of scholars and practitioners working at the intersection of law and technology. Miami-Dade County and the surrounding Tech Hub is enjoying a dramatic expansion in technology-related startups and finance.  MiamiLaw has an established J.D. degree concentration in Business of Innovation, Law, and Technology (BILT). Faculty have set up numerous technology-related programs including Law Without Walls (LWOW) and the We Robot conference.

MiamiLaw currently offers courses in: AI and Robot Law; Blockchain Technology and Business Strategies; Digital Asset and Blockchain Regulation; Digital Transformation Services: Business & Legal Considerations; Dispute Resolution; Technology and The Digital Economy; E-Sports; Electronic Discovery; Genomic Medicine, Ethics and the Law; Intellectual Property in Digital Media; Introduction to Programming For Lawyers; NFTs: Legal and Business Considerations

A few months ago, I asked whether people in the tech industry were the most powerful people in the world. This is part II of that post.

I posed that question after speaking at a tech conference in Lisbon sponsored by Microsoft. They asked me to touch on business and human rights, and I presented the day after the company announced a ten-billion-dollar investment in OpenAI, the creator of ChatGPT. Back then, we were amazed at what ChatGPT 3.5 could do. Members of the audience were excited and terrified, and these were tech people.

And that was before the explosion of ChatGPT4. 

I’ve since made similar presentations about AI, surveillance, and social media companies to law students, engineering students, and businesspeople. In the last few weeks, over 10,000 people, including Elon Musk, have called for a 6-month pause on the training of advanced AI systems. If you don’t trust Musk’s judgment (or that of the other scientists and futurists), trust the “Godfather of AI,” who recently quit Google so he could speak out on the dangers, even though Google has put out its own whitepaper on AI development. Watch the 60 Minutes interview with the CEO of