Sometimes, the scholarly enterprise offers one the opportunity to learn deeply while sharing embedded knowledge.  I never thought that my 2022 Southeastern Association of Law Schools discussion group on Elon Musk and the Law would turn into such a rich learning experience.  But it did.  

In organizing the group, I knew folks would focus on all things Twitter (especially as the year proceeded).  But because of the kind offer of the Stetson Law Review to host a symposium featuring the work of the group and publish the proceedings, I was able to dig a bit deeper into my own work, which focused on envisioning what it would be like to represent Elon Musk.  The resulting article, "Representing Elon Musk," can be found here.  The SSRN abstract follows.

What would it be like to represent Elon Musk on business law matters or work with him in representing a business he manages or controls? This article approaches that issue as a function of professional responsibility and practice norms applied in the context of publicly available information about Elon Musk and his business-related escapades. Specifically, the article provides a sketch of Elon Musk and considers that depiction through a professional conduct lens.

We just finished the second week of the semester, and I'm already exhausted, partly because I just submitted to the University of Tennessee Journal of Business Law the first draft of a law review article (123 pages, with over 600 footnotes) on a future-proof framework for AI regulation. I should have stuck with my original topic of legal ethics and AI.

But alas, who knew so much would happen in 2023? I certainly didn't, even though I spent the entire year speaking on AI to lawyers, businesspeople, and government officials. So I decided to change my topic in late November, as it became clearer that the EU would finally take action on the EU AI Act and that the Brussels effect would likely take hold, requiring other governments and all the big players in the tech space to take notice and sharpen their own agendas.

But I’m one of the lucky ones: although I’m not a techie, I’m a former chief privacy officer, and I spend a lot of time thinking about things like data protection and cybersecurity, especially as they relate to AI. And I recently assumed the role of GC of an AI startup. So

I’m a law professor, the general counsel of a medtech company, a podcaster, and a consultant who designs and delivers courses on a variety of topics. I think about and use generative AI daily, and it has really boosted my productivity. Apparently, I’m unusual among lawyers. According to a Wolters Kluwer Future Ready Lawyer report that surveyed 700 legal professionals in the US and EU, only 15% of lawyers are using generative AI right now, but 73% expect to use it next year. Of those surveyed, 43% see it as an opportunity, 25% see it as a threat, and 26% see it as both.

If you’re planning to be part of the 73% and you practice in the US, here are some ethical implications, with citations to select Model Rules. A few weeks ago, I posted here about business implications that you and your clients should consider.

  • How can you stay up-to-date with the latest advancements in AI technology and best practices, ensuring that you continue to adapt and evolve as a legal professional in an increasingly technology-driven world? Rule 1.1 (Competence)
  • How can AI tools be used effectively and ethically to enhance your practice, whether in legal research,

Last week I had the pleasure of joining my fellow bloggers at the UT Connecting the Threads Conference on the legal issues related to generative AI (GAI) that lawyers need to understand for their clients and their own law practice. Here are some of the questions I posed to the audience and some recommendations for clients. I'll write about ethical issues for lawyers in a separate post. In the meantime, if you're using OpenAI or any other GAI, I strongly recommend that you read the terms of use. You may be surprised by certain clauses, including the indemnification provisions. 

I started by asking the audience members to consider which legal areas are most affected by GAI. Although there are many, I'll focus on data privacy and employment law in this post.

Data Privacy and Cybersecurity

Are the AI tools and technologies you use compliant with relevant data protection and privacy regulations, such as GDPR and CCPA? Are they leaving you open to a cyberattack?

This topic also came up today at a conference at NCCU when I served as a panelist on cybersecurity preparedness for lawyers.

Why is this important?

ChatGPT was banned in Italy for a time after the Italian data protection authority raised concerns about its compliance with the GDPR.

Greetings from SEALS, where I've just left a packed room of law professors grappling with some thorny issues related to ChatGPT-4, Claude 2, Copilot, and other forms of generative AI. I don't have answers to the questions below, and some are well above my pay grade, but I am taking them into account as I prepare to teach courses in transactional skills; compliance, corporate governance, and sustainability; and ethics and technology this fall.

In no particular order, here are some of the questions/points raised during the three-hour session. I'll have more thoughts on using AI in the classroom in a future post.

  1. AI detectors that schools rely on have high false-positive rates for nonnative speakers and neurodivergent students, and they are easy to evade. How can you reliably ensure that students aren't using AI tools such as ChatGPT if you've prohibited them?
  2. If we allow the use of AI in classrooms, how do we change how we assess students?
  3. If our goal is to teach the mastery of legal skills, what are the legal skills we should teach related to the use of AI? How will our students learn critical thinking skills if they can

Depending on who you talk to, you get some pretty extreme perspectives on generative AI. In a former life, I used to have oversight of the lobbying and PAC money for a multinational company. As we all know, companies never ask to be regulated. So when an industry begs for regulation, you know something is up. 

Two weeks ago, I presented the keynote speech to the alumni of AESE, Portugal’s oldest business school, on the topic of my research on business, human rights, and technology with a special focus on AI. If you're attending Connecting the Threads in October, you'll hear some of what I discussed.

I may have overprepared, but given the C-suite audience, that’s better than the alternative. For me, that meant spending almost 100 hours reading books, articles, and white papers, and watching videos by data scientists, lawyers, ethicists, government officials, CEOs, and software engineers. 

Because I wanted the audience to really think about their role in our future, I spent quite a bit of time on the doom-and-gloom scenarios, which the Portuguese press highlighted. I cited the talk by the creators of The Social Dilemma, who warned about the dangers of social

The University of Tennessee College of Law's business law journal, Transactions: The Tennessee Journal of Business Law, recently published my essay, "The Fiduciary-ness of Business Associations."  You can find the essay here.  This essay (or parts of it, anyway) has been rattling around in my brain for a bit.  It is nice on a project like this to be able to get the words out on a page and release all that tension building up inside as you fashion your approach.

The abstract for the essay is included below. 

This essay offers a window and perspective on recent fiduciary-related legislative developments in business entity law and identifies and reflects in limited part on related professional responsibility questions impacting lawyers advising business entities and their equity owners. In addition—and perhaps more pointedly—the essay offers commentary on legal change and the legislative process for state law business associations amendments in and outside the realm of fiduciary duties. To accomplish these purposes, the essay first provides a short description of the position of fiduciary duties in U.S. statutory business entity law and offers a brief account of 21st century business entity legislation that weakens the historically central role of fiduciary duties in unincorporated

A few months ago, I asked whether people in the tech industry were the most powerful people in the world. This is part II of that post.

I posed that question after speaking at a tech conference in Lisbon sponsored by Microsoft. They asked me to touch on business and human rights, and I presented the day after the company announced a ten-billion-dollar investment in OpenAI, the creator of ChatGPT. Back then, we were amazed at what ChatGPT-3.5 could do. Members of the audience were excited and terrified, and these were tech people. 

And that was before the explosion of ChatGPT-4. 

I've since made a similar presentation about AI, surveillance, and social media companies to law students, engineering students, and businesspeople. In the last few weeks, over 10,000 people, including Elon Musk, have called for a six-month pause in training AI systems. If you don't trust Musk's judgment (or that of the other scientists and futurists who signed), trust the "Godfather of AI," who recently quit Google so he could speak out about the dangers, even though Google has put out its own white paper on AI development. Watch the 60 Minutes interview with the CEO of

Last Friday, I had the privilege of speaking, with other colleagues, at the 2023 Stetson Law Review Symposium on "Elon Musk and the Law."  (See the flyer on the program, below.)  This symposium grew out of a discussion group I organized at the 2022 Southeastern Association of Law Schools Conference.  I posted about it here back in May of last year.

I could not have been happier with the way the symposium worked out.  The Stetson Law students, faculty, and administration were well organized, kind, and fun: a total pleasure to work with.  And I got excellent questions and feedback on my early draft paper, Representing Elon Musk, which focuses attention on the lawyer-client relationship under the American Bar Association's Model Rules of Professional Conduct.  I look forward to seeing the final published proceedings in two forthcoming books of the Stetson Law Review.

*               *               *

[Stetson 2023 symposium flyer]

My mind is still reeling from my trip to Lisbon last week to keynote at the Building The Future tech conference sponsored by Microsoft.

My premise was that those in the tech industry are arguably the most powerful people in the world, and that with great power comes great responsibility, including a duty to protect human rights (which is not the current state of the law globally).

I challenged the audience to consider the financial price of implementing human rights by design and the societal cost of doing business as usual.

In 20 minutes, I covered AI bias and new EU regulations; the benefits and dangers of ChatGPT; the surveillance economy; the UNGPs and the UN Global Compact; a new suit by Seattle’s school board against social media companies alleging harmful mental-health impacts on students; potential corporate complicity with rogue governments; and the upcoming Supreme Court case on Section 230 and content-moderator responsibility for “radicalizing” users. I also made recommendations for the government, business, civil society, and consumer members of the audience.

Thank goodness I talk quickly.

Here are some non-substantive observations and lessons. In a future post, I'll go in more depth about my substantive remarks. 

1. Your network