If you happen to be in Miami or think it’s worth it to fly there next week, this is for you. I’ll be moderating the panel on regulatory considerations for promoters and influencers and we have student teams competing from all over the country. 

February 29 – March 1
University of Miami

Content is king. We live in the golden age where content creators, artists, and influencers wield power and can shift culture. Brands want to collaborate. Creators need to be sophisticated, understand deal points and protect their brand and intellectual property. Miami Law will be the first law school in the country to pull together law students with leading lawyers, influencers, artists, creatives and trendsetters for a negotiation competition and conference.  

Negotiation Competition – Thursday, February 29 

Where

Shalala Student Center, 1330 Miller Drive, Coral Gables, FL 33146

Who Should Participate

This competition is ideal for law and business students. The teams are finalized already.

What to Expect

Participants will have the chance to represent influencers, brands, artists, fashion companies, and other creators in the first-ever Counseling Creators: Influencers, Artists and Trendsetters Negotiation Competition.

  • Register a team of law students (can include business school students)
    1. Team of

We just finished our second week of the semester and I’m already exhausted, partly because I just submitted the first draft of a law review article that’s 123 pages with over 600 footnotes on a future-proof framework for AI regulation to the University of Tennessee Journal of Business Law. I should have stuck with my original topic of legal ethics and AI.

But alas, who knew so much would happen in 2023? I certainly didn’t even though I spent the entire year speaking on AI to lawyers, businesspeople, and government officials. So, I decided to change my topic in late November as it became clearer that the EU would finally take action on the EU AI Act and that the Brussels effect would likely take hold requiring other governments and all the big players in the tech space to take notice and sharpen their own agendas.

But I’m one of the lucky ones because although I’m not a techie, I’m a former chief privacy officer, and spend a lot of time thinking about things like data protection and cybersecurity, especially as it relates to AI. And I recently assumed the role of GC of an AI startup. So

I’m a law professor, the general counsel of a medtech company, a podcaster, and I design and deliver courses on a variety of topics as a consultant. I think about and use generative AI daily, and it’s really helped boost my productivity. Apparently, I’m unusual among lawyers. According to a Wolters Kluwer Future Ready Lawyer report that surveyed 700 legal professionals in the US and EU, only 15% of lawyers are using generative AI right now, but 73% expect to use it next year. Of those surveyed, 43% see it as an opportunity, 25% see it as a threat, and 26% see it as both.

If you’re planning to be part of the 73% and you practice in the US, here are some ethical implications with citations to select model rules. A few weeks ago, I posted here about business implications that you and your clients should consider.

  • How can you stay up-to-date with the latest advancements in AI technology and best practices, ensuring that you continue to adapt and evolve as a legal professional in an increasingly technology-driven world? Rule 1.1 (Competence)
  • How can AI tools be used effectively and ethically to enhance your practice, whether in legal research,

Over the summer, friend-of-the-BLPB Bernie Sharfman posted a draft paper to SSRN that was the subject of a short colloquy between us.  The paper, The Ascertainable Standards that Define the Boundaries of the SEC’s Rulemaking Authority, asserts, among other things, that materiality is one of three “ascertainable policy standards that Congress has placed in the Acts to guide the SEC’s rulemaking discretion.”  The reasoning? 

  • “[T]here are multiple references to materiality in the Acts.”
  • The SEC’s 1972 annual report avers that “[a] basic purpose of the Federal securities laws is to provide disclosure of material financial and other information on companies seeking to raise capital through the public offering of their securities, as well as companies whose securities are already publicly held.”
  • “As observed by Professor Ruth Jebe, it is fair to say that materiality ‘constitutes the primary framing mechanism for financial reporting.'”

Bernie acknowledges that “there is no explicit statutory language in the Acts that forbids the SEC from promulgating rules requiring non-material disclosures.”  I might add that nothing in either the Securities Act of 1933, as amended (“1933 Act”), or the Securities Exchange Act of 1934, as amended (“1934 Act”), explicitly limits the SEC’s rulemaking

Last week I had the pleasure of joining my fellow bloggers at the UT Connecting the Threads Conference on the legal issues related to generative AI (GAI) that lawyers need to understand for their clients and their own law practice. Here are some of the questions I posed to the audience and some recommendations for clients. I’ll write about ethical issues for lawyers in a separate post. In the meantime, if you’re using OpenAI or any other GAI, I strongly recommend that you read the terms of use. You may be surprised by certain clauses, including the indemnification provisions. 

I started by asking the audience members to consider which legal areas are most affected by GAI. Although there are many, I’ll focus on data privacy and employment law in this post.

Data Privacy and Cybersecurity

Are the AI tools and technologies you use compliant with relevant data protection and privacy regulations, such as GDPR and CCPA? Are they leaving you open to a cyberattack?

This topic also came up today at a conference at NCCU when I served as a panelist on cybersecurity preparedness for lawyers.

Why is this important?

ChatGPT was banned in Italy for a time.

Depending on who you talk to, you get some pretty extreme perspectives on generative AI. In a former life, I used to have oversight of the lobbying and PAC money for a multinational company. As we all know, companies never ask to be regulated. So when an industry begs for regulation, you know something is up. 

Two weeks ago, I presented the keynote speech to the alumni of AESE, Portugal’s oldest business school, on the topic of my research on business, human rights, and technology with a special focus on AI. If you’re attending Connecting the Threads in October, you’ll hear some of what I discussed.

I may have overprepared, but given the C-Suite audience, that’s better than the alternative. For me that meant spending almost 100 hours reading books, articles, and white papers, and watching videos by data scientists, lawyers, ethicists, government officials, CEOs, and software engineers.

Because I wanted the audience to really think about their role in our future, I spent quite a bit of time on the doom and gloom scenarios, which the Portuguese press highlighted. I cited the talk by the creators of The Social Dilemma, who warned about the dangers of social

The history of the present King of Great Britain is a history of repeated injuries and usurpations, all having in direct object the establishment of an absolute Tyranny over these States. To prove this, let Facts be submitted to a candid world.

 . . .

He has combined with others to subject us to a jurisdiction foreign to our constitution, and unacknowledged by our laws; giving his Assent to their Acts of pretended Legislation:

 . . . 

For cutting off our Trade with all parts of the world:

For imposing Taxes on us without our Consent:

 . . .

We, therefore, the Representatives of the United States of America, in General Congress, Assembled, appealing to the Supreme Judge of the world for the rectitude of our intentions, do, in the Name, and by Authority of the good people of these Colonies, solemnly publish and declare, That these United Colonies are, and of Right ought to be Free and Independent States; that they are Absolved from all Allegiance to the British Crown, and that all political connection between them and the State of Great Britain, is and ought to be totally dissolved; and that as Free and Independent States, they have

The University of Tennessee College of Law’s business law journal, Transactions: The Tennessee Journal of Business Law, recently published my essay, “The Fiduciary-ness of Business Associations.” You can find the essay here. This essay (or parts of it, anyway) has been rattling around in my brain for a bit. It is nice on a project like this to be able to get the words out on a page and release all that tension building up inside as you fashion your approach.

The abstract for the essay is included below. 

This essay offers a window and perspective on recent fiduciary-related legislative developments in business entity law and identifies and reflects in limited part on related professional responsibility questions impacting lawyers advising business entities and their equity owners. In addition—and perhaps more pointedly—the essay offers commentary on legal change and the legislative process for state law business associations amendments in and outside the realm of fiduciary duties. To accomplish these purposes, the essay first provides a short description of the position of fiduciary duties in U.S. statutory business entity law and offers a brief account of 21st century business entity legislation that weakens the historically central role of fiduciary duties in unincorporated

A few months ago, I asked whether people in the tech industry were the most powerful people in the world. This is part II of that post.

I posed that question after speaking at a tech conference in Lisbon sponsored by Microsoft. They asked me to touch on business and human rights, and I presented the day after the company announced a ten-billion-dollar investment in OpenAI, the creator of ChatGPT. Back then, we were amazed at what ChatGPT 3.5 could do. Members of the audience were excited and terrified, and these were tech people.

And that was before the explosion of ChatGPT4. 

I’ve since made a similar presentation about AI, surveillance, and social media companies to law students, engineering students, and businesspeople. In the last few weeks, over 10,000 people, including Elon Musk, have called for a six-month pause in training AI systems. If you don’t trust Musk’s judgment (and that of the other scientists and futurists), trust the “Godfather of AI,” who recently quit Google so he could speak out on the dangers, even though Google has put out its own whitepaper on AI development. Watch the 60 Minutes interview with the CEO of

As much as I love being a professor, it can be hard. I’m not talking about the grading, keeping the attention of the TikTok generation, or helping students with the rising mental health challenges.

I mean that it’s hard to know what to say in a classroom. On the one hand, you want to make sure that students learn and understand the importance of critical thinking and disagreeing without being disagreeable.

On the other hand, you worry about whether a factual statement taken out of context or your interpretation of an issue could land you in the cross hairs of cancel culture without the benefit of any debate or discussion.

I’m not an obvious candidate for this kind of worry. Although I learned from some of the original proponents of critical race theory in law school, that’s not my area of expertise. I teach about ESG, corporate law, and compliance issues.

But I think about this dilemma when I talk about corporate responsibility and corporate speech on hot button issues. I especially think about it when I teach business and human rights, where there are topics that may be too controversial to teach because some issues are too close