Futures Worth Building, Futures Worth Fighting For
B Cavello · 2025-04-09
https://posts.bcavello.com/futures-worth-building-futures-worth-fighting-for/

B speaks on stage at the Great Hall of the Internet Archive, where light shines through from the stained glass above

On March 15th, 2025, I was honored to be invited to speak at Funding the Commons: Infrastructures of Resilience, a conference hosted at the awe-inspiring Internet Archive in San Francisco, a testament to the essential gift that open libraries are to our world. The event took place amidst a purge of public information and public officials by the Trump Administration. The conference brought together leaders stewarding open technologies and public goods who are working to build infrastructures to support shared flourishing. See all the talks.

I did something a little out of my comfort zone for this event, as I felt the moment called for something different. While I don’t typically write out my remarks before speaking, one added benefit of doing so is that I can share them here! What follows is Futures Worth Building, my remarks at Funding the Commons: Infrastructures of Resilience.


In this room, there are builders. There are climate hackers. There are librarians and archivists. Wisdom keepers and philanthropists. There are institution navigators and extitution bridgers. In this room, there are dreamers.

And in this room, there’s a dream.

A dream for better things. Shared futures worth building. Public commons worth tending. A vision of the way that things can be, should be. An understanding that the future belongs to the many, not just the few.

And all around us, there are people who can feel it. People who know that this isn’t the way things should be. And they know that many of the systems that exist aren’t working (for them). They feel lost. Afraid. Even angry at what they know—in their bones—is not right. And they’re grasping for our common dream.

But they’re being sold a false promise. They’re being told that there is no other way. That it’s too late. That there isn’t enough. That they can’t trust other people. They can’t trust themselves.

The false promise says “Only I can protect you.”

And in following that false promise, people erect walls, burn libraries, and drown in misinformation and rising waters. People shut doors instead of opening them. Deplete resources instead of cultivating them. And in doing so, lay waste to the things that help us thrive.

It doesn’t have to be like this.

My name is B Cavello. I am the director of emerging technologies at the Aspen Institute, a nonprofit based in Washington, DC. I've been privileged to serve in many roles across many sectors, from small nonprofits and garage startups to huge multinationals, even the US Federal Government. I’ve spent most of the last ten years working in AI.

And I am a dreamer, too.

One of my dreams is called public AI.

Some of you fellow dreamers know about public AI. But for those who don’t, here’s a primer: Public AI is not a model or a compute network or an AI system. It is an approach. It is a relationship with AI that emphasizes public access, accountability, and sustainable public goods.

Public AI is not government AI (but it can be enabled by governments OR non-governmental organizations or even for-profit companies)! Public AI is rooted in the belief that AI is too important, too powerful a tool to be wielded by only the few. Public AI is rooted in the belief that the tools to create the future should rest in the hands of the public. It comes from a recognition that societies don’t have to just consume the technologies shaping their lives—they can and should create them.

Public AI is a practice. It’s a way of doing things. As I like to say, inspired by the preamble of the US Constitution, just as we can form a more perfect Union, we can build a more public AI.

What is public AI? Public Access (visible public benefits and universal access to technology for free or at cost), Public Accountability (ultimate public control and continuous accountability aligned with public goals and values), and Sustainable Public Goods (durable, reliable goods and legal frameworks that prevent private capture of public benefits)

This means expanding access. Recognizing that certain capabilities are so important for participation in public life that access to them should be universal. That’s why Public Access is a key value of public AI—providing everyone with direct, affordable access to these tools so that everyone can realize their potential.

“More public” AI means increasing meaningful public participation and governance. While few technologists would actually oppose the idea of advancing the common good, few AI developers proactively seek public guidance to steer the development process or change course when the public disapproves of their decisions.

This is why Public Accountability is a key value of public AI—giving the public the power to truly shape the development of technology, setting priorities for research and product, including the option to not pursue certain capabilities if they would be harmful.

Finally—and I know this is all-too-familiar for the folks in this room (Funding the Commons)—often developers who do want to do good with technology are unable to secure the resources needed to create lasting public value. (How many brilliant social impact tech projects have faded into obscurity after the hackathon ends and their repos gather dust?)

Meanwhile, the terms of private capital and investment too often lock well-intentioned creators into problematic incentive structures that lead to instability and distract from innovating on the BIG challenges that face our world.

To truly realize the opportunities of AI, we need sustainable development models that enable us to chart a different course. That’s why Sustainable Public Goods are a key value of public AI—creating robust and stable foundations that everyone can innovate upon.

In building “more public” AI, we can shift people’s relationship to these technologies. Today, many of the public’s thoughts about AI are dominated by concern, uncertainty, and fear. By centering the public—with access, accountability, and sustainable public goods—we can shift the conversations toward capabilities, priorities, and potential.

I’m proud to be working with a community of dreamers to make this a reality. There are people, in this room, guaranteeing ACCESS to critical information resources for current and future generations. There is a coalition of public computing labs working to innovate on new models for funding and SUSTAINING public goods. In my own work, I’m collaborating with brilliant minds at the Collective Intelligence Project, Simon Fraser University, and Metagov along with my incredible colleagues at the Aspen Institute to develop new mechanisms for public ACCOUNTABILITY. 

Too often, “accountability” is used synonymously with “punishment.” Deterrence is a powerful mechanism, but there is more that we can do than simply reducing harm.

I believe that public accountability is also about being truly responsive to the needs and desires of the public. A “more public” AI ecosystem must not only avoid potential risks, but also clearly articulate priorities, opportunities, and challenges that AI must address in order to be a true public good.

In this room of fellow dreamers, I would like to challenge you: We must more clearly articulate what direction we want to be headed in.

And then: we need to be able to measure it!

Without meaningful measures of progress, AI development may continue to pursue goals that are orthogonal or even antagonistic to the public interest. While there are many claims being made about the potential good AI might enable, we lack answers to basic questions.

Are we delivering on the promises of medical and climate progress we so often hear about? Are the tools being developed today helping us solve the tough problems that we face? Are they enabling increased standards of living? Are they achieving the Sustainable Development Goals? (Like are the AI capabilities being developed today actually going to help with gender equality? With food security? With decent work?)

If we want to realize the dreams of flourishing that these technologies can inspire, then we need to get concrete about prioritizing progress on the things that matter. Public accountability demands that we be transparent about how things are going so we can live up to the aspirations of the public and make good on the potential of these powerful tools.

Now, I want to be clear: public AI, as an approach, doesn’t necessarily stop other people from pursuing AI in harmful ways. We still need to do that, too.

But I sometimes think of public AI playing the role of smuggled outside media in North Korea or the book drops across the Iron Curtain (perhaps some of the work that Mark Graham and the Wayback team are doing): it proves that another way is possible.

The good news is that we have tools at our disposal for reducing bad behavior, like sensible regulations and public decision-making processes. But as has happened countless times throughout history, if those tools prove insufficient, the public will find other ways to shut things down.

But they need to know that there are alternatives. That there are futures worth building, futures worth fighting for.

Today, there are people all around us who are hungry for better futures. People who know that this isn’t the way things should be. And they know that many of the systems that exist are not working (for them).

They feel lost. Afraid. Even angry. And they’re being sold a false promise.

But we know that that false promise won’t last. The cracks are already showing.

We know that those people are builders, are dreamers, are climate hackers, librarians, archivists, wisdom keepers, philanthropists, institution navigators, extitution bridgers—the public is so many things!

We know this because we are the public, too.

And through the cracks in these failing systems, all around us, the light of our shared dream is shining through.

Thank you.

 

How to get into AI policy (part 5)
B Cavello · 2025-01-25
https://posts.bcavello.com/how-to-get-into-ai-policy-part-5/

billboard reads "I'm not interested in competing with anyone. I hope we all make it."

I’m frequently asked for advice on how to break into the industry I’m in or how to achieve a position I’ve held. I’ve been privileged to serve in many interesting and varied roles across sectors, from small nonprofits and garage startups to huge multinationals and even the US Federal Government. While I personally owe a lot of my journey to privilege, luck, and the good graces of others, I know that “be lucky” isn’t particularly useful advice. In the spirit of providing something actionable, I’ve gathered here some reflections on things that I think have served me in my journey. I hope these reflections will be helpful for others on their journeys as well, and I encourage those with experience in this space to share more about their journeys, lessons learned, and advice too!

Note: For the purpose of this post, I’ll be talking about “getting into AI policy” and offering some specific examples, but most of these suggestions should hold regardless of what it is specifically you want to do. For more on entering the tech policy field, see my Emerging Technology Policy Careers profile.

This series comes in five parts:

  1. Mindset
  2. Be perceived
  3. Be the reply guy you want to see in the world
  4. Don’t wait to do the work
  5. Pay it forward (don’t skip this!)

Prefer to read everything as one long piece instead? i gotchu. 💖


Pay it forward (don't skip this!)

If you’ve made it here, congratulations! 🎉 There’s no “right way” to get into AI policy, so I hope that my reflections and suggestions have been helpful to you. Whatever your own journey, if you share your learnings and experiences along the way, you will probably find yourself getting lots of requests from other people who are interested in getting into this space. You are, of course, welcome to share this blog series with them, but I challenge you to do more.

After all, as I said in part 1:

Unlike some other fields or career paths, there isn’t (yet) an agreed upon qualification, like getting a particular certification or degree, that will clarify that you are “ready for AI policy.”

That’s why in part 2, we recognized that:

The world is not a meritocracy, and often “who you know” does make quite a bit of difference, even if people might pretend otherwise.

The good news is—as we discussed in part 3:

Change-making is a team sport, so the success of your colleagues, allies, and inspirations is your success, too.

Because, ultimately (part 4):

Movements are made by people joining each other.

I believe we all have a responsibility to make it as easy as possible to do good in the world. I also believe that AI is a tremendously consequential technology that everyone should play a role in shaping. As such, I think that all of us who are in the AI policy space (whatever that means!) should do what we can to bring others along with us.

My own path into this space is absolutely thanks to others who brought me along with them. I am overwhelmed with gratitude to all the many, many people who helped me make it to where I am today. It’s still honestly a funkydunky feeling to realize that not only do people aspire to follow in my footsteps, but I actually have the power to help them realize their dreams and forge paths of their own. As I said at the start, “I personally owe a lot of my journey to privilege, luck, and the good graces of others,” and while I may not have much power over the first two things, I can definitely pay it forward on the last one.

And you can, too!

You don’t have to wait until the words “AI policy” are in your job title to start paying it forward, either. There are many ways to pay it forward, some big and some small. Wherever you are in your AI policy journey, I’ll bet you have something to offer.

Sometimes paying it forward is life-changing, like being able to offer a job opportunity to someone with nontraditional experience. Sometimes it’s something small, and maybe even mundane, like helping someone with directions in a new place. When it comes to lifting others up in the world of AI policy, you’ll encounter countless opportunities to be an ally and a champion and to advocate for others, but I think it helps to have some practical examples, so I’ve gathered some of my favorites.

This is a beefy one, so I’ve created jumps to each one:

  1. Share your experiences where others can learn
  2. Make time to chat
  3. Nominate people for awards, invitations, and talks
  4. Engage with groups who aren’t “strategic”
  5. Discuss your salary and negotiation info

Share your experiences where others can learn

I know I’ve said this a lot, but it bears repeating! If there’s nothing else you take away from what I have to say, I hope you will consider getting a personal website (even a very basic one) and writing a bit about your journey. Even just having more visibility into the diversity of paths people can take is really powerful for helping bring others along. Personal websites and blogs are also fantastic places to highlight resources you appreciate and channels for directing people toward opportunities they might not otherwise find out about.

Fun fact: in several of my jobs, I’ve been involved in managing parts of my organization's website, including the web analytics. Do you know what’s often the MOST high-traffic part of the site? The bio pages! People are constantly looking each other up online, and by having a personal website you get to shape other people’s first impression of you.

Other fun ideas of things to post on your site or blog about:

  • Your answers to applications: If you had the opportunity to go to school, participate in a fellowship program, or even get a job relevant to AI policy, you can probably just post your answers to application forms you’ve submitted. (Here are two of mine from TechCongress and Assembly!) Of course, make sure you didn’t agree to NOT do that, but honestly in my experience that’s pretty rare.
  • Your advice to others: Take a moment to reflect. What’s something that you’ve done (or not done) that helped you get to where you are? Now share that so that others can stand on your giant shoulders.
  • Works-in-progress and ideas you’re exploring: Not everything you share has to be polished and fully-formed. You can also bring others along by “learning out loud” and sharing ideas and problems you’re still chewing on. Consider sharing a list of questions you still have (which could also be a fun way to meet experts who can help you get the answers)!
  • Talk openly about things you’ve tried that HAVEN’T worked: Similarly, sometimes you try… and you fail. But it’s great that you tried! While we’re often encouraged to share the most unblemished versions of ourselves, being candid about mistakes can also help people recognize that you don’t have to be perfect to be a person making a difference.

Make time to chat

The #1 ask I get from folks interested in getting into AI policy is to schedule a call. It can honestly be pretty overwhelming, and a lot of people in the space simply declare bankruptcy on the whole thing and stop taking intro calls altogether.

Trust me: I get it.

"I... declare... BANKRUPTCY!" shouts Michael in season four of The Office

When you’re a visible person in a desirable field, people often can’t even imagine how many requests you get for a “quick chat” or a “short call” from folks who don’t realize they’re not the only one with that brilliant idea. Realistically, there’s only so much time in a day, week, or year, and you can’t spend ALL of your time on intro calls.

But I don’t think that means you can’t do them at all. Intro calls are a great way to expand your own perspectives, stay grounded in what challenges people entering the field are facing, and, of course, meet new friends and allies. (We’re all on the same team, after all!) The trick is figuring out the cadence and the format that works for you.

Here are some ways I manage things:

  • Offer to answer questions in writing: This puts the ball in their court to be clear about why they’re reaching out to me in the first place and means that I can answer things asynchronously on my own time.
  • Use a scheduling link: I use Calendly to cut down on the overhead of coordinating timing and also to help enforce my boundaries so that I don’t over-commit myself.
  • Redirect people toward publicly available resources: Sometimes, I just don’t have the bandwidth to have a meeting or to craft individualized responses to folks. In that case, it’s handy to have a place to point people to. (see previous section 😜)

Nominate people for awards, invitations, and talks

The last two sections were focused on things you can do that help people you don’t know (yet!), but what about the folks you do?

“Delightfully devilish, Seymour” says principal Skinner to himself in the Simpsons “steamed hams” episode

Something I’ve found supreme delight in is secretly nominating people for awards in this field. As you continue your work in AI policy, you’ll no doubt encounter many brilliant people working hard to make the world a better place. Often this work goes unnoticed if you’re not paying attention, but you can change that.

Awards may be big national recognition opportunities or something within the context of your neighborhood or workplace. Often, there are public submission forms to nominate people. Although they sometimes take a little work, it’s really satisfying to see the nominations get announced and really, really satisfying when someone you nominated actually wins!

In addition to awards, another way you can shine a light on others’ greatness is by suggesting their names as potential invitees to exclusive events. Whether you can go yourself or (especially) if not, this has the double benefit of helping the event organizers by putting good folks on their radar and helping the people you recommend by calling attention to their great work. You might even consider nominating someone to be a speaker or panelist. (Do check in with the person first, though, as I’ve learned the hard way that not everyone likes to be asked to do public speaking! 😅)

Engage with groups who aren’t “strategic”

While you should definitely nominate others for prestigious awards and suggest them for shiny events, I encourage you to also consider participating in events and activities that aren’t necessarily noteworthy. It’s fantastic to be asked to keynote an important conference, but it can also be life-changing to be a guest speaker for your high school or a local community group.

Tom from the show "Parks & Recreation" holds up a shoe with a red insole, saying "Oh, what's this in my shoe? Red carpet insole. Everywhere I go, I'm walking on red carpet."

Similar to making time to chat, above, this can quickly become overwhelming if you’re not careful, but that doesn’t mean it’s not worth doing. I recommend trying for diversity with the types of people and groups you meet with. You’ll learn so much from these conversations, and they’ll give you an opportunity to connect with people on a different level than the high-profile, high-polish engagements. Figuring out a cadence (once per quarter? per year?) that works for you is important for managing your commitments. Bonus upside: speaking with groups of people like this can be one way to bundle a bunch of intro chats into one conversation.

Discuss your salary and negotiation info

Finally, one last way you can help bring others along is to break the taboo around discussing salary and pay.

As noted in part 1 on Mindset, working professionally in AI policy requires selling your labor. When you’re negotiating how you should be compensated, it helps to have information about what others in the space are being offered and paid. Unfortunately, not everyone has personal connections to give them the inside scoop on this information, especially if they’re just starting out in a new field. This can have cumulative effects over the course of people’s careers and tends to disproportionately disadvantage people who already face other barriers at work.

Shrek flings open outhouse door to the lyrics "someBODY once told me"
You're an all-star (get paid!)

One way you can counteract this is to bear the slight discomfort of a mildly awkward conversation and offer to share your info with others. It helps to lead with vulnerability and make it clear that people don’t need to reciprocate.

Here’s a template from my world, with blanks for you to fill in with your own numbers:

I believe salary transparency helps workers build power, so here’s some info on my end (no pressure to share anything on yours): I was offered $140k per year, I asked for closer to $185k and signed with $155k. I am currently making about $175k.

It may not make sense for you to share your info publicly (although hopefully your organization publishes salary ranges on job descriptions so it’s not a total mystery!), but it definitely can be really powerful to share in 1:1 conversations.

Huge S/O to Hannah Williams and the team at Salary Transparent Street for their leadership in the public conversation about compensation and pay equity. They’re doing essential work!

You made it!

When I started writing this series, I honestly thought everything I had to say would fit in a single post, but here we are. 😅 Thanks for coming on this wild ride. I continue to be grateful to those who have come before me, who take the time to speak my name in rooms of power, to celebrate my work, and give me guidance. Here’s to more learning in the years to come!

Car salesman meme: slaps roof of car "this world can fit so much kindness in it"
How to get into AI policy (part 4)
B Cavello · 2024-01-29
https://posts.bcavello.com/how-to-get-into-ai-policy-part-4/

B stands in front of a wall of colorful post-its clustered into groups labeled things like "data provenance" and "public policy"

I’m frequently asked for advice on how to break into the industry I’m in or how to achieve a position I’ve held. I’ve been privileged to serve in many interesting and varied roles across sectors, from small nonprofits and garage startups to huge multinationals and even the US Federal Government. While I personally owe a lot of my journey to privilege, luck, and the good graces of others, I know that “be lucky” isn’t particularly useful advice. In the spirit of providing something actionable, I’ve gathered here some reflections on things that I think have served me in my journey. I hope these reflections will be helpful for others on their journeys as well, and I encourage those with experience in this space to share more about their journeys, lessons learned, and advice too!

Note: For the purpose of this post, I’ll be talking about “getting into AI policy” and offering some specific examples, but most of these suggestions should hold regardless of what it is specifically you want to do. For more on entering the tech policy field, see my Emerging Technology Policy Careers profile.

This series comes in five parts:

  1. Mindset
  2. Be perceived
  3. Be the reply guy you want to see in the world
  4. Don’t wait to do the work
  5. Pay it forward (don’t skip this!)

Prefer to read everything as one long piece instead? i gotchu. 💖


Don't wait to do the work

“But B!” you might be saying. “You haven’t really said anything yet about what we need to know about AI or policy, in particular!” And you’re right. Most of what I’ve shared so far is kinda generic advice that could be applied to any number of fields or career paths. I share it because most of the time, when people reach out to me about how to get a certain job or follow a certain path, they haven’t done those things that I see as essential foundations, and that’s a great place to start.

But okay: you’ve figured out your mindset, you’ve been perceived, you are being the reply guy you want to see in the world, and you’re still wondering how to get into AI policy. What can you do?

In my opinion, the best thing you can do is… just get into AI policy! 😜

You don’t need permission. Where you can, begin engaging in the work of AI policy directly. Start close to home with something familiar. Attend a city council meeting on police use of facial recognition, go to a school board meeting about generative AI in schools, or start a conversation with your colleagues about cybersecurity and AI at work. You can also start sharing your perspectives on a blog or in op-eds or join an advocacy group to lobby policy decision-makers. In the US, the Federal Register often posts requests for information (“RFIs”) and opportunities for public comment. Share your actions with others and encourage them to participate and make their experiences and opinions heard, too.

AI policy decisions are happening every day in organizations big and small, in and outside of government. There are many organizations already at work on issues you care about that could use the extra help, even if you don’t have deep expertise in AI or policymaking. Your perspective as a person impacted by these issues is valuable, too! Jumping right into AI policy doesn’t mean pretending like you know stuff that you don’t or acting like a jerk. If you can manage to show up consistently on an issue; be humble, open-minded and kind; and take good notes, you might be surprised how far that can take you.

I want to say this with extra emphasis:
don’t be afraid to join someone else’s effort.

There’s a tendency in American society to heroize “founders” and “visionaries” and people who start things from scratch. Starting stuff is cool and all, but I actually believe this (over)emphasis on new things results in a bunch of wasted effort. Starting stuff doesn’t just take hard work, it takes infrastructure. Things like branding, documentation, and, often, a lot of legal paperwork. These are critically important things that take time and energy—time and energy that could instead be spent, for example, working on AI policy!

Movements are made by people joining each other. Joining an existing organization, group, community, or initiative often builds greater power and advances goals more effectively than starting off on your own. Even if you are a founder (which, according to my LinkedIn analytics, a major demographic of you are 😜), if you want to convince people to join you, a great way to do that is to show up for them. You’ll learn a ton. Plus joining a team lets you share in the work and the wins. And that’s just fun!

But maybe you don’t feel ready to jump in, just yet. You want to get a little more familiar with the space first. That’s cool, too. It’s great to build a foundation of learning, and there’s a wealth of material to learn from. There are tons of amazing opportunities to read up on these issues and many videos and podcasts available for free online. Try checking out the public lectures or events of a couple different universities (compare how Harvard’s Berkman Klein Center approaches these topics to Stanford’s Institute for Human-Centered AI) or subscribe to one of the many excellent newsletters on tech and policy. (Some highlights: Platform Economy Insights, Tech Policy Press, or even my employer Aspen Digital!)

B's AI video recommendations

For machine learning and AI essentials, I put together a YouTube playlist of different talks and explainers (and I really can’t recommend enough 3blue1brown’s series on the topic). I really do think all of this stuff is best learned in community, though, with other people you can ask questions of and who can challenge your understanding and assumptions. (Remember: AI policy is a team sport!) See if you can recruit a couple of friends or colleagues to join you in your learning journey or join them on theirs, and then share your learnings with the world as you go.

There are tons of great resources for learning about AI and policy. I can’t possibly name them all here, so I’ll be a little shameless and plug my team’s work on these Emerging Tech Primers to get you started. I also strongly recommend checking out the Emerging Tech Policy Careers page on AI. For more current events-type reading, check out Sarah Shirazyan’s Trust | Safety | Law AI reading list or Casey Fiesler’s remarkably prompt AI Ethics News.

There are also some fantastic fellowship programs dedicated to helping people like you learn and transition their skills. These programs can teach you to make policy impact and to foster greater collaboration between the diversity of AI policy stakeholders. I’ve gotten to experience a few of these, myself, and they were truly life-changing!

Fellowship programs to check out:

  • Aspen Tech Policy Hub teaches a new generation of policy entrepreneurs about the policy process through fellowship and executive education programs and encourages them to develop outside-the-box solutions to society’s problems. 
  • Horizon Institute for Public Service places fellows at host organizations to help tackle policy challenges related to artificial intelligence, biotechnology, and other emerging technologies.
  • Mozilla provides fellows opportunities to develop new thinking on how to address emerging threats and challenges facing a healthy internet.
  • TechCongress gives talented technologists the opportunity to gain first-hand experience in federal policymaking and shape the future of tech policy through fellowships with Members of Congress and Congressional Committees.

And when you’re ready, the All Tech is Human and 80,000 Hours job boards have tons of opportunities for you to apply to, keeping in mind that many jobs are never posted so you’ll want to be on people’s radars.

But I really can’t recommend enough just getting started where you are. Start small. Start with what you know. Look for allies, and don’t underestimate the power of just showing up and taking notes.

And once you get there…

Next up: Pay it forward (don’t skip this!)

How to get into AI policy (part 3)
B Cavello · 2024-01-21
https://posts.bcavello.com/how-to-get-into-ai-policy-part-3/

Tweet from @visakanv reads "Noob-minus: this is the worst thing I have ever read   Noob-plus: this is the best thing I have ever read  Friends: you’re on to something here! I really liked the part where you talked about X, it made me think about Y. I was kinda hoping for more about Z, but that’s just my POV"
@visakanv has many threads on the topic of upping your reply game

I’m frequently asked for advice on how to break into the industry I’m in or how to achieve a position I’ve held. I’ve been privileged to serve in many interesting and varied roles across sectors, from small nonprofits and garage startups to huge multinationals and even the US Federal Government. While I personally owe a lot of my journey to privilege, luck, and the good graces of others, I know that “be lucky” isn’t particularly useful advice. In the spirit of providing something actionable, I’ve gathered here some reflections on things that I think have served me in my journey. I hope these reflections will be helpful for others on their journeys as well, and I encourage those with experience in this space to share more about their journeys, lessons learned, and advice too!

Note: For the purpose of this post, I’ll be talking about “getting into AI policy” and offering some specific examples, but most of these suggestions should hold regardless of what it is specifically you want to do. For more on entering the tech policy field, see my Emerging Technology Policy Careers profile.

This series comes in five parts:

  1. Mindset
  2. Be perceived
  3. Be the reply guy you want to see in the world
  4. Don’t wait to do the work
  5. Pay it forward (don’t skip this!)

Prefer to read everything as one long piece instead? i gotchu. 💖


Be the reply guy you want to see in the world

Even though the AI policy job market is competitive, the work of AI policy is collaborative. It requires significant coordination between lots of different people and organizations to do things like run a campaign, change industry practices, or pass a law. That means you need to find a bunch of Actual Human Beings who you can team up with to get things done. In short, you need to build a network.

A lot of people I talk to have super icky feelings associated with the word “networking.” It conjures up scenes of hucksters pushing their business cards on people like skeevy used car salesmen. To many, “networking” is merely a euphemism for self-promotion, instead of what it should actually be about: building networks.

As someone whose literal job is, at least in part, building networks, I can tell you: self-promotion ain’t all that.

This isn’t to say you shouldn’t share accomplishments you’re proud of. That’s great! But it’s even better if, when you share them, you genuinely (and better yet: specifically) acknowledge the other people who helped make those accomplishments possible. AI policy work is people work. Arguably, ALL work is! To accomplish the sorts of goals that AI policy demands, we need to find allies, build partnerships, and generally help each other out. Acknowledging the people who have helped you to achieve your goals demonstrates a key skill essential to AI policy.

When you’re new to a domain, however, you may not have a lot of wins to celebrate yet. And that’s great, too! Because there’s a whole universe of people who care about the same things that you care about, and you can celebrate them. (Spoiler alert: even when you do have a massive list of accomplishments of your own, this still holds!) Change-making is a team sport, so the success of your colleagues, allies, and inspirations is your success, too. Even if you don’t have a lot of experience or accomplishments of your own, amplifying others helps to advance your shared goals.

Remember: We exist in an attention economy, and what we choose to give attention thrives. Give attention to the things you want to see more of and be careful about what you might be amplifying if you choose to dunk.

There are some obvious ways to amplify other people whose work you admire. Re-sharing someone else’s post is one easy way to give extra attention to something. Even better if you add some reflections (what stood out to you?) or additional context (can you build on the original post with more info?). But I think people tend to overlook another way to amplify someone else’s awesomeness: reply in a complimentary or constructive way. 

This could mean asking thoughtful questions about the content of someone’s post or finding ways to create connections to other relevant people and communities. If you have a friend who has expressed interest in the topic of a post, for example, try tagging them in the replies with a brief description of what you think they’ll appreciate about it. Network-building often means identifying people in your life who would benefit from particular knowledge or opportunities and helping them to get what they need.

Think through positive experiences you’ve had with interacting with other people online and think of ways you can give more folks those types of experiences. Provide answers to questions people ask, when you have them. Pay it forward by sharing the answers you receive to your own questions in places where others can learn from them (a personal blog can be a great place!), and thank the people who helped you to find what you were looking for. I’ve found it can even be nice to send a personal message to someone who posted something that resonated with me just to tell them that it did. Social media can sometimes feel like calling out into a void, and that’s incredibly lonely. Letting people know that you appreciated and learned from what they shared can go a long way.

Of course, all the great things you might want to elevate may not be posted on social media. Other ways you can amplify people are to cite their work, acknowledge their ideas and actions in your own writing/videos/etc., and nominate them for awards or formal recognition. (Seriously, you might change someone’s whole career with a well-placed nomination!)

Not only does this sort of behavior help you to be perceived, it will help you to cultivate a community (a network!) of people who support and celebrate one another. Participating in these sorts of constructive, supportive conversations is a great way to demonstrate your skill at network-building: drawing relevant connections between people and ideas and helping to facilitate collaboration between different stakeholders. These are essential skills for AI policy.

Next up: Don’t wait to do the work

Buff guys reply meme, but they're explaining the meme and detailing why seeing traditionally macho, muscular men replying in a kind and supportive way is humorous in a wholesome way.
How to get into AI policy (part 2)
B Cavello · 2024-01-07
https://posts.bcavello.com/how-to-get-into-ai-policy-part-2/

Jerry Seinfeld doing standup with the words "what's the deal with being perceived"

I’m frequently asked for advice on how to break into the industry I’m in or how to achieve a position I’ve held. I’ve been privileged to serve in many interesting and varied roles across sectors, from small nonprofits and garage startups to huge multinationals and even the US Federal Government. While I personally owe a lot of my journey to privilege, luck, and the good graces of others, I know that “be lucky” isn’t particularly useful advice. In the spirit of providing something actionable, I’ve gathered here some reflections on things that I think have served me in my journey. I hope these reflections will be helpful for others on their journeys as well, and I encourage those with experience in this space to share more about their journeys, lessons learned, and advice too!

Note: For the purpose of this post, I’ll be talking about “getting into AI policy” and offering some specific examples, but most of these suggestions should hold regardless of what it is specifically you want to do. For more on entering the tech policy field, see my Emerging Technology Policy Careers profile.

This series comes in five parts:

  1. Mindset
  2. Be perceived
  3. Be the reply guy you want to see in the world
  4. Don’t wait to do the work
  5. Pay it forward (don’t skip this!)

Prefer to read everything as one long piece instead? i gotchu. 💖


Be perceived

All right, so you (still) want to get into AI policy! The first question you should ask yourself when trying to move into a new domain is “how would anyone know that this is something I am trying to do?”  

For the purposes of this section, I’ll mostly be talking in terms of getting a paying job in AI policy, but a lot of this will apply even if you aren’t looking for paid employment.

Even though the AI policy market is competitive, there are still many opportunities. Many of those opportunities may not be accessible, however, through public channels (like on job boards). Many jobs are never listed publicly or are published only as a formality once a candidate has already been selected. The world is not a meritocracy, and often “who you know” does make quite a bit of difference, even if people might pretend otherwise.

Don’t despair! Meritocracy actually sucks in a lot of ways, and you can navigate this reality. You can become legible to your prospective collaborators, employers, and funders. You just need to help them to help you. You need to be perceived. 

Whether you are already deep in this field or just getting started, you can make it easier for other people to find out how interested and passionate you are about AI policy. I am not talking about just adding the words “AI Policy Enthusiast” to your LinkedIn headline. Like the classic writing advice instructs: show, don’t tell. There are lots of people who say they want to work on these topics, but what are you doing (or what have you done) that demonstrates to people that you care about AI policy?

Here are some examples of things I have done that helped demonstrate my interest and expertise in this space:

  • Created an “AI study group” to audit free online machine learning courses with a group of coworkers
  • Followed the hashtags of conferences on social media and engaged in conversations about the presentations (even if I wasn’t physically present!)
  • Participated in and helped host meetups and events focused on tech and social impact
  • Shared articles about tech policy that I thought were interesting or informative
  • Joined a machine learning research paper discussion group
  • Publicly asked questions and shared reflections about topics I was learning about

The things you do don’t need to be particularly grand or consequential. They don’t need to be fancy or “official” or especially advanced. Sure, it’s great if you can run a public campaign or draft a new AI policy strategy from scratch, but you can also show your commitment and interest just by starting a three-person reading group with some friends. You can commit to summarizing one article per week on LinkedIn. You can maintain a running list of your favorite lectures on AI on a personal website. There are many ways to demonstrate your interest, growth, and commitment. You just need to give other people the chance to learn about it.

You do not have to wait until you are an “AI policy expert” to do this, either! Rather, it is incredibly useful to “learn out loud” by sharing your questions and journey as you learn. The first item on my list (created an “AI study group”) was something that I did knowing full well that most, if not all, of the people who joined the study group would know more about the topic than I did. I scheduled the meetings, reserved the room, and handled the logistics, but often I was the one asking the most—and often the most basic—questions during our meetings.

Despite this, not only was I learning, but the other people were learning, too. So they kept showing up. Through my “learning out loud,” my coworkers got answers to questions they might never have asked and got to practice explaining topics that they thought they understood. All the while, whether I knew it or not, I was establishing myself, even if just within my company, as someone who cares deeply about these topics.

I was PERCEIVED even to the extent that people started sending me unsolicited links to articles or invites to events based on my clear interest. If people in your friend group or at work are sending interesting AI policy stuff to you, that’s a good sign that you have effectively indicated that this is a thing you care about.

Step one: Mindset

Step two: Be perceived

Next up: Be the reply guy you want to see in the world

How to get into AI policy (part 1)
B Cavello · 2024-01-07
https://posts.bcavello.com/how-to-get-into-ai-policy-part-1/

B stands in front of the US Capitol holding up a hand to gesture toward the iconic dome

I’m frequently asked for advice on how to break into the industry I’m in or how to achieve a position I’ve held. I’ve been privileged to serve in many interesting and varied roles across sectors, from small nonprofits and garage startups to huge multinationals and even the US Federal Government. While I personally owe a lot of my journey to privilege, luck, and the good graces of others, I know that “be lucky” isn’t particularly useful advice. In the spirit of providing something actionable, I’ve gathered here some reflections on things that I think have served me in my journey. I hope these reflections will be helpful for others on their journeys as well, and I encourage those with experience in this space to share more about their journeys, lessons learned, and advice too!

Note: For the purpose of this post, I’ll be talking about “getting into AI policy” and offering some specific examples, but most of these suggestions should hold regardless of what it is specifically you want to do. For more on entering the tech policy field, see my Emerging Technology Policy Careers profile.

This series comes in five parts:

  1. Mindset
  2. Be perceived
  3. Be the reply guy you want to see in the world
  4. Don’t wait to do the work
  5. Pay it forward (don’t skip this!)

Prefer to read everything as one long piece instead? i gotchu. 💖


Mindset

Before we get started, it’s important to acknowledge that in many ways “AI policy” isn’t really A Thing. It may be becoming more of A Thing, but more often AI policy is actually privacy policy, intellectual property policy, trade policy, labor policy, etc., with a technology angle. AI policy is very mushy. Since it can be thought of as the Venn diagram intersection of AI and [insert just about any other sort of policy here], that means there are a million ways to “be in it,” and even more ways to “get into it.”

Unlike some other fields or career paths, there isn’t (yet) an agreed upon qualification, like getting a particular certification or degree, that will clarify that you are “ready for AI policy.” This may feel daunting, especially if you are trying to make decisions about what to pursue in school, for instance. On the other hand, this may feel liberating. I studied economics as an undergrad, and some of my AI policy inspirations studied aerospace engineering, international relations, cognitive science and philosophy, history of art and visual studies, and computer science. There are many different courses of study and life experiences that are relevant to AI policy, and my hope is that the field continues to embrace this variety because we need many diverse perspectives to do this work well.

AI policy is wiggly. (Much of the world is!) There are many possible paths, and there likely (and HOPEFULLY) always will be.

That said, it does help to get specific. Remember “AI policy” isn’t really A Thing. It is everything and nothing. It’s almost like saying you want to work on “academia” or “business.” It is important to be able to articulate the particular elements of AI policy that are interesting to you or that you have skills or experience in.

Maybe there’s an issue you’re especially passionate about. Do you have personal experience with something that makes you an experiential expert? Your particular background or expertise may be needed but missing in the dominant conversation. Perhaps you’re transitioning from another career, and you have a deep understanding of an industry and how AI is impacting (or could impact) that field. Maybe you have technical expertise you can bring to the policy space or a policy background that you're now ready to apply to AI.

Whatever the case, try to go at least one level deeper beyond “AI policy” to give some color to the particular problems you want to solve. If you’re not sure yet, that’s okay. But you should aim to develop an answer (or a couple) to this question if you’re serious about getting into this space.

Another thing to acknowledge before we dive in: while there are many possible paths into AI policy, there are also many possible humans. If you want to get a job in this space (which is NOT the only way to be involved, more on that to come!), you’ll need to convince another person to pay actual money to employ you. The AI policy market is competitive. There are lots of people trying to enter this domain, so it’s important to understand what value you bring and what other people actually want.

When I say “what other people want,” I use the word “want” on purpose. There are lots of things that are in need of doing in the world, but the harsh reality is that if you want to do this work professionally, you are subject to the forces that make people willing to pay for things and, thus, to the desires of people who have the money to pay for them. Be prepared to think about what skills and experience you have to offer and how your labor “adds value” compared to alternatives. It also helps to understand that careers in this space are often full of compromises (on things like salary and location but also things like subject matter, impact, and even personal safety).

Also worth noting: pretty much everywhere is shaped by the dominating forces of capitalism. Everyone has funders/clients to satisfy, be they in nonprofit, academia, government, or corporate work. We may be able to create enclaves as we build alternatives, but even those spaces will have to interface with capitalism, at least at their edges.

Sorry. I know this is a kinda spooky start to a piece on entering an exciting new field, but I feel like if you’re ready to navigate the ambiguity of qualifying expertise and to acknowledge the power dynamics at play, then you’re ready to really get started.

Step 1: Mindset

Step 2: Be perceived

Leave your job like a boss
B Cavello · 2023-12-27
https://posts.bcavello.com/leave-your-job-like-a-boss/

Small child waves goodbye before disappearing down a tunnel slide

Sometimes, leaving a job is thrilling!

Other times, it’s decidedly less so.

It’s estimated that over 100,000 tech workers lost their jobs in the last year. I’ve known several people who were laid off, often with very little warning. It sucks. It’s scary and frantic, and it can be hard to know what to do.

Inspired by a moment like this a couple years ago, some friends and I put our heads together to make a game plan so we could support one of our colleagues through one of these hectic, stressful moments.

What we ended up creating became a resource that we would turn to again and again, as we transitioned roles, whatever the circumstances. I’ve shared this checklist (along with my favorite resume writing guide) with teammates, colleagues, and family members, and I've adapted it over time based on feedback.

Below, you’ll find the How you leave a job like a boss checklist, but you can also save a copy of your own as a GDoc including some extra templates for writing your out-of-office, LinkedIn, and goodbye email messages.

Even though this particular job may be over, you’ll likely continue to have relationships (and potentially promises to keep) connected to your old employer. Keep in mind any agreements (nondisclosure, nondisparagement, noncompete, etc.) that you may have entered into. If you have questions or doubts about whether you have ongoing obligations, check with an employment-law attorney to make sure you don't unintentionally create friction between yourself and your former employer down the road.

Huge thank you to Anamita, Kevin, Rachel, Susannah, and all the friends who helped make this list such a useful resource for so many people!


How you leave a job like a boss

  • First step: take a breath
  • Contacts
    • Personal thank you email to everyone who you might want to stay in touch with in the future (CC yourself)
      • If you cannot export contacts: DO NOT BCC and do not use listservs/groups; these need to be individual emails so you have addresses you can reach from an outside account
      • If you can export contacts: do that, check that you have them accurately, then you can BCC if needed
    • Turn on Vacation Reply with your new contact information (allow external people, if that’s a setting you need to check)
      • You can do this before you actually leave (last three days or so)
    • Back-up any phone contacts you might need to access
    • Save copy of any org charts/lists you need to make sure you can reach out to people on LinkedIn
    • Post in your Slack/chat channel(s) with your contact info
  • Emails
    • Forward any emails you may need to reference from your work inbox to yourself
      • Look for: positive feedback from clients/managers, significant dates, etc.
      • Be careful not to forward any confidential information (and note that forwarding emails can trigger flags in the system if your email and communications are monitored)
    • If there are any significant success stories you will want to reference when writing your resume, make a record
  • Files
    • Save what you want and can (FWD or PDF)
    • Be sure that you have transferred any personal files/photos/documents
  • Hardware
    • Do you have to return your phone? Check
    • Do you have to return your computer? Check
    • If you are not keeping your devices, wipe them in case there was any personal data
  • HR
    • Get a record of your severance/layoff for unemployment benefits
    • Review severance if applicable and make note of any significant dates or deadlines
    • Review healthcare if applicable and make note of any significant dates or deadlines
    • Confirm how and when you will be sent your final paycheck
    • Make sure your address on record is up-to-date
  • Manager close-out call
    • If you are being laid off: Ask to schedule not on the day of the news, but instead “next Friday” or something (so you can both be in a better headspace)
    • Lead with empathy: “I have had a hard week, but I bet it’s been a real rough one for you, too”
    • When you have your call, ask for three things:
      1. Logistics
        • Establish contact for go-to person for questions about HR (if you don’t already have one)
      2. Personal Feedback
        • Radical candor feedback
        • 3 places to improve
        • 3 places I shine
      3. Future Support
        • Ask them to be a reference
        • Ask for a recommendation on LinkedIn
  • LinkedIn
    • Ask teammates to endorse/recommend you on LinkedIn
      • Include clarity on the specific skills you want to be recognized for in your future pursuits
    • Immediately update your LinkedIn headline to reflect your openness to opportunities (be available early to extend your runway)
    • Update LinkedIn job searching status to “looking”
    • Add all your teammates on LinkedIn
      • You can personalize a copy/paste message if you're feeling overwhelmed
    • Goodbye post on LinkedIn (“thanks for the memories”)
    • LinkedIn Skills
      • Find three roles you like on LinkedIn
      • Look up the skills those roles require and identify which ones you have
      • Add those skills to your profile
      • Get five people to endorse each one (preferably not the same five)
  • File for unemployment (if applicable)
  • Dance Party 🎉

Want your own copy of the How you leave a job like a boss checklist? Use this template, which includes resources for writing your out-of-office, LinkedIn, and goodbye email messages.

Moonwalking second life character
Why I’m (still) hyped about the Algorithmic Accountability Act B Cavello https://posts.bcavello.com/why-im-still-hyped-about-the-algorithmic-accountability-act/ 2023-09-17T12:21:00-04:00 B kicks up a leg in an enthusiastic pose in front of the iconic columns of the Supreme Court, fenced in with the rest of the capitol compound with barbed wire and armed guards in response to the January 6th attack on the capitol.

Update! The Algorithmic Accountability Act was re-introduced September 21, 2023 by Senators Wyden and Booker and Representative Clarke along with 14 other co-sponsors.

I spent 2021 serving as a technology advisor to Senator Ron Wyden (D-OR) and learning about being a staffer in Congress through the TechCongress Congressional Innovation Fellowship.

To apply to TechCongress, visit techcongress.io/apply. You can read about what motivated me to work in Congress and see my answers to the 2021 cohort application as a reference. I've also shared some ideas on how to get into AI policy, in general!


One of the things that I am most proud of having had the opportunity to work on is the Algorithmic Accountability Act of 2022. That bill had some really cool ideas articulated in it, if I do say so myself, but (as I experienced when I first started on the Hill back in January of 2021) legislation can be difficult to read for those who aren’t deeply familiar with it.

I decided to share what I think makes this bill really interesting and exciting, and to highlight some of what makes it an important piece of legislation not just for AI governance but for how we think about technology policy more broadly. I also wanted to explain some of the thinking behind the bill in case it might inspire others or challenge people to re-examine how they approach policymakers to make change.

What follows is a cleaned up, edited, and expanded version of what was originally shared on Twitter (and later Mastodon).

Due to length, I’ve split it up into parts.

Legislative text doesn’t exactly have a reputation for being super exciting stuff. With all of the sections and subsections with references to other sections and subsections, it certainly can feel a little overwhelming to read. But Federal legislation can also be powerful, and I think that it can be really interesting if you understand how decisions about the particular words that make it up shape the ways that legislation may be interpreted.

For a recap of what the Algorithmic Accountability Act does, at a high level, and where it came from, check out Algorithmic Accountability from 10,000ft.

There’s so much we could get into, but for now I’ll be sharing three things that I am personally excited about in the approach taken in the Algorithmic Accountability Act of 2022.

LET’S DIG IN!

Thing I’m excited about #1: Impact Assessment is an activity not an artifact

One thing you may notice in reading the body of the bill is that it very rarely talks about impact assessment as a plural (“impact assessments”). This is because it treats impact assessment as a mass noun.

Didn’t think you were gonna get a grammar lesson, did ya?

Koolaid Man bursts through the wall of a kitchen

Mass nouns (also called noncount nouns) are words like diligence, management, information, feedback, hospitality, mail, poetry, software, training, legislation, or even… bacon! Mass nouns can be used to denote continuous rather than discrete entities. By describing “impact assessment” as a mass noun, this is an intentional shift away from thinking of assessments (plural) as discrete, individual units and toward something more continuous. We’re not talking about one-off events, but rather impact assessment as an ongoing activity.

One way I like to think of it is like documentation of code. When you’re developing software, there are sometimes discrete artifacts and resources that are produced (documents), but documentation is not just artifacts, it is a process of tracking changes, describing goals, and communicating functions. It is an ongoing activity throughout the lifecycle of software. Documentation may be updated every time a change is made or whenever something significant happens, depending on the architecture and specifics of the process being documented.
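To make that software analogy concrete, here’s a toy sketch in Python (entirely my own illustration, nothing from the bill): the docstring below is an artifact, but keeping it truthful every time the code changes is documentation, the ongoing activity.

```python
def approve_application(score: float) -> bool:
    """Decide whether an application is approved, based on a score.

    Change notes (the "activity" part of documentation):
    - v1: approval threshold set at 0.70
    - v2: threshold lowered to 0.65 after a review of false rejections
    Every change to the code should come with an update here.
    """
    return score >= 0.65
```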

Big shout outs to Jingying Yang, Timnit Gebru, Meg Mitchell, Hanna Wallach, Jenn Wortman Vaughan, and Deb Raji, whose brilliant thinking on documentation for machine learning systems first exposed me to this concept through work like ABOUT ML.

This “activity not artifact” approach is important because we know that systems are dynamic and that both the technologies and the environments in which they operate are subject to change over time, which will influence their impacts.

So that’s thing #1:

In the Algorithmic Accountability Act, impact assessment is a process, a set of actions, an ongoing activity that is integral to deploying automated decision systems to make critical decisions.

Thing I’m excited about #2: Focus on decisions, not data types

Thing 2 from Dr Seuss's Cat in the Hat

A pretty big shift from the 2019 version of the bill (and much of the legislation in this space) is the move away from the definition of “high-risk” systems to the frame of “critical decisions.”

Warning: this may seem a little pedantic at first, but when we’re talking about stuff that may turn into law and be interpreted by the courts for decades to come, it pays to be specific!

The Algorithmic Accountability Act uses this framing of “critical decisions,” but a lot of legislation and regulation for AI and automated decision systems (ADS) uses “high-risk” systems. Before I get into why “critical decisions” might be preferable, let’s break down the criteria most often used to define “high-risk” systems and why—despite being appropriate in some other texts—they’re not appropriate for Algorithmic Accountability.

Here’s the thing about regulating “high-risk” systems: you kinda have to have an idea already about what is risky. The other legislation in this space tends to conceive of risk through three main criteria:

  1. number of people impacted
  2. sensitivity of data involved
  3. severity of impact

Let’s step through this list and talk about why each of these things can be either difficult to use in—or even possibly antithetical to the goals of—the Algorithmic Accountability Act and potentially other bills written to regulate these ADS technologies, as well.

a) Number of people impacted

One way to think about the risk of a system is related to the number of people who are impacted by that system. This makes a lot of sense for many applications, but for the Algorithmic Accountability Act, using the number of people impacted to define risk just wouldn’t work.

Don’t get me wrong: I think this is an important thing to try to capture when thinking about what may make a system more or less risky, but for Algorithmic Accountability the issue is that until you have assessed a system, you may not know who all it impacts! You can’t define the type of system that is captured by a rule by something you don’t know until you actually apply the rule, so for Algorithmic Accountability, we had to take a different approach.

Sidebar: there’s a WHOLE other conversation to be had about how to define things like “number of users” that probably needs way more standardization because hoo boy! do people disagree on that. (Even a single company may have many different definitions & metrics for defining “number of users” across different teams!)

If you’re a staffer writing legislation trying to navigate this… Godspeed. For folks outside of Congress who want to make tech laws better, try writing some thought-out definitions for “users” in different contexts. It could really go a long way.
JUST SAYINGGG

b) Sensitivity of data involved

When you can’t define by the scale of impact (number of people), it can be tempting to focus on the types of permissible data instead. This is actually what the original 2019 version of the Algorithmic Accountability Act did.

There is so much established literature and law about sensitive data, personally identifying information, protected health information, and so forth! And for sure: there is a real, pressing need for data privacy legislation in the US. There are real harms that come from sensitive information being exposed or used irresponsibly, and the explosion of data collection about people makes this all the more urgent!

Privacy law is important and urgently needed, BUT privacy law and algorithmic accountability law are complementary causes, not substitutes for one another. Not only do law and regulation for AI & ADS need to do different things, but sometimes the goals of privacy and algorithmic accountability are in tension!

Problematic old phrenology guide shows two different illustrations of men's faces, one labeled "a genuine husband" and the other labeled "an unreliable husband"

Regulating systems for making decisions based on the data INPUTS to those systems rather than their specific uses creates perverse incentives to use less ~sensitive~ data, even if that data is actually the most pertinent information for the situation. Making decisions using only benign information can still be dangerous. If a system is used for making critical decisions about a person’s healthcare, it probably SHOULD be using sensitive health information! Using more benign data (through proxies or straight up irrelevant info) may not only be unhelpful, it may actually harm people.

This is why focusing on “high-risk” systems as defined through data sensitivity is dangerous.

Finally: c) Severity of impacts

Severity of impacts is now probably one of the most common approaches for defining systems as “high-risk.” This also can make sense in some contexts, but wasn’t the right focus for the Algorithmic Accountability Act.

It’s worth noting that there are different types of bills (which turn into different types of laws). Because different bills have different goals, they may focus on addressing the same problems with different approaches. For example, some bills may try to address the negative impacts from ADS with a goal of providing recourse to people harmed. With that goal, measuring risk by severity of impact can be useful. This strategy may be applicable in cases where the impacts of ADS are well documented. It does depend, however, on knowledge about the impacts of using a system.

Not only do we not know what the impacts of many systems are yet, we actually don’t know all of the different sorts of systems out there. In Algorithmic Accountability, our goal is to UNCOVER the impacts in contexts where automated systems are used in order to identify harms (as well as positive impacts). Because this information isn’t known, focusing on this kind of “high-risk” system doesn’t work.

The alternative? Critical decisions

So we’ve talked through three of the main ways people classify ADS as “high-risk,” but you might be saying to yourself, “but none of those capture how the EU AI Act—one of the most significant pieces of legislation on algorithmic regulation out there—does it!” And you’d be right.

A second understanding of the severity-of-impacts framing is the potential for harm if a system doesn’t perform accurately or performs in a way that is biased or otherwise problematic. This is the approach that the EU AI Act takes, as I understand it. Its approach to “high-risk” systems is defined based upon the potential for a technological system to be the source of harm. This is certainly one of the ways that a system could be “high-risk,” but in this section, I hope to communicate why I think focusing on “high-risk” systems is actually incomplete, and why the “critical decision” framework matters. It’s a little subtle, but I think it’s really important!

As we covered above, the Algorithmic Accountability Act of 2022 doesn’t define its automated systems of interest by 1) number of people impacted, 2) sensitivity of data used, or even 3) severity of impact. In fact, if you look closely, you might notice something peculiar. The Algorithmic Accountability Act of 2022 isn’t really about algorithms.

Film still of a man looking distraught with the subtitle reading "The whole damn thing is about decisions..."
I have not seen this film nor can I testify to its quality.

Instead of focusing on particular systems, most of Algorithmic Accountability is written about “augmented critical decision processes.” So let’s unpack that!

As described in Algorithmic Accountability from 10,000ft, an “augmented critical decision process” is a process where an “automated decision system” is used to make a “critical decision.” Said another way: an augmented critical decision process (or ACDP) is a process where computational systems are used, the results of which serve as a basis for a decision or judgment about the cost, terms, or availability of a bunch of critical stuff in people’s lives like education, employment, healthcare, and housing.

These ACDPs are referred to throughout the bill (62 times, in fact!) because the Algorithmic Accountability Act recognizes that harms caused when employing ADS to make critical decisions may not only come from the ADS. Instead, Algorithmic Accountability recognizes that automation has the capacity to scale up and speed up existing harmful processes, often while obfuscating the actual decision makers, making accountability more challenging.

"Pay no attention to the man behind the curtain" scene from the Wizard of Oz

Therefore, Algorithmic Accountability doesn’t just require assessing the impacts of automated decision systems; rather, it requires that we assess the impact of the whole ACDP, the whole critical decision process that is being automated, to document how these decision processes work and the role that ADS plays. By doing this, we not only hope to better understand these processes but also to identify and mitigate any harms uncovered along the way.

So, even though the categories used in the EU AI Act and Algorithmic Accountability Act of 2022 may appear similar, the targets of assessment are actually subtly, but importantly different.

Another thing that the ACDP framing accomplishes is that it narrows the scope somewhat. This is important for potentially boring government-y reasons because, after all, this is an FTC bill.

You may notice that the list of things that make up these “critical decisions” does exclude some things that many people might expect to see in AI/ADS laws (like some of what’s in the EU AI Act).

This is related to that boring government jurisdictional stuff. Since the FTC is about consumer protection, it doesn’t cover government uses like the criminal legal system, immigration, or government benefits administration.

So that’s thing #2:

The Algorithmic Accountability Act isn’t about governing types of data or types of impact, but rather assessing the impacts of making certain types of decisions with the help of automated technologies.

So that brings us (finally!) to…

Thing I’m excited about #3: Three layers of disclosure

"Ogres have layers! Onions have layers" says Shrek attempting to illustrate a point to Donkey

One big critique of the 2019 version of the Algorithmic Accountability Act was that it did not include reporting on the impact assessment behaviors of covered entities (aka companies). Reporting is an important element of accountability because it offers a level of transparency into processes and it introduces procedural checks to ensure that impact assessment is, indeed, taking place. (Impact assessment is different from some other approaches like audits, licensing, etc., but these things are not mutually exclusive.)

I imagine this is one of the SPICIER elements of the bill, so let’s discuss!

The new version of Algorithmic Accountability has new disclosure requirements in three layers:

  1. internal assessment of impact within companies (an ongoing activity, remember?)
  2. summary reports (a particular artifact that comes out of that activity) submitted to the FTC
  3. information shared by the FTC to the public in the form of:
    1. aggregated anonymized trend reports that the FTC produces
    2. a searchable repository with key (identifying) info about the reports received

Before we get into why this is a Thing I’m Excited About, let’s first talk about what many people want this bill to do (which it doesn’t do), and then I’ll tell you why I think that THIS is actually even better!

Tom, the cartoon cat, on his knees pleading

(warning: caricature incoming)

A lot of people want Algorithmic Accountability to be about catching bad actors red-handed. They want to expose and name-and-shame those who are allowing their automated systems to amplify and exacerbate harms to people. This is righteous, and I empathize. I, too, want there to be justice for those harmed, and I want there to be real consequences for causing harm that willful and feigned ignorance do not excuse.

I do believe that this is a step in that direction, but this bill focuses on something slightly different: Algorithmic Accountability is less about helping the FTC catch wrongdoers (although there is room for that, and I’ll explain more) and more about making it easier and clearer how to do the right thing.

One of the great challenges in addressing the impacts of automated decision systems is that there is not (yet!) an agreed upon definition of what “good” (or even “good enough”) looks like. We lack standards for evaluating decision systems’ performance, fairness, etc. Worse still, it’s all super contextual to the type of decision being made, the types of information/data available, etc. These standards may one day exist! But they don’t yet. Algorithmic Accountability is about getting us there.

And part of getting there, I believe, is facilitated through the three tiers of disclosure and reporting.

Layer 1: Internal assessment of impact within companies

This comes back to what we talked about in Exciting Thing #1: impact assessment is a process, an ongoing activity, an integral part of responsible automated decision product development and deployment. The Algorithmic Accountability Act of 2022 requires all companies that meet certain criteria to do this and keep records internally of their processes.

Layer 2: Private submission of summary reports to the FTC

Now here comes the potentially ~spicy~ bit!

The bill requires companies to submit documentation substantiating their impact assessment activities to the FTC. (To see what’s required, peep Section 5.) This submission is done PRIVATELY, meaning that it’s between the government and the submitting company.

This documentation is required to be submitted before a company deploys—as in sells/licenses/etc OR uses, themselves—any NEW automated decision system for the purpose of making critical decisions. It is also required annually for any existing system as long as it is deployed. This reflects the continuous nature (the mass noun!) of impact assessment we talked about earlier. It is an ongoing activity, but these summary docs are snapshots of that activity in action.

Many folks may feel these reports should be made entirely public. I get where that’s coming from, but here’s why I think this private reporting to the FTC is actually a kinda clever way to go about it…

  1. Because we lack standards, it is premature to codify blanket requirements for which specific metrics (for evaluating performance, for instance) all companies should use. As such, companies will likely choose whichever ones make them look “best,” meaning people won’t put out damning info.

To be clear: this kinda “metric-hacking” is to be expected, and whether the reports are private or not, companies (out of fear of accountability or at least judgment) will probably assess impacts using the metrics that they think will reduce the likelihood that they get called out. Such is the nature of humans (especially within a punitive framework)!

  2. (Okay, now here’s the fun part!) Because these reports are submitted privately to the FTC, companies are now in a position of information asymmetry. They do not know what OTHER companies are saying they did or how they performed on THEIR metrics. They may try to do the bare minimum, but they don’t actually know what the bare minimum is!
Kid using a computer gives a thumbs up

Gotta love it when collective action problems work on our side! 😜

The FTC (plus some other agencies), however, gets to see across the collection. And this is super useful! Not because companies are going to “tell on themselves” (they will try incredibly hard not to do that) but because there are really interesting lessons to be learned from how different companies fulfill these requirements. There is as much to be learned from what particular companies do say in their reports as from what they don’t. The information asymmetry makes this more JUICY!

See, right now there’s a dynamic where any company (or, more likely, employee) that tries to really interrogate the impacts of these automated decision technologies gets called out for it. Inevitably, doing honest impact assessment will turn up some… room for improvement. But recognizing where things are going wrong is the first step in the process of doing something about it.

At the moment, though, asking the tough questions and being open about challenges makes one a target. It’s a “tall poppy” situation. It’s better to not know, to not try, than to find out the truth. The companies that do the least don’t make headlines. The automated decision systems that no one knows about don’t feature in hashtags. The current culture around responsible tech rewards keeping your head down, not asking questions, and staying obscure. It often punishes those that try to ask, to measure, to identify and prevent harm.

This private reporting dynamic shifts that calculus.

With the Layer 2 reporting constraints, companies aren’t telling on themselves so much as they’re telling on each other. By doing more thorough assessment compared to industry peers, companies make those OTHER companies look worse rather than themselves. This competition could even reduce collusion pressures. With Algorithmic Accountability, there is an opportunity for a race to the top that doesn’t exist in the current equilibrium.

Maybe you think that this is all just “going through the motions,” and this reporting is just a song-and-dance that won’t make any REAL difference. I guess it’s possible, but even “going through the motions” can save lives. Honestly, there’s so much BASIC stuff out there that hurts people that could be avoided if people were even just a little bit conscious of it when designing, developing, and using powerful automation tools. Even the much-maligned “checklist” for impact assessment can be a powerful tool if it provides air cover for well-intentioned employees of companies to work in the public interest.

Maybe you say “but still, there are things that The Public really does deserve to know!” And it’s true. Some things are really essential. Like knowing what decisions about you are being automated or knowing if there is a way to contest or correct one of these decisions.

And so… 

Layer 3: Information shared by the FTC to the public

There are two complementary components to Layer 3 of the reporting:

  1. Aggregate anonymous trend reports
  2. A searchable repository.

Hop over to Section 6 of the Algorithmic Accountability Act to see the details of what information will be made public about companies’ use of automated decision systems to make critical decisions about people’s healthcare, education, and more!

The Algorithmic Accountability Act is a consumer protection bill (where consumer is defined as… any person. Turns out there’s no official FTC definition of consumer! 😜) Part of that consumer protection comes from making key information available to the public in a place where individuals—and also awesome consumer-protection and advocacy organizations—can access it.

This 3rd tier of disclosure consists of two different flavors of information. One is an information-rich, qualitative report of the findings and learnings aggregated from the multitude of individual reports. This is where the FTC can highlight differences and patterns.

Personally, I’m really interested to learn about things like… do different critical decisions (health vs employment) gravitate toward different metrics for evaluating performance? What types of stakeholders are being consulted? How?

The second half of Layer 3 is the public repository. This has more limited information, but contains a record for every critical decision that has been reported and contains that key information we alluded to earlier. The repository must “allow users to sort and search the repository by multiple characteristics (such as by covered entity, date reported, or category of critical decision) simultaneously,” ensuring that it can be a powerful resource for both consumers and advocates as well as researchers.
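If it helps to picture what “sort and search by multiple characteristics simultaneously” could mean in practice, here’s a minimal sketch in Python. The field names (covered entity, date reported, category of critical decision) come from the bill’s examples quoted above, but the record shape and search function are purely my own illustration, not anything the bill specifies.

```python
from dataclasses import dataclass

@dataclass
class RepositoryEntry:
    # Illustrative fields, borrowed from the bill's examples above
    covered_entity: str
    date_reported: str      # e.g. "2023-09-21"
    critical_decision: str  # e.g. "healthcare", "employment", "housing"

def search(entries: list[RepositoryEntry], **filters) -> list[RepositoryEntry]:
    """Return entries matching ALL of the given characteristics at once."""
    return [
        entry for entry in entries
        if all(getattr(entry, field) == value for field, value in filters.items())
    ]

# e.g. every healthcare-related report from one (hypothetical) covered entity:
records = [
    RepositoryEntry("Acme Decisions Inc", "2023-09-21", "healthcare"),
    RepositoryEntry("Acme Decisions Inc", "2023-10-02", "employment"),
    RepositoryEntry("Foo Systems LLC", "2023-09-30", "healthcare"),
]
print(search(records, covered_entity="Acme Decisions Inc", critical_decision="healthcare"))
```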

Together, these three layers of information disclosure can provide an opportunity to 1) catch issues early, while companies can still fix them, 2) motivate a greater “race to the top” both in how automated decision systems are used and in impact assessment itself, and 3) provide the public with essential information for making better-informed choices and for holding companies accountable.

And that’s thing #3:

The Algorithmic Accountability Act uses three different layers of information disclosure to maximize the impact of assessment.

There you have it, folks!

"You did it" exclaims Gene Wilder as Willy Wonka at the end of the movie

My big 3 reasons why I’m (still) hyped about the Algorithmic Accountability Act of 2022:

#1: Impact Assessment is an activity not an artifact 
#2: Focus on decisions, not data types 
#3: Three layers of disclosure

There’s a lot more to say, but I hope this breakdown helps illustrate some of the clever ways that writing robust legal definitions about tech and red-teaming regulatory requirements can potentially produce better legislation for tech policy issues!

Algorithmic Accountability from 10,000ft  B Cavello https://posts.bcavello.com/algorithmic-accountability-from-10000ft/ 2023-08-15T12:12:00-04:00 Update! The Algorithmic Accountability Act was re-introduced September 21, 2023 by Senators Wyden and Booker and Representative Clarke along with 14 other co-sponsors.

In January of 2021, I moved to Washington, DC. It was a difficult week to come to this nation’s capital, but I am grateful I did. I spent the next year serving as a technology advisor to Senator Ron Wyden (D-OR) and learning about being a staffer in Congress through the TechCongress Congressional Innovation Fellowship. I had incredible mentors and champions both on and off the Hill, and I was able to work on many different important tech policy issues.

To apply to TechCongress, visit techcongress.io/apply. You can read about what motivated me to work in Congress and see my answers to the 2021 cohort application as a reference. I've also shared some ideas on how to get into AI policy, in general!

One of the things that I am most proud of having had the opportunity to work on is the Algorithmic Accountability Act of 2022, introduced in February of that year. The bill reflects the thinking and input from many, many brilliant people, but I’m glad to have been one of the key staffers crafting this text behind the scenes. The Algorithmic Accountability Act of 2022 had some really cool ideas articulated in it, but (as I experienced when I first started on the Hill back in January of 2021) legislation can be difficult to read for those who aren’t deeply familiar with it.

Scene from the Matrix with the world rendered so that the walls and people all look like they're made up of green code on a black screen

I wanted to write something to explain what I think makes this bill really interesting and exciting, and to highlight some of what makes it an important piece of legislation not just for AI governance but for how we think about technology policy more broadly. I also wanted to explain some of the thinking behind the bill in case it might inspire others or challenge people to re-examine how they approach policymakers to make change.

What follows is a cleaned up, edited, and expanded version of what was originally shared on Twitter (and later Mastodon).

Due to length, I’ve split it up into parts.

Let’s get into it!

For starters: the Algorithmic Accountability Act of 2022 is a bill “to direct the Federal Trade Commission to require impact assessments of automated decision systems and augmented critical decision processes, and for other purposes.”

But what does that mean?

Let’s back up a bit. The Algorithmic Accountability Act of 2022 is a piece of legislation that was introduced in the 117th Congress of the United States. It is a bill, which is a document that has a bunch of legal-sounding language that was written and submitted for Congress to consider as something to turn into a law. The 2022 Algorithmic Accountability Act is actually a revision (an update) and reintroduction (a re-submission for consideration) of an earlier bill originally introduced in 2019, which, itself, was an independent introduction of some text that was originally included as a piece of a different 2019 bill called the Mind Your Own Business Act (which was also revised and reintroduced earlier in 2021).

Patchwork "crazy quilt" with intricate stitching between the various irregular-shaped colorful fabric pieces

You might be noticing a pattern.

It’s pretty common for US federal bills to be revised, reintroduced, remixed, and otherwise Frankensteined into different versions as people make edits, incorporate feedback, and even change office. The Algorithmic Accountability Act underwent some pretty significant updates from 2019 to 2022 and ultimately got quite a bit longer than its predecessor. This was necessary to clarify definitions, explain processes, reflect best practices, and decrease ambiguity. Personally, I now have a much greater understanding of why lawyers are Like That and a greater appreciation for specificity and caring where the comma goes! There can be very good reasons for legal text to be really long and wordy.

So what does “to direct the Federal Trade Commission, etc etc” actually mean?

Here’s the tl;dr: The Algorithmic Accountability Act of 2022 says that the US Federal Trade Commission (FTC), one of the Federal agencies that—as part of its mission to protect consumers—regulates how companies behave, needs to create and then enforce requirements for companies to assess the impacts of “augmented critical decision processes.” Here’s a one-pager summarizing it, as well.

That was already a lot, so we’re going to break it down further.

An “augmented critical decision process” is a process where an “automated decision system” is used to make a “critical decision.”

What are “automated decision systems,” you say? Here’s exactly what it says in Section 2(2) (or “§2(2)” if you wanna be fancy):

The term “automated decision system” means any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.

In essence, these are computational systems, and in this bill, they are pretty broadly defined. This reflects research from experts like Rashida Richardson recognizing both that:

  1. Technology evolves and definitions need to be robust against the rapid rate of change
       AND
  2. Many harmful systems are… kinda boring!

While new innovations in AI and machine learning with deep neural nets are dazzling (and sometimes terrifying!), a lot of the automation that is taking place across society is not particularly technologically advanced. Even so, automated technologies have the power to scale benefits and harms to millions of people. (This is especially true when they are used to make “critical decisions” about people’s lives!) So—even though it’s often thought of as an AI bill—the Algorithmic Accountability Act of 2022 doesn’t specifically focus on AI or particular automation techniques.

Okay, so we have a definition for an automated decision system; now what is a “critical decision”? Critical decisions are decisions relating to consumers’ access to or the cost, terms, or availability of education & vocational training, employment, essential utilities, family planning, financial services, healthcare, housing or lodging, or legal services. (We will dig into this more in Why I’m (still) hyped about the Algorithmic Accountability Act, but you might notice that there are parallels in this language to the EU AI Act’s 2021 “Annex III: High-risk AI Systems Referred To In Article 6(2)”.)

So that’s what the bill says it’s about. Put all together: it’s about telling the FTC to create and then enforce requirements for companies to assess the impacts of using computational systems, the results of which serve as (or are intended to serve as) a basis for a decision or judgment about the cost, terms, or availability of a bunch of critical stuff in people’s lives like education, employment, healthcare, and housing.

That’s quite a mouthful, which is why legislative texts often define a bunch of terms to serve as a shorthand (kind of like creating variables in computer code).
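To stretch that variables analogy just a bit further, here’s a throwaway Python illustration (mine, not the bill’s): define the long phrase once, then reuse the short name everywhere, the way a §2 definition gets reused throughout later sections.

```python
# Illustrative only: a defined term behaves like a named constant.
# The definition text paraphrases the plain-language summary above.
CRITICAL_DECISION = (
    "a decision or judgment about the cost, terms, or availability of "
    "critical stuff in people's lives like education, employment, "
    "healthcare, and housing"
)

# Later "sections" can reuse the shorthand instead of restating the phrase:
print(f"Covered entities must assess any process producing {CRITICAL_DECISION}.")
```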

In part 2 on why I’m (still) hyped, I’ll break down some of the things that I personally find most exciting, but if you want to get more context you can read the section-by-section summary of the bill for more info (or you can even read the full text if you’re into that) along with other resources linked at the bottom of this press release.

Fortunately, foresight B Cavello https://posts.bcavello.com/fortunately-foresight/ 2022-08-29T12:05:00-04:00 Illustration from 'Fortunately' childrens book showing a boy floating in the sky with a parachute as airline debris falls behind him. The text on the page reads 'Fortunately, there was a parachute in the airplane.'

“Fortunately, there was a parachute in the airplane.”

Growing up, I had the pleasure of reading (or being read) Fortunately, the 1964 classic by Remy Charlip. This children’s book details the story of a boy who encounters a variety of twists and turns on his way to a birthday party. Every page of the story reveals a new “fortunate” or “unfortunate” event.

For example: Unfortunately, the airplane engine broke down. Fortunately, there was a parachute in the airplane. Unfortunately, there was a hole in the parachute.

Inspired by this story, I have been using the Fortunately format to facilitate discussions about the future of our world with the transformational impacts of technology. The simple back-and-forth alternation of sentences starting with “fortunately” or “unfortunately” provides a useful (and often entertaining) structure for exploring a potential change or scenario beyond just its first- or second-order effects.

An example of the output of a Fortunately/Unfortunately activity. It reads: Unfortunately there is a problem of fake news online. Fortunately AI tools can help readers distinguish what is real vs fake news. Unfortunately news sites are now putting relevant true key words in their articles that allow AI to determine that news to be true when it may not be. Fortunately the FTC can help regulate this. Unfortunately the FTC takes a long time and decisions can be appealed. Fortunately we can crowdsource solutions from human readers. Unfortunately bots are submitting false reviews. Fortunately AI can determine fake reviews. Unfortunately we have come full circle.
An example of the output from a recent workshop on the impacts of AI.

Given the popularity of Charlip’s book and the simplicity of the format, I’m sure that others have created similar activities, but after having used this “Fortunately/Unfortunately” activity to great success in multiple foresight-oriented workshops, I was surprised to learn that none of the participants or other facilitators had experienced it before. As a result, I’ve decided to write a little explanation here as well as provide some resources for facilitating this activity below.

How to Do It

To run a “Fortunately/Unfortunately” activity, provide participants with a scenario prompt or choice of prompts based on the topic you hope to examine. These prompts should be concise and evocative. They may come from earlier scenario development as in a foresight context or may be taken from contemporary news headlines or other sources.

Some examples I used in one workshop:

  • The Earth is running out of helium
  • 10MM dogs & cats are lost/stolen in the US annually
  • Invasive Aedes aegypti mosquitoes are spreading
  • 5.6MM children under age 18 have food allergies
  • IKEA is discontinuing the BLÅHAJ 🦈

Whatever the specific prompt scenario, encourage participants to write a story using the alternating sentence format to generate as many sequential fortunate or unfortunate effects or events as they can.

This activity is fairly accessible and may be most successful in smaller groups (3-5 people) with a shorter timeframe (10-25 minutes) as part of a larger workshop or discussion. I’ve used this format successfully in both online and in-person workshops.

I’ve included a Google Slides template for facilitating your own Fortunately/Unfortunately activity.

Variations

If you’re looking for ways to experiment with this activity format, here are some modification ideas to try:

  • Use different categories and qualifiers to steer exploration on the same prompt (cultural, environmental, technological, etc)
  • Pass the fortunately/unfortunately stories between different groups and have participants pick up where the last group left off
  • Switch the “fortunately” or “unfortunately” preface around on one of the sentences and see how that changes things
  • Explore “best case scenario” and “worst case scenario” versions of the same story