Understanding AI for LAFCO Agencies: Navigating the Future of Technology
Recently, you may have heard increasing chatter about Artificial Intelligence (AI) and its potential impact on various industries. As Executive Officers, staff, and commissioners of LAFCOs, it’s important to understand what AI is and how it might affect your work. This article explains AI, particularly Large Language Models (LLMs), and the potential implications for LAFCO operations.
What is AI and How Does it Work?
First, let’s demystify what I mean by AI. In simple terms, AI refers to computer systems designed to perform tasks that typically require human intelligence. Large Language Models, a type of AI, are systems trained on vast amounts of text data, allowing them to generate human-like responses. You might be surprised to learn that you’re likely already interacting with AI in your daily life, perhaps through your smartphone’s voice assistant, your email’s spam filter, or the predictive text that appears as you type a document or email. While I have not applied AI to work produced by South Fork Consulting, I have experimented with its applications and found that, while it can introduce errors, there are opportunities for AI to help LAFCOs and their staff.
Possibilities of AI in LAFCO Work
In the context of LAFCO work, AI and LLMs could assist with tasks such as document review, data analysis, and report generation. For instance, these systems could help summarize lengthy municipal service reviews or sphere of influence studies, potentially saving time in the review process. They might also aid in analyzing historical data on population growth, service demands, and land use patterns to provide more accurate projections for boundary reviews and service planning. This could lead to more informed decisions about annexations, sphere of influence updates, and special district formations or dissolutions. As the technology continues to develop, the possibilities could be endless.
As these LLMs advance, they can be trained to better produce documents that meet the needs of each LAFCO agency. They will likely allow LAFCOs to automate report generation, forecast service demand for agencies, and project population growth more accurately by incorporating multiple data sources (census data, local economic indicators, known and potential development projects, etc.). And while this is an exciting new chapter in humanity’s quest for ever-expanding technology, it’s crucial to approach these possibilities with caution.
AI Challenges and Risks
While AI can process information quickly, it lacks the nuanced understanding and local knowledge that LAFCO officers, staff, and commissioners have within their agencies. The complexity of boundary reviews, service planning, and community dynamics requires human judgment that cannot be replicated by AI. Transparency and explainability pose additional challenges. Many AI systems, especially complex ones like LLMs, operate as “black boxes,” making it difficult to understand how they arrive at their conclusions. This lack of transparency can be particularly problematic for government agencies like LAFCOs, which are required to provide clear justifications for their decisions to the public and stakeholders.
It’s also crucial to understand that AI can sound intelligent or correct without actually being accurate. These systems are designed to generate plausible-sounding text based on patterns in their training data, but they don’t truly understand the content in the way humans do. They can make mistakes, present outdated information, or even generate entirely fictitious “facts” that sound convincing. For example, a lawyer recently used an LLM to generate an argument for court and submitted the brief without reviewing it. Several of the cases cited as legal precedent weren’t real; the LLM also misidentified judges and referenced companies that didn’t exist. The incident made headlines, and the law firm was fined. This phenomenon, commonly called “AI hallucination,” underscores the need for rigorous human oversight and verification of any AI-generated content.
If LAFCOs do consider integrating AI into their operations, it should be done with caution and through a carefully planned approach. This might include starting with small, low-risk projects, ensuring full transparency about AI use, maintaining strong human oversight, and investing in comprehensive training for staff. For example, a LAFCO agency could start by tasking an LLM with summarizing long documents or reviewing an application for completeness. Any AI output would need to be checked for biases, errors, or incorrect information.
The Future is Already Here
Today, right now, consultants can use AI for summarization, data processing, document creation, and idea generation. Even if LAFCOs themselves don’t directly implement AI systems, they may interact with AI through their consultants’ work. LAFCOs should consider adding clauses to consultant contracts requiring disclosure of any AI use in their work for LAFCOs. Just as subcontractors must be listed in contracts, AI use should be disclosed as well. This transparency can help ensure you are fully aware of how AI might be influencing the information and recommendations you receive.
No one knows what the future will hold. Major advancements in technology are always met with concern and skepticism. While it’s important to embrace the future, the use of AI in LAFCO operations requires careful consideration and a cautious approach for now. Collaboration will be key in navigating this new technology. Engaging with other LAFCOs and government agencies to share experiences, best practices, and lessons learned in AI implementation can help us all navigate this complex and somewhat exciting new chapter of the human experience.