The hype around Artificial Intelligence has moved from the theoretical to the tangible, with businesses moving more of their experiments towards production use cases. While media reports of 90% AI project failure rates may be exaggerated, a significant number of promising initiatives still falter before reaching production. My experience helping large enterprises shows this isn’t due to a lack of enthusiasm, funding or technology, but a failure to build the necessary strategic and technical scaffolding required to productionise AI. Some projects finish on time and on budget, and deliver what was asked, yet never move into production. Moving from isolated pilot projects to scaled AI is a journey, and the people and change aspects need to be well planned out. Scaling AI successfully has a number of prerequisites:

Programme & Change Management

When Cloud became a hot topic, large programmes of work were spun up with governance teams and PMO offices; migrations were segmented into the 7 R’s and then split further into waves. AI is going to be substantially more transformative than moving workloads to the cloud, and the impact on people is going to be far greater, yet little thought has been given to the programme and change management aspects. This requires a fundamental shift in thinking: from treating AI as a series of disjointed tech experiments to embedding it as a core, strategic capability that encompasses the tech teams and change teams, stands up the required PMO, and develops a change strategy that incorporates your people.
Cloud Foundations

For our clients, this often means building out a proper resource hierarchy in the cloud, defining identity and access management protocols, implementing robust security controls, and establishing clear cost governance mechanisms. This gives everybody assurance that data will not be leaked or used to train models on proprietary data, and it is the foundation for giving the right people the right access. One thing that is still unclear to execs is what the infrastructure cost of AI is going to be; in our experience it is often magnitudes cheaper than execs are estimating, but those savings are quickly swallowed up by the required change management.

Use Case Identification

A focus on solving critical business problems is something that technologists often forget. Aviato are sure that a disciplined framework for identifying and prioritising use cases is what separates the projects that realise enterprise value from those that end up being scrapped. This does not need to be complex, and a bottom-up approach seems to get the most traction: the employees at the coal face are acutely aware of which parts of their jobs they want to automate away. What we have seen work:

Broad or Deep

One area where expectations are often misaligned is around what the AI implementation is supposed to do. Google Agentspace, or Glean, is an example of a broad, enterprise-wide AI implementation that is familiar: it will summarise all of your company’s knowledge and provide a chat interface similar to ChatGPT or Gemini, but grounded in your company data. These agents are great at saving time across a broad number of use cases, but are very unlikely to take autonomous actions. On the other hand we have the deep agents. These are very specific: they will help Peter from the legal team review contracts faster, or Mary from security find relevant details in logs. These agents are more likely to take autonomous actions, and are very specific to a role.
When implementing a broad AI platform, a lot of people expect something that will take autonomous actions, and while Google’s Agentspace has a very exciting roadmap, it is just not at the level of doing this deep work yet.

Centralised Framework

Once you have a production AI agent the job is not over: you now need a way to manage it and bring a structured approach to optimising it. What happens when OpenAI or Google release a new model? Do you switch immediately? Do the benchmarks they provide match your use case? In the same way that we run software updates for the newest version of Java, a structured approach to lifecycling system prompts and agents is required, validating them against the metrics that matter for your use case: latency is key for a chatbot, accuracy is key for a software engineering agent, and cost is a consideration for all use cases. Google is definitely leading the way with their Vertex AI evaluation tooling, but skipping over this and “YOLOing” changes to production agents is a problem waiting to happen.

Conclusion

Aviato are sure the future of the professional workforce is a partnership between humans and AI agents. However, this future won’t arrive by accident. It must be built with a disciplined, structured approach that treats AI not as a series of tech experiments, but as part of the core business strategy, run as a transformation programme.
Cloud
This is a selection of posts about our partnership with Google Cloud, and how we can help you implement Google Cloud services to solve your business problems.
This is a parent category with subordinate categories covering Cyber Security, Infrastructure, and our AI and ML practice.
Deploying ADK Agents to Agentspace
This post outlines the steps required to deploy an Agent Development Kit (ADK) agent from Agent Engine to Agentspace. Hopefully Google will publish some docs on how to do this; thanks to Andy Hood for figuring it out. If you do need help with this, reach out via https://aviato.consulting Follow these instructions carefully to ensure a successful deployment.

Note: both Agent Engine and AgentSpace have recently been renamed as part of Google’s AI branding, so you will still see references in the APIs to their previous names.

Prerequisites

Before beginning the deployment process, ensure you have the following: Instructions for developing and deploying the ADK agent are outside the scope of this document.

Deployment Steps

The deployment process involves several key steps:

Step 1: ADK Agent deployed to Agent Engine

Use the Reasoning Engines REST API to list the agents deployed to Agent Engine in your project:

LOCATION=us-central1
PROJECT_ID=aviato-project-id
TOKEN=$(gcloud auth print-access-token)

curl -X GET "https://$LOCATION-aiplatform.googleapis.com/v1/projects/$PROJECT_ID/locations/$LOCATION/reasoningEngines" \
  --header "Authorization: Bearer $TOKEN"

Response (truncated):

{"reasoningEngines": [{"name": "projects/123456789/locations/us-central1/reasoningEngines/123456789", "displayName": "ADK Short Bot", "spec": {...}

Note: when obtaining the ID via the Google Cloud Console, the resource name may contain the Project ID. The AgentSpace API appears to require the Project Number instead.
Step 2: Obtain the ID of your AgentSpace application

Option 1: Use the AgentSpace menu in the Google Cloud Console to obtain the ID of your AgentSpace application.

Option 2: Use the AgentSpace List Engines REST API to list the current AgentSpace applications in your project:

# Note: AgentSpace currently only supports the global, us and eu multi-regions
LOCATION=global
PROJECT_ID=aviato-project-id
TOKEN=$(gcloud auth print-access-token)

curl -X GET "https://discoveryengine.googleapis.com/v1alpha/projects/$PROJECT_ID/locations/$LOCATION/collections/default_collection/engines" \
  --header "Authorization: Bearer $TOKEN" \
  --header "x-goog-user-project: $PROJECT_ID"

Response (truncated):

{"engines": [{"name": "projects/123456789/locations/global/collections/default_collection/engines/agentspace-andy_123456789", "displayName": "Agentspace - Andy", "createTime": "2025-06-05T22:55:44.459263Z", ...}

Important Notes

Step 3: Publish the Agent Engine ADK Agent to the AgentSpace application

The below requires the Project Number, e.g. 123456789, instead of the Project ID, e.g. aviato-project. To obtain the project number use:

gcloud projects describe PROJECT_ID

Use the AgentSpace Create Agent REST API to publish your Agent Engine ADK Agent to your AgentSpace application. In the body of the POST request, ensure that you replace the reasoningEngine name with the ID returned in Step 1.
# Note: AgentSpace currently only supports the global, us and eu multi-regions
LOCATION=global
PROJECT_ID=aviato-project-id
TOKEN=$(gcloud auth print-access-token)

# Use the AgentSpace Application ID returned in the previous step
AGENTSPACE_ID=agentspace-andy_1749164028618

curl -X POST "https://discoveryengine.googleapis.com/v1alpha/projects/$PROJECT_ID/locations/$LOCATION/collections/default_collection/engines/$AGENTSPACE_ID/assistants/default_assistant/agents" \
  --header "Authorization: Bearer $TOKEN" \
  --header "x-goog-user-project: $PROJECT_ID" \
  --data '{
    "displayName": "My ADK Agent",
    "description": "Description of the ADK Agent",
    "adkAgentDefinition": {
      "tool_settings": {"tool_description": "Tool Description"},
      "provisionedReasoningEngine": {
        "reasoningEngine": "projects/123456789/locations/us-central1/reasoningEngines/123456789"
      }
    }
  }'

Response:

{"name": "projects/123456789/locations/global/collections/default_collection/engines/agentspace-andy_1749164028618/assistants/default_assistant/agents/123456789", "displayName": "My ADK Agent", "description": "Description of the ADK Agent", "adkAgentDefinition": {"toolSettings": {"toolDescription": "Tool description"}, "provisionedReasoningEngine": {"reasoningEngine": "projects/123456789/locations/us-central1/reasoningEngines/123456789"}}, "state": "CONFIGURED"}

Important Notes

Step 4: Grant Required Permissions

In your project, AgentSpace runs under the Google-provided Discovery Engine service account, with a name such as:

service-$PROJECT_NUMBER@gcp-sa-discoveryengine.iam.gserviceaccount.com

By default, this service account only has the Discovery Engine Service Agent role. This is insufficient to invoke the Agent Engine ADK Agent, and you may get the error "I'm sorry, it seems you are not allowed to perform this operation" when invoking your ADK Agent in AgentSpace.
If you receive this error, grant the Vertex AI User role to the service account, e.g.:

PROJECT_ID=aviato-project-id
DISCOVERY_ENGINE_SA=service-123456789@gcp-sa-discoveryengine.iam.gserviceaccount.com

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:$DISCOVERY_ENGINE_SA" \
  --role="roles/aiplatform.user"

Troubleshooting

If you encounter any issues during deployment: By following these instructions, you should be able to successfully deploy your Agent Engine agent to Google Agentspace.
MCP and Agentic AI on Google Cloud Run
We’re moving from AI that primarily responds via text to AI that manipulates things. These “agentic AI” systems use tools to do that manipulation. The way they interact with tools is via Anthropic’s Model Context Protocol (MCP), an open standard for helping LLMs connect to and use external data sources and tools. How and where you run these tools is the purpose of this article; the TL;DR is: “Google Cloud Run”. But read on if you want the details, and the points to look out for.

Why Cloud Run for Agentic AI?

When it comes to deploying these sophisticated agentic AI systems, Google Cloud Run emerges as a great choice: its serverless nature is highly scalable, cost efficient, and requires no ops team to keep it running. Further, it can easily connect to any databases or LLMs running on Google Vertex AI without leaving your network (VPC). What previously might have required dedicated SRE and DevOps teams can now be tackled by an individual developer, freeing up time to innovate on the actual AI agent.

Architecting Your Agent on Cloud Run

So, what does a typical agentic AI architecture on Cloud Run look like? At its core, you’ll have:

MCP Servers

Cloud Run is a well-known pattern for all of this, but with MCP being so new and the focus of this article, it is worth taking a step back and explaining what Model Context Protocol (MCP) does. MCP directly addresses the inability of an LLM to use tools; it provides a standardized, structured way for systems to expose their capabilities to language models. Here are some examples: Further, the use of MCP lets us change our LLM as new ones are released to improve your agents without rebuilding from scratch; a great benefit of using Vertex AI is that we can do this with a line of code.

Practical MCP Server Deployment on Cloud Run

Here’s a quick overview of how you can get your MCP servers up and running on Cloud Run for non-production uses.
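Before getting into the deployment steps, a small sketch may help make the tool idea concrete. This is plain Python rather than a real MCP SDK; the tool name, schema shape, and backing data are invented for illustration. The pattern, though — a machine-readable tool description plus a dispatcher that executes whatever call the model requests — is the essence of what MCP standardizes.

```python
import json

# An MCP-style tool description: structured metadata the model can read
# to decide when and how to call the tool. (Illustrative shape only.)
TOOLS = {
    "get_invoice_total": {
        "description": "Return the total for an invoice by ID.",
        "input_schema": {"type": "object",
                         "properties": {"invoice_id": {"type": "string"}},
                         "required": ["invoice_id"]},
    }
}

# Fake backing data standing in for a real business system.
_INVOICES = {"INV-42": 1999.00}

def call_tool(name: str, arguments: dict):
    """Dispatch a tool call requested by the model to real code."""
    if name == "get_invoice_total":
        return {"total": _INVOICES[arguments["invoice_id"]]}
    raise ValueError(f"unknown tool: {name}")

# A model that supports tool use would emit a request like this:
request = json.loads('{"name": "get_invoice_total", "arguments": {"invoice_id": "INV-42"}}')
print(call_tool(request["name"], request["arguments"]))  # {'total': 1999.0}
```

An MCP server wraps exactly this kind of description-plus-dispatch behind a network transport, which is what Cloud Run then hosts.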
Deployment from Container Images

If your MCP server is already packaged as a container image (perhaps from Docker Hub), deploying it is straightforward. You’ll use the command:

gcloud run deploy SERVICE_NAME --image IMAGE_URL --port PORT

For instance, deploying a generic MCP container might look like:

gcloud run deploy my-mcp-server --image us-docker.pkg.dev/cloudrun/container/mcp --port 3000

Deployment from Source

If you have the source code for an MCP server (perhaps from GitHub) you can deploy it directly. Simply clone the repository, navigate into its root directory, and use:

gcloud run deploy SERVICE_NAME --source .

Cloud Run will handle the building and deployment, or you can work this into a CI/CD pipeline, which is the recommended approach for production use cases. Note that Cloud Run does not support MCP servers that rely on Standard Input/Output (stdio) transport. This constraint implicitly pushes MCP server development towards web-centric, network-addressable services, which aligns better with cloud-native architectures and scalability.

State Management Strategies for Agentic AI on Cloud Run

Fortunately, Google Cloud provides robust solutions for managing the various types of state your agentic AI systems will require:

Short-term Memory / Caching

For data that needs fast access, like session information or frequently accessed data for an agent, connecting your Cloud Run service to Memorystore for Redis is an excellent option.

Long-term Memory / Persistent Knowledge

For storing conversational history, user profiles, or other forms of persistent agent knowledge, Firestore offers a scalable, serverless NoSQL database solution. If your agent deals with structured data or requires the powerful RAG capabilities discussed earlier, Cloud SQL for PostgreSQL or AlloyDB for PostgreSQL are ideal choices, or one of the many databases that work with Google’s Vertex AI RAG Engine.
Orchestration Framework Memory

Many AI orchestration frameworks, such as LangChain, come with built-in memory modules. For example, LangChain’s ConversationBufferMemory can store conversation history to provide context across multiple turns. These often integrate with external stores for persistence.

Table 1: State Management Options for Agentic Systems on Cloud Run

Choosing the right state management approach depends heavily on the specific requirement:

The Challenge of Stateful MCP Servers

As highlighted, MCP servers using Streamable HTTP transport might need to maintain a persistent session context, especially to allow clients to resume interrupted connections. The core challenge here (as of June 2025) is that many official MCP SDKs lack support for external session persistence, i.e. storing session state in a dedicated service like Redis. Instead, they often keep the session state in the memory of the server instance. This makes horizontal scaling problematic: if a client’s subsequent request is routed by a load balancer to a different instance from the one that initiated the session, the session context is lost and the connection will likely fail. This limitation in current MCP SDKs points to a maturity gap in the ecosystem, and until SDKs evolve to better support externalized state, designing MCP servers to be stateless is the more resilient cloud-native pattern where feasible.

Cloud Run Session Affinity to the Rescue?

Cloud Run offers a feature called session affinity that can help mitigate this issue. When enabled, Cloud Run uses a session affinity cookie to attempt to route sequential requests from a particular client to the same revision instance. You can enable this with a gcloud command:

gcloud run services update SERVICE --session-affinity

Or via the Google Cloud Console or YAML config. However, it’s crucial to understand that session affinity on Cloud Run is “best effort”.
If the targeted instance is terminated (due to scaling down, for example) or becomes overwhelmed (reaching maximum request concurrency), session affinity will be broken and subsequent requests will be routed to a different instance. So if the in-memory state is absolutely critical and irreplaceable, session affinity alone is not a foolproof guarantee of state preservation.

Addressing SDK and Session Affinity Limitations

Given these constraints, there are two practical approaches:

Operationalizing AI

Logging: Cloud Run integrates with Cloud Logging out of the box.
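One resilient pattern, given the SDK and affinity limitations above, is to externalize session state yourself so that any Cloud Run instance can resume any session. In the sketch below a plain in-process dict stands in for a shared store such as Memorystore for Redis; the class and method names are illustrative, not from any MCP SDK.

```python
import json

class SessionStore:
    """Externalized session state, keyed by session ID.

    A plain dict stands in here for a shared service such as
    Memorystore for Redis; with a real shared store, any Cloud Run
    instance can resume any session, so session affinity stops
    being a correctness requirement.
    """
    def __init__(self):
        self._backend = {}  # replace with a Redis client in production

    def save(self, session_id: str, state: dict) -> None:
        # Serialize, matching what a network-attached store requires.
        self._backend[session_id] = json.dumps(state)

    def load(self, session_id: str) -> dict:
        raw = self._backend.get(session_id)
        return json.loads(raw) if raw is not None else {}

# Instance A handles the first request and persists the session...
store = SessionStore()
store.save("sess-123", {"history": ["list invoices"], "cursor": 1})

# ...and instance B (any other replica) can pick it up mid-conversation.
resumed = store.load("sess-123")
print(resumed)  # {'history': ['list invoices'], 'cursor': 1}
```

With state externalized like this, session affinity becomes a latency optimisation rather than something your correctness depends on.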
Service Extensions for Google Cloud Application Load Balancers
If you run a website or app on Google Cloud and you’re using their Application Load Balancer to distribute traffic, but wish you had a way to:

Add or modify HTTP headers: Insert new headers for specific customers, or rewrite client headers on the way to the backend.
Implement custom security rules: Add your own logic to block malicious requests or filter sensitive data.
Perform custom logging: Log user-defined headers or other custom data into Cloud Logging or other tools.
Rewrite HTML on the fly: Dynamically adjust your website content based on user location, device, etc.
Script injection: Rewrite HTML for integration with analytics or reCAPTCHA.

Traditionally, you’d have to achieve this by setting up separate proxy servers or modifying your app. With Service Extensions, you can do all this directly within your Application Load Balancer.

How Service Extensions work:

They are mini apps: Service Extensions are written in WebAssembly (Wasm), which is fast and secure.
They run at the edge: They run on the load balancer itself, reducing any potential impact on latency.
They’re fully managed: Google Cloud takes care of all the hard parts.

Why would anyone use Service Extensions?

Flexibility: Tailor your load balancer to your specific needs without complex workarounds.
Performance: Improve response times by processing traffic at the edge.
Security: Enhance your security posture with custom rules and logic.
Efficiency: Reduce operational overhead by offloading tasks to the load balancer.

How to get started:

Check the docs from Google: start with the Service Extensions overview, then the plugins overview and how to create a plugin, and finally the code samples. Also definitely worth checking out Wasm if you have not already at https://webassembly.org/ Service Extensions sit in the Cloud Load Balancing processing path.
Vendor Lock-in: We think it’s a myth.
The Myth of Vendor Lock-in

The cloud has revolutionized how businesses operate, but we often see weeks-long project delays caused by trying to avoid vendor lock-in. This article examines whether this is something you should be concerned about, or whether your efforts are better focused elsewhere. It is best to start with what vendor lock-in actually is.

Understanding Vendor Lock-in

Vendor lock-in occurs when a customer becomes reliant on a specific vendor’s products or services, making it difficult or expensive to switch vendors. The business risk here is usually one of:

That one vendor could raise prices, and you would be stuck paying the higher price (VMware/Broadcom comes to mind)
The vendor has multiple outages, or poor support (VMware/Broadcom comes to mind)
The vendor goes bankrupt, or is acquired by a competitor, and your business along with it

The Cloud Hyperscaler Landscape

Cloud hyperscalers like AWS, Azure, and Google Cloud have significantly mitigated the risks of vendor lock-in. Here’s why:

Open Standards, Open Source, and Interoperability: Hyperscalers increasingly embrace open standards and APIs. Containers and Kubernetes are one example, with every cloud offering multiple ways to run standard Docker containers that can be moved between clouds with no changes. Each cloud does have proprietary services, especially when we look at databases, but the effort to migrate and modify these is typically far lower than it has been in the past. Using a third-party database to avoid lock-in with AWS/GCP/Azure can also just mean you are locked into MongoDB, or an open source DB that is hard to move from.

Bankruptcy: If any of these vendors did go bankrupt it would be a slow process; Google, Microsoft and Amazon are some of the wealthiest companies in the world, so we can discount this risk.

Data Portability: Hyperscalers offer tools and services to simplify data migration and portability.
While moving large datasets can still be complex, the process is becoming more manageable; hyperscalers will often fully or partially fund a migration from a competitor. In addition, highly performant network connections between clouds are available, and even physical devices to move the largest of datasets quickly.

Market Competition: The intense competition among cloud hyperscalers drives down prices; there have only been a few instances where services increased in cost. This competition is not likely to reduce in the near term.

Mitigating Vendor Lock-in Concerns

While the risks of vendor lock-in are lower with cloud hyperscalers, if this is a concern there are a few steps that mitigate the effort if you ever do need to migrate:

Design for Portability: Architect applications and data structures with portability in mind from the outset
Avoid Proprietary Services: Minimize reliance on vendor-specific databases that lack equivalents on other platforms

Conclusion

The cloud hyperscaler era has produced strong competition, which has significantly diminished the concerns around vendor lock-in. Open standards, data portability, and market competition have allowed businesses to focus less on lock-in and more on transforming their business. Some level of lock-in will always exist; it is about choosing where you are locked in. If you go all open source and build your own servers, you are locked into that stack. We believe the focus should shift from fearing vendor lock-in to strategically leveraging the cloud’s capabilities to drive innovation and business growth.
Getting Started with GCP is easy… but not so fast.
Getting Started with GCP is easy… but not so fast. Transcript Google makes it easy to get started with Google Cloud but at the expense of some of the controls that large enterprises need to have when they’re running workloads on any public cloud now Google do this so that developers can very easily get started if they made it really hard to start using Google Cloud people would use one of the other clouds that was a little bit easier to use however when you start putting production workloads on there that might have customers’ information in them you need to revisit that security and put some controls around it setting this up the right way is not hard Google even released the code to build all the infrastructure and put it on GitHub you can easily find it if you Google Fabric FAST the first result will be the GitHub repo from Google Cloud’s Professional Services team where they’ve put the code that you can run to enforce all of their best practices for you now if you need help running this and it can be a little bit complex or if you want any advice on how to get started with it hit me up I’m always happy to talk about this kind of stuff thanks
AI Just Got a HUGE Upgrade (And You Need to Know Why)
AI Just Got a HUGE Upgrade (And You Need to Know Why) Transcript for all those AI nerds there’s been some pretty interesting announcements from Google number one Anthropic’s Claude 3 is now generally available on Vertex AI Gemini Pro 1.5 and Gemini 1.5 Flash are also generally available there are over 700,000 models on Hugging Face so you can use any of the models on Hugging Face with Vertex AI for those not familiar Hugging Face is kind of like a repository like GitLab but for AI models so people taking off-the-shelf models or creating their own um modifying them and then uploading them to Hugging Face the next thing that’s super interesting is context caching so you can use context caching with Gemini Pro 1.5 and Gemini 1.5 Flash models and this lets you cache some of the tokens that you have uploaded so if you have uploaded um a video and you want to ask multiple questions about it you don’t need to upload that video each time which is obviously going to be charged you can upload it once and ask multiple questions same thing if you have chatbots with very long instructions um or you’ve got a large amount of documents and you’re asking different queries around a document um the final use case I think is interesting is if you have a code repository and you’re looking to fix a lot of bugs upload it once cache that context and then you can do a lot of queries against it reducing both the cost and the latency to get those insights um if you need help with any of this feel free to reach out always happy to have a chat thank you
Is Your Google Cloud Bill Out of Control?
Is Your Google Cloud Bill Out of Control? Transcript so you started using cloud and your costs keep growing and growing every month it seems to be more and more money that you’re spending on cloud and you’ve realised it’s time to take a look and cut those costs down to something that’s more sensible if you’re using Google Cloud they’ve got the FinOps Hub and the billing manager where you can go and see where these costs are broken down they’re often broken down by project so you can kind of see where some of the hot spots are and to reduce that the next thing you should start looking at is dev and test workloads and people pay me to come in and consult and say hey do you really need your development workloads running 24/7 when your developers are only working 9 to 5 it’s pretty logical get that turned off when it’s not in use even better get those running on spot instances these are substantially cheaper but when Google have low capacity they will take them away from you that will kill the developer workflow but it will ensure that your developers are writing code that can tolerate failures which is key to running anything on cloud the next thing you want to do is you want to enhance the visibility you’re getting into where you’re spending money now this is done with labels so every project or every resource that you have running should have a label on it with an owner and that owner should get an invoice not an invoice but a report at the end of each month showing how much money they’ve spent that will empower your team to understand that they might be spending money that they don’t know about and have a look and see if they can reduce that by themselves this is really simple with Google creating labels putting them on everything and then exporting all the billing data into BigQuery so you can slice it dice it run reports and figure out where you need to focus your cost saving another few things that often get missed is right-sizing Compute Engine machines so with a Compute Engine VM you can individually change your memory and CPU to right-size it to your workload now a lot of people do this as a one-time exercise and they kind of guess it they never come back and revisit it there’s tons of reports in Google where you can go through have a look at these things and then save yourself considerable money just by getting rid of unnecessary resources that your machines aren’t using if you do need anyone to have a look at this feel free to reach out thank you
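The label-and-report workflow described in the transcript above can be sketched in a few lines. The rows below only loosely resemble the Cloud Billing BigQuery export (the field names are simplified assumptions, not the exact schema); the point is the aggregation: group spend by an `owner` label and surface anything unlabelled so it can be chased and tagged.

```python
from collections import defaultdict

# Rows shaped loosely like the Cloud Billing BigQuery export:
# each cost line item carries the labels attached to the resource.
billing_rows = [
    {"service": "Compute Engine", "cost": 120.50, "labels": {"owner": "team-data"}},
    {"service": "Cloud Run",      "cost": 15.20,  "labels": {"owner": "team-web"}},
    {"service": "BigQuery",       "cost": 42.00,  "labels": {"owner": "team-data"}},
    {"service": "Compute Engine", "cost": 7.30,   "labels": {}},  # unlabelled!
]

def cost_by_owner(rows):
    """Aggregate spend per 'owner' label for the monthly report;
    unlabelled spend is surfaced under its own bucket."""
    totals = defaultdict(float)
    for row in rows:
        owner = row["labels"].get("owner", "UNLABELLED")
        totals[owner] += row["cost"]
    return dict(totals)

print(cost_by_owner(billing_rows))
# e.g. {'team-data': 162.5, 'team-web': 15.2, 'UNLABELLED': 7.3}
```

In practice you would run this as a SQL GROUP BY over the billing export table, but the shape of the report each owner receives is the same.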
How do you know if AI is actually answering your question, and not just spitting out nonsense?
How do you know if AI is actually answering your question, and not just spitting out nonsense? Transcript so I’m going to break down a few concepts you might have been hearing when people are talking about AI the first one is retrieval augmented generation or RAG it’s a bit of a mouthful but it’s really simple if you ask an LLM a question so ChatGPT or Google’s Gemini it’s going to respond based on what it’s been trained on which is the context of the entire internet but nothing specific to your business RAG solves this problem by taking your business data and uploading it into a database so that when you ask a question the question can retrieve data from the database based on your business and then formulate a response that’s grounded in that this reduces hallucinations or LLMs making up nonsense and makes sure that it’s using data that is real from your business now the other concept we have is chunking if we’re taking documents and uploading them into the database we don’t want to upload entire documents cuz we’re not going to send entire documents to the LLM very expensive so we chunk this you could chunk via paragraph but sometimes you need the paragraphs surrounding that paragraph to get the full context or you can chunk via headings now different things are going to work for different businesses depending on how your data is structured by refining the way we chunk and store that in a database so that the LLM can retrieve it and swapping out the LLM model we can optimize for your business making sure that you get the best results possible for the best price possible whenever a new LLM is released we can also test that very rapidly and see if that’s going to give you better results or a better price if you’re interested in learning more about this feel free to reach out happy to have a chat with anyone on these subjects thanks
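The paragraph-plus-surrounding-context chunking strategy mentioned in the transcript above can be sketched as follows; the overlap size is an illustrative parameter you would tune per corpus, and in a real pipeline each chunk would then be embedded and stored in a vector database for retrieval.

```python
def chunk_paragraphs(text: str, overlap: int = 1):
    """Split a document into paragraph chunks, attaching `overlap`
    neighbouring paragraphs on each side so every chunk keeps the
    surrounding context the transcript mentions."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks = []
    for i in range(len(paragraphs)):
        start = max(0, i - overlap)
        end = min(len(paragraphs), i + overlap + 1)
        chunks.append("\n\n".join(paragraphs[start:end]))
    return chunks

doc = "Refund policy.\n\nRefunds within 30 days.\n\nContact support to start."
for chunk in chunk_paragraphs(doc):
    print("---\n" + chunk)
```

Chunking by heading instead is the same loop with a different split rule, which is why it is cheap to experiment with both and measure which retrieves better for your data.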
Video Post: Stop Wasting Money! Easy Cost Cutting for Your Business!
Getting Started With Google Cloud Transcript getting started with Google Cloud can seem overwhelming at first as with any cloud there are a lot of services that you can use and each has configuration options that can get you into trouble when I worked at Google Cloud I helped some of the biggest brands in Australia set up their cloud environments and I’ll give you a few tips that I learnt from doing that the first thing you wanna do is create some structure creating an organisation and then creating folders in the organisation to keep projects organised and allow you to give groups of users permissions to do things to those projects for example putting all the development projects in a folder called development and giving developers access to those and then having all of the production projects in a production folder maybe without access for developers or for other groups of people once you have the folders set up you need to set up identity and access management so as I kind of touched on that’s creating a group putting developers in the group and then giving that group access to the folder that contains the projects that developers need to work on to do their jobs we may not wanna give them access to the production folder at all or maybe we only give them read-only access this is a super simple example and we can nest folders and get much more complex with it and any environment that we’re talking about is gonna have more complexity than that this is a simple explanation now we wanna start talking about organisational policies we’ve got a group of developers that have access to their projects in the development folder but we still don’t wanna do anything silly like putting a cloud storage bucket on the internet so that anyone can see what our files are even if those are development mock data having a data breach is not gonna be good in the headlines there’s a ton of org policies and each one of them needs to be configured and in this one example here for the cloud storage bucket we may need an exception for the public-facing content to be on the internet once we have all this set up we kinda wanna make sure that we’re managing with code if a developer does request that a cloud storage bucket be put on the internet we wanna see who requested that and why and track those changes the logical step here is using infrastructure as code we use Terraform the same best practice that Google Cloud’s Professional Services used when I was there and we can do the same for your business in 5 days excluding any complex networking some people are spending much longer on this and it’s really not that complex if this sounds too complex do reach out we’ve done this when working at Google so we know the best practices and we know how to set you up securely so that your business can scale on Google Cloud thank you