
Story Time
The integration of Large Language Models (LLMs) into software development workflows has created unprecedented opportunities but also introduced significant technical challenges. Among these, efficiently sharing resources between developers and LLM agents has emerged as a critical bottleneck. Developers increasingly need to move beyond local development environments to containerized solutions that offer better resource allocation, security, and scalability. While current approaches built on containerization and security classifiers provide workable solutions, the field remains ripe for innovation as we balance performance requirements with robust security protections.
The table below summarizes common strategies for protecting data and models when LLM workloads are shared across infrastructure:

| Strategy | Description |
|---|---|
| Differential Privacy | Incorporate noise into training data to protect individual data points, ensuring privacy without compromising model performance. |
| Federated Learning | Train models across decentralized devices holding local data, reducing the need for centralized data collection and enhancing privacy. |
| Trusted Execution Environments (TEE) | Utilize secure enclaves during training and inference to protect data and model integrity from unauthorized access. |
| Model Slicing | Partition models into segments to enable secure distributed training, allowing different segments to be trained on separate devices or environments. |
| Fine-Tuning Composition | Apply separate fine-tunings for different data silos, ensuring users access only authorized data segments, enhancing security in multi-tenant environments. |
| Collaborative Edge Computing | Deploy models across edge devices to perform inference collaboratively, reducing latency and preserving data privacy by keeping data on local devices. |
| Modular Policy Design | Develop simple, clear, and adjustable policies that separate enterprise-wide guidelines from department-specific ones, allowing flexibility and precision. |
| Cost Management | Be mindful of the costs associated with advanced checks and optimizations, especially for large enterprises, to ensure sustainable operations. |
The Resource Dilemma in LLM-Based Development
Modern LLM applications demand substantial computational resources that often exceed what’s available on individual development machines. This disparity creates a fundamental tension: developers need responsive interactions with increasingly powerful models, but may lack the local hardware to support them. The challenge extends beyond mere processing power to include memory requirements, with many state-of-the-art models requiring several gigabytes of RAM for efficient operation.
When developers run LLM agents locally, they frequently encounter resource constraints that hinder productivity. Local GPUs may be insufficient or unavailable, forcing CPU-based execution that dramatically reduces response times. Memory limitations can prevent loading optimal models, requiring compromises in model size and capability. Additionally, running resource-intensive LLMs locally can make development machines unresponsive for other tasks, creating a frustrating experience.
This resource dilemma has accelerated the movement toward containerized solutions that enable developers to offload resource-intensive LLM operations to dedicated infrastructure while maintaining seamless interactions. However, this shift introduces new challenges in security, file sharing, and collaborative workflows that must be addressed for effective implementation.
Containerization: Transforming LLM Resource Sharing
Containerization has emerged as a powerful solution for addressing the resource challenges of LLM-based development. By packaging LLM applications with all their dependencies into lightweight, portable containers, developers can deploy these models across various environments—from cloud providers to edge devices—with consistent performance characteristics.
Docker and Kubernetes have become standard tools in this containerization ecosystem, offering powerful capabilities for managing containerized LLM applications. Docker provides the foundation for creating standardized container images, while Kubernetes enables orchestration at scale. Using these tools, developers can create portable LLM applications that run consistently across different computing environments [1].
The containerization process begins with preparing the LLM application, typically as a REST API for inference, followed by creating a Dockerfile that defines the container image. This approach ensures that all dependencies, including specific Python versions and machine learning libraries, are bundled with the application. For example, a basic Dockerfile for an LLM inference service might include Python, FastAPI, and the necessary machine learning frameworks like transformers and PyTorch [1].
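A minimal sketch of such a Dockerfile, assuming the inference service lives in an `app.py` exposing a FastAPI `app` object (the file name, port, and unpinned packages are illustrative; a production image would pin versions):

```dockerfile
# Base image with Python; a slim variant keeps the image small
FROM python:3.11-slim

WORKDIR /app

# Install the inference stack: API framework, server, and ML frameworks
RUN pip install --no-cache-dir fastapi uvicorn transformers torch

# Copy the inference service code into the image
COPY app.py .

# Expose the API port and start the server
EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

Building with `docker build -t llm-inference .` and running with `docker run -p 8000:8000 llm-inference` then yields the same service on any Docker-capable host.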
Containerization delivers several key benefits for LLM applications. It provides portability, ensuring consistent execution across different computing environments, from development laptops to production servers. It enables seamless scalability through orchestration tools like Kubernetes, allowing applications to respond to varying load demands. Containers also offer isolation, preventing conflicts between dependencies that might otherwise occur in shared environments. Finally, containers provide efficiency through faster deployments and more lightweight resource usage compared to traditional virtual machines [1].
Browser-Based Approaches to LLM Sharing
An emerging approach to solving the resource sharing challenge involves deploying LLMs directly in web browsers, creating a new paradigm for developer-agent interactions. Recent advances have made it possible to run increasingly powerful models directly in the browser, reducing the need for dedicated server infrastructure.
The Wllama project represents a significant breakthrough in this space by making Llama.cpp available in web browsers. This innovation has overcome a critical limitation that previously restricted browser-based models to under 2GB in size. Now, larger models can run directly in web browsers without requiring WebGPU support, making this approach compatible with Safari and Firefox in addition to Chrome-based browsers [4].
This browser-based approach creates interesting possibilities for developer-agent interactions. Since these solutions run on the client’s CPU, they don’t require developers to maintain separate container infrastructure. However, they do face performance limitations compared to GPU-accelerated solutions. As one developer noted regarding Wllama, “Because it runs on the CPU it will be slower. But it will run!” [4].
The browser-based landscape now features multiple complementary approaches. WebLLM offers superior performance through WebGPU acceleration but has limited browser compatibility and supports fewer models. Wllama provides broader compatibility across browsers and supports more model types but runs slower on CPU. Together, these technologies create flexible options for developers seeking to implement LLM-based workflows without dedicated container infrastructure [4].
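As an illustration of the WebGPU-accelerated path, the WebLLM library (see the mlc-ai/web-llm repository) exposes an OpenAI-style chat API directly in the browser. A minimal TypeScript sketch; the model ID is illustrative and the API surface may differ between library versions:

```ts
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Downloads model weights into the browser cache on first use;
// the model ID here is one of WebLLM's prebuilt IDs and is illustrative
const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC", {
  initProgressCallback: (report) => console.log(report.text),
});

// The engine mimics the OpenAI chat completions interface
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Summarize this diff in one line." }],
});
console.log(reply.choices[0].message.content);
```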
Security Challenges in Distributed LLM Development
As development shifts from local environments to distributed containerized settings, security emerges as a paramount concern. LLM applications often process sensitive data, and distributing these workloads across shared infrastructure introduces potential vulnerabilities that must be addressed through robust security measures.
Containerization, while solving resource allocation challenges, creates new attack surfaces that must be protected. Container images can contain vulnerabilities, orchestration systems might have misconfigurations, and the underlying infrastructure could be compromised. These concerns become particularly acute when containers process sensitive or regulated information such as medical records, financial data, or proprietary intellectual property embedded in custom machine learning models [2].
The security challenges extend beyond infrastructure to include the models themselves. LLMs can potentially memorize training data, leading to risks of data leakage. When these models process sensitive information in containerized environments, traditional security boundaries may not provide sufficient protection. This has led to increasing interest in confidential computing approaches that provide stronger isolation guarantees.
Microsoft Defender for Cloud represents one approach to addressing container security concerns. It offers comprehensive protection for containerized assets, including Kubernetes clusters, nodes, workloads, and container images. The solution focuses on security posture management through continuous monitoring of cloud and Kubernetes APIs, vulnerability assessment of Kubernetes nodes and container registries, and providing remediation guidance [5].
Confidential Computing: Enhancing Container Security for LLMs
As LLM development increasingly involves processing sensitive data, traditional security measures often prove insufficient. Confidential computing has emerged as a critical technology for enhancing the security of containerized LLM applications, particularly when deployed in shared infrastructure environments like public clouds.
The CNCF Confidential Containers (CoCo) project offers a promising approach for running confidential cloud-native solutions within familiar Kubernetes environments. CoCo provides a standardized framework for confidential computing at the pod level, simplifying its consumption in Kubernetes orchestration. This enables developers to deploy confidential container workloads using familiar tools and workflows without requiring extensive knowledge of the underlying confidential computing technologies [2].
CoCo implements security through Trusted Execution Environments (TEEs) or “enclaves” that provide hardware-enforced isolation. Inside these protected environments, both the AI models and the data being processed remain encrypted and isolated from unauthorized access—including from cloud provider administrators. This approach is particularly valuable for AI workloads that process sensitive data such as medical, financial, or personal information [2].
The implementation architecture involves several key components working together. When deployed, CoCo launches a confidential VM instead of a traditional VM, with an “enclave software stack” that includes image management, confidential data hub, and attestation agent components. Container images can be signed and/or encrypted, with cryptographic measurements ensuring that only properly attested enclaves can access secrets like encryption keys [2].
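From the developer’s side, targeting an enclave can look like ordinary Kubernetes with a different runtime class. A minimal sketch, assuming the cluster operator has installed a CoCo runtime class; the class name (here `kata-qemu-tdx`, the style used for Intel TDX) and image reference are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: confidential-llm-inference
spec:
  # Selects the confidential VM runtime; the exact class name depends
  # on the cluster's CoCo installation and TEE hardware
  runtimeClassName: kata-qemu-tdx
  containers:
    - name: inference
      # CoCo supports signed and/or encrypted images; this reference
      # is a placeholder
      image: registry.example.com/llm-inference:latest
      ports:
        - containerPort: 8000
```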
A collaboration between Azure, Intel, and Red Hat shows how these technologies can be combined to enhance AI workload security. Using Intel Trust Domain Extensions (Intel TDX), Azure confidential virtual machines, Red Hat OpenShift CoCo, and Red Hat OpenShift AI, the integration creates a secure end-to-end AI solution that protects both models and data [2].
LLM Classifiers: Current Approaches and Limitations
A critical component in secure LLM deployments is the classifier system that mediates interactions between users and models. These classifiers help prevent misuse and protect sensitive information, but current implementations reveal both promising capabilities and areas for improvement.
The Sombra LLM Classifier represents one approach to this challenge. Designed to improve data classification using advanced natural language processing, this service operates alongside the main application to analyze and classify data. Architecturally, it runs as a separate service attached to the primary application, allowing customers to deploy it within the same private network as their main service [3].
This implementation highlights key considerations for LLM classifiers. The classifier runs a gunicorn server to process requests, performing computationally expensive operations that require substantial resources—typically an NVIDIA Ampere GPU such as the A10. The service communicates via a defined port (6081 by default), with options for HTTPS connections through mounted SSL certificates [3].
Deployment follows typical containerized patterns, with options for pulling from private Docker registries and deploying within Kubernetes clusters. A sample Kubernetes configuration includes service definitions, deployment specifications with resource limits (including GPU allocation), and environment variable configuration [3].
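A trimmed-down sketch of such a manifest, following the pattern described above rather than any vendor’s actual configuration (the image name and resource values are illustrative; the port matches the 6081 default noted earlier):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-classifier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm-classifier
  template:
    metadata:
      labels:
        app: llm-classifier
    spec:
      containers:
        - name: classifier
          # Pulled from a private registry in practice
          image: private.registry.example.com/llm-classifier:latest
          ports:
            - containerPort: 6081  # default classifier service port
          resources:
            limits:
              nvidia.com/gpu: 1  # schedules the pod onto a GPU node (e.g. A10)
```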
While current classifier approaches provide workable solutions, they face several limitations. Most classifiers operate as middleware that intercepts requests, potentially creating processing bottlenecks. Their effectiveness varies based on the underlying models and training data, and they may struggle with detecting subtle policy violations or novel attack vectors. Additionally, the computational requirements can be substantial, necessitating dedicated hardware that increases deployment costs.
Future Directions for Secure and Efficient LLM Resource Sharing
The landscape of LLM-based development continues to evolve rapidly, with several emerging trends pointing toward more secure and efficient resource sharing approaches. As the field matures, we can anticipate significant improvements in how developers interact with LLM agents across distributed environments.
Browser-based LLM execution represents a promising direction, reducing the complexity of deployment while maintaining accessibility. Recent advancements have broken previous size limitations, allowing larger models to run directly in browsers without specialized hardware acceleration [4]. As WebAssembly matures and browser capabilities expand, we may see increasingly sophisticated models running efficiently within browsers, potentially reducing the need for complex container orchestration in some use cases.
Confidential computing technologies will likely become more prevalent in LLM deployments, particularly for applications handling sensitive data. The standardization efforts underway through projects like CNCF Confidential Containers will make these advanced security approaches more accessible to developers without specialized security expertise [2]. As hardware support for trusted execution environments becomes more widespread, we can expect stronger security guarantees with reduced performance penalties.
Hybrid approaches that combine local and remote execution may offer the best balance of security, performance, and usability. Models could dynamically route requests based on sensitivity, computational requirements, and available resources. Less sensitive operations might execute locally or in the browser, while more sensitive or resource-intensive operations could leverage secure containerized environments with confidential computing protections.
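To make the routing idea concrete, here is a small Python design sketch of a sensitivity-aware dispatcher. The names, labels, and threshold are hypothetical, not an existing API:

```python
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3


@dataclass
class Request:
    prompt: str
    sensitivity: Sensitivity
    est_tokens: int  # rough proxy for computational cost


def route(req: Request) -> str:
    """Pick an execution target for a request.

    Confidential data stays in a TEE-backed container; cheap public
    requests can run in the browser; everything else goes to a shared
    GPU container pool.
    """
    if req.sensitivity is Sensitivity.CONFIDENTIAL:
        return "confidential-container"  # e.g. a CoCo pod
    if req.sensitivity is Sensitivity.PUBLIC and req.est_tokens < 512:
        return "browser-local"           # e.g. WebLLM/Wllama on the client
    return "gpu-container-pool"          # standard containerized inference


print(route(Request("explain this function", Sensitivity.PUBLIC, 120)))
# -> browser-local
```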
Automated security scanning and verification of LLM applications will become increasingly important as these systems process more sensitive information. Tools that can verify the security properties of containerized LLM applications, identify potential vulnerabilities in models and infrastructure, and continuously monitor for emerging threats will be essential components of secure development workflows.
Conclusion
The intersection of LLM-based development and resource sharing presents both significant challenges and opportunities. While current containerization approaches offer viable solutions for deploying LLM applications across distributed environments, considerable work remains to optimize the security, efficiency, and usability of these systems.
Containerization technologies like Docker and Kubernetes provide strong foundations for portable, scalable LLM deployments, but require careful attention to security concerns. Confidential computing approaches such as CoCo offer promising enhancements for protecting sensitive data and models, though adoption remains in early stages. Browser-based execution creates interesting alternatives for certain use cases, particularly with recent advancements in handling larger models.
The path forward likely involves a combination of these approaches, with developers selecting the appropriate architecture based on their specific security requirements, performance needs, and infrastructure constraints. As the field matures, we can expect continued innovation in how developers share resources between themselves and LLM agents, ultimately creating more powerful and secure AI-enhanced development workflows.
For organizations implementing LLM-based development today, a thoughtful approach to containerization that incorporates emerging security technologies will provide the best foundation for future growth. By staying attentive to both practical deployment challenges and evolving security concerns, developers can harness the transformative potential of large language models while maintaining robust protection for sensitive data and intellectual property.
Citations:
1. https://dev.to/nareshnishad/day-51-containerization-of-llm-applications-5622
2. https://www.redhat.com/en/blog/enhancing-ai-workload-security-in-the-public-cloud
3. https://docs.transcend.io/docs/articles/security/end-to-end-encryption/llm-classifier-on-k8s
4. https://www.reddit.com/r/LocalLLaMA/comments/1cy6ifz/all_web_browsers_can_now_run_larger_models/
5. https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-introduction
6. https://blog.gitguardian.com/container-security-scanning-vulnerabilities-risks-and-tooling/
7. https://openreview.net/forum?id=2G021ZqUEZ
8. https://webcontainers.io/guides/browser-support
9. https://www.reddit.com/r/LocalLLaMA/comments/1e3hknt/fast_loading_and_initialization_of_llms/
10. https://arxiv.org/html/2502.13681v1
11. https://dev.to/pavanbelagatti/a-step-by-step-guide-to-containerizing-and-deploying-machine-learning-models-with-docker-21al
12. https://news.ycombinator.com/item?id=42381712
13. https://sysdig.com/learn-cloud-native/container-security-best-practices/
14. https://alberthoitingh.com/2020/12/11/using-sensitivity-labels-at-container-level/
15. https://arxiv.org/html/2407.12866v1
16. https://collabnix.com/running-firefox-in-docker-container/
17. https://www.pugetsystems.com/labs/hpc/llm-server-setup-part-2-container-tools/
18. https://developer.nvidia.com/blog/best-practices-for-securing-llm-enabled-applications/
19. https://developer.chrome.com/docs/ai/cache-models
20. https://www.goanywhere.com/products/zero-trust-file-transfer
21. https://ploomber.io/blog/docker-gen/
22. https://www.docker.com/blog/a-quick-guide-to-containerizing-llamafile-with-docker-for-ai-applications/
23. https://www.skyflow.com/post/private-llms-data-protection-potential-and-limitations
24. https://collabnix.com/how-to-containerise-a-large-language-modelllm-app-with-serge-and-docker/
25. https://www.datacamp.com/tutorial/deploy-llm-applications-using-docker
26. https://www.aquasec.com/cloud-native-academy/vulnerability-management/llm-security/
27. https://wandb.ai/gladiator/LLMs-as-classifiers/reports/LLMs-as-machine-learning-classifiers–VmlldzoxMTEwNzUyNA
28. https://github.com/ServiceNow/Fast-LLM/pkgs/container/fast-llm
29. https://developer.nvidia.com/blog/rapidly-triage-container-security-with-the-vulnerability-analysis-nvidia-nim-agent-blueprint/
30. https://cloud.google.com/vertex-ai/generative-ai/docs/open-models/use-hex-llm
31. https://www.docker.com/blog/llm-docker-for-local-and-hugging-face-hosting/
32. https://arxiv.org/html/2407.19354v1
33. https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
34. https://www.modular.com/blog/use-max-with-open-webui-for-rag-and-web-search
35. https://www.qualys.com/apps/container-security/
36. https://www.paloaltonetworks.com/cyberpedia/what-is-container-security
37. https://massive.io/integrations/docker/
38. https://www.gsa.gov/buy-through-us/purchasing-programs/requisition-programs/gsa-global-supply/national-stock-numbers/security-containers/types-of-security-containers
39. https://github.com/mlc-ai/web-llm
40. https://massive.io/newsroom/masv-announces-docker-based-accelerated-file-transfer-solution/
41. https://www.splunk.com/en_us/blog/learn/container-security.html
42. https://github.com/Skyvern-AI/skyvern
43. https://www.practical-devsecops.com/top-container-security-tools/
44. https://juicefs.com/en/blog/solutions/llm-storage-performance-cost-multi-cloud
45. https://www.reddit.com/r/browsers/comments/1dy4fzj/perfect_browser_for_an_agent/
46. https://kasmweb.com
47. https://developer.nvidia.com/blog/mastering-llm-techniques-inference-optimization/
48. https://forum.devolutions.net/topics/39024/containerized-browser-sessions
49. https://aws.amazon.com/blogs/security/implement-effective-data-authorization-mechanisms-to-secure-your-data-used-in-generative-ai-applications-part-2/
50. https://hub.docker.com
51. https://www.lambdatest.com/web-technologies/container-chrome
52. https://unit42.paloaltonetworks.com/making-containers-more-isolated-an-overview-of-sandboxed-container-technologies/
53. https://docs.thousandeyes.com/product-documentation/global-vantage-points/enterprise-agents/installing/docker-agents/installing-enterprise-agents-with-docker
54. https://developer.chrome.com/blog/cq-polyfill
Want more?
Securing File Sharing in AI Development: Addressing Copilot Security Concerns and Anthropic’s Development Container Solutions
The integration of AI assistants into development workflows has created powerful new capabilities, but it has also introduced significant security challenges around file sharing and data access. As these systems gain deeper access to codebases and documentation, organizations must implement robust security measures to protect sensitive information while maintaining productivity. The growing adoption of AI coding assistants like GitHub Copilot and Anthropic’s Claude has made secure file sharing between developers and AI agents an increasingly critical concern.
The Copilot Security Challenge: A “Ticking Time Bomb”
The security implications of AI assistants accessing shared files have become particularly evident with Microsoft’s Copilot integration in Microsoft 365. Hornetsecurity describes this situation as a “ticking time bomb” for security and compliance, highlighting how Copilot inherits the same document access permissions as the user [1]. This creates a concerning scenario where AI assistants can access, process, and potentially expose sensitive information without users fully understanding which documents are being used.
File sharing in business environments often happens “under the radar” with new SharePoint sites or Teams being created for projects, leading to potentially dangerous situations where sensitive documents and data could end up in the wrong hands [1]. The ease of sharing is deliberately designed to be frictionless to support modern collaborative workflows, but this approach conflicts with security best practices and compliance requirements.
The Copilot situation exacerbates these concerns because the AI assistant has access to all documents the user can access. Unlike earlier technologies like Microsoft Delve that simply suggested documents, Copilot actively uses accessible documents to answer prompts or create new content without necessarily revealing which specific documents it referenced [1]. This lack of transparency creates significant challenges for organizations trying to maintain proper data governance and compliance.
Development Containers as a Security Solution
Containerization offers a powerful approach to address these security challenges by creating isolated, controlled environments for AI interactions. Development containers provide consistent, secure environments that can be precisely configured to minimize security risks while maintaining productivity.
Anthropic has recognized these challenges and provides a development container reference implementation for Claude Code that incorporates enhanced security measures. This solution is designed for teams that need consistent, secure environments for AI-assisted development [3]. The container configuration works seamlessly with VS Code’s Remote - Containers extension and similar tools, making it accessible for developers already familiar with these environments.
The development container approach provides substantial security advantages through isolation and controlled access. By running AI interactions within a containerized environment, organizations can better control what data the AI assistant can access and what actions it can perform. This addresses one of the fundamental concerns with systems like Copilot – unintentionally exposing sensitive information through overly broad access permissions.
Anthropic’s Development Container Implementation
Anthropic’s Claude Code development container implementation demonstrates a comprehensive approach to secure AI interactions. The container uses a multi-layered security approach with sophisticated firewall configurations to restrict network access and isolate the development environment [3]. This architecture ensures security while maintaining the flexibility developers need for productive work.
The container implementation includes several key security features, illustrated by the firewall sketch after this list:
- Precise access control that restricts outbound connections to specifically whitelisted domains
- A default-deny policy that blocks all other external network access
- Startup verification to validate firewall rules when the container initializes
- Isolation that creates a secure development environment separated from the main system [3]
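A drastically simplified shell sketch of this pattern follows. It illustrates default-deny egress with a resolved allowlist and a startup check, and is not Anthropic’s actual init script; the domain list and tooling are assumptions:

```bash
#!/bin/bash
# Illustrative default-deny egress policy for a dev container.

# Drop all outbound traffic unless a later rule explicitly allows it
iptables -P OUTPUT DROP

# Permit loopback traffic and DNS lookups (needed to resolve the allowlist)
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT

# Allow HTTPS only to whitelisted domains, resolved at container start
for domain in api.anthropic.com registry.npmjs.org github.com; do
  for ip in $(dig +short "$domain" | grep -E '^[0-9.]+$'); do
    iptables -A OUTPUT -d "$ip" -p tcp --dport 443 -j ACCEPT
  done
done

# Startup verification: reaching a non-allowlisted host should fail
if curl --max-time 5 -s https://example.com > /dev/null; then
  echo "firewall verification failed: egress not blocked" >&2
  exit 1
fi
```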
With these enhanced security measures in place, the container enables developers to run Claude with reduced permission prompts using the `--dangerously-skip-permissions` flag for unattended operation [3]. While Anthropic notes that no system is completely immune to all attacks, this approach provides substantial protections compared to running AI assistants with unrestricted access to file systems and networks.
The development container is built on Node.js 20 with essential development dependencies and includes developer-friendly tools such as git, ZSH with productivity enhancements, and fzf [3]. It’s designed to work seamlessly across macOS, Windows, and Linux development environments, making it accessible for diverse development teams.
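The real configuration lives in the claude-code repository’s .devcontainer directory; a trimmed devcontainer.json in the same spirit might look like the following (field values are illustrative, not the verbatim upstream file):

```jsonc
{
  "name": "Claude Code sandbox",
  // Build the Node.js 20 based image defined alongside this file
  "build": { "dockerfile": "Dockerfile" },
  // NET_ADMIN/NET_RAW let the init script program iptables inside the container
  "runArgs": ["--cap-add=NET_ADMIN", "--cap-add=NET_RAW"],
  // Apply and verify the default-deny firewall on every start
  "postStartCommand": "sudo /usr/local/bin/init-firewall.sh",
  "remoteUser": "node"
}
```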
Computer Use Capabilities and Container Security
Beyond the Claude Code development container, Anthropic also provides a computer use demo that showcases how containerization can secure more advanced AI capabilities. This demo uses Docker to create a containerized environment where Claude can perform computer operations while maintaining security boundaries [2].
The computer use demo includes a detailed explanation of unique risks associated with allowing AI models to interact with computing environments and the internet. Anthropic recommends several precautions:
- Using dedicated virtual machines or containers with minimal privileges
- Avoiding giving the model access to sensitive data
- Limiting internet access to an allowlist of domains
- Requiring human confirmation for consequential decisions [2]
This implementation demonstrates how Docker containers can provide a secure foundation for more advanced AI capabilities. The container includes all necessary dependencies and tools, along with Anthropic’s computer use agent loop that allows Claude to interact with the containerized environment [2]. This approach enables powerful AI capabilities while maintaining important security boundaries.
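The quickstart README documents launching the demo with a single docker run command along these lines; treat the image tag and port mappings as a sketch and check the linked README for current values:

```bash
# Launch the computer-use demo container; the combined demo UI is then
# typically reachable on port 8080 (5900 VNC, 8501 Streamlit, 6080 noVNC)
export ANTHROPIC_API_KEY="your-key-here"
docker run \
  -e ANTHROPIC_API_KEY=$ANTHROPIC_API_KEY \
  -v $HOME/.anthropic:/home/computeruse/.anthropic \
  -p 5900:5900 -p 8501:8501 -p 6080:6080 -p 8080:8080 \
  -it ghcr.io/anthropics/anthropic-quickstarts:computer-use-demo-latest
```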
The Model Context Protocol and Docker Integration
Anthropic has recently taken containerization for AI applications further with the Model Context Protocol (MCP), which provides standardized interfaces for LLM applications to integrate with external data sources and tools [5]. This protocol enables AI applications to retrieve data from external sources, interact with third-party services, and even access local filesystems – all capabilities that require strong security controls.
Docker has emerged as an ideal solution for packaging and distributing MCP servers, which can be challenging to set up consistently across different environments. By containerizing MCP implementations, developers can ensure consistent operation across team members’ machines and deployments [5]. This approach simplifies the complexity of building secure AI applications while maintaining robust security boundaries.
The use of Docker with MCP provides critical capabilities for AI development (a configuration sketch follows the list):
- Tool discovery that helps LLMs identify available tools
- Tool invocation that enables precise execution with appropriate context and arguments
- Consistent environments across different computing platforms
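Concretely, Claude Desktop discovers MCP servers through its claude_desktop_config.json, and pointing an entry at docker run keeps the server entirely inside a container. A sketch assuming a containerized SQLite MCP server image named mcp/sqlite; adjust the image and volume to your setup:

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-v", "mcp-data:/mcp",
        "mcp/sqlite",
        "--db-path", "/mcp/app.db"
      ]
    }
  }
}
```

Because the server runs with only the mounted volume and flags given in `args`, the container boundary doubles as the server’s permission boundary.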
Best Practices for Secure File Sharing with AI Assistants
Drawing from these implementations and security considerations, several best practices emerge for organizations looking to secure file sharing between developers and AI assistants:
Implement Strict Permission Controls
The security challenges with Microsoft Copilot demonstrate the importance of implementing granular permission controls for AI systems. Organizations should carefully review and manage document sharing practices, particularly when those documents might be accessed by AI assistants. Default permissions should follow the principle of least privilege, granting access only to information that is absolutely necessary for the task at hand.
Leverage Containerization for Isolation
Containerization provides a powerful security boundary for AI interactions. By running AI assistants in containers with carefully controlled access to files and networks, organizations can significantly reduce the risk of data leakage or unauthorized access. Anthropic’s development container implementation offers a valuable reference model for this approach, with its multi-layered security design and default-deny network policies.
Consider Alternative Secure File Sharing Solutions
For organizations that need dedicated file sharing capabilities, secure client portals like Copilot (the file sharing service, not Microsoft’s AI) provide encrypted, password-protected file sharing with customizable access controls [4]. These services can complement containerized development environments by providing secure channels for sharing specific files with AI assistants while maintaining control over what information is accessible.
Monitor and Audit AI Interactions
Regardless of the security measures implemented, organizations should maintain comprehensive monitoring and auditing of AI interactions with shared files. This visibility helps identify potential security issues and ensures compliance with organizational policies and regulatory requirements. Regular security reviews should include assessments of how AI assistants are accessing and using shared information.
Conclusion
The security challenges around file sharing with AI assistants like Microsoft Copilot highlight the need for robust security measures in AI-assisted development. Unmanaged file permissions combined with AI systems that can access and process shared documents create significant risks for data leakage and compliance violations.
Containerization offers a powerful solution to these challenges, with Anthropic’s development container implementation demonstrating how isolated environments can enable productive AI interactions while maintaining strong security boundaries. By combining containerization with careful permission management and monitoring, organizations can harness the power of AI assistants while protecting sensitive information.
As AI capabilities continue to evolve, secure file sharing between developers and AI agents will remain a critical consideration. The approaches pioneered by Anthropic, including development containers and the Model Context Protocol, provide valuable reference implementations for organizations seeking to balance innovation with security in their AI-assisted development workflows.
Citations:
1. https://www.hornetsecurity.com/en/blog/copilot-security/
2. https://github.com/anthropics/anthropic-quickstarts/blob/main/computer-use-demo/README.md
3. https://docs.anthropic.com/s/claude-code-security
4. https://www.copilot.com/blog/secure-file-sharing-with-clients
5. https://www.docker.com/blog/the-model-context-protocol-simplifying-building-ai-apps-with-anthropic-claude-desktop-and-docker/
6. https://learn.microsoft.com/en-us/copilot/security/privacy-data-security
7. https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview
8. https://www.prompt.security/blog/claude-computer-use-a-ticking-time-bomb
9. https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-architecture-data-protection-auditing
10. https://www.youtube.com/watch?v=Iabue7wtE4g
11. https://openmined.org/blog/secure-enclaves-for-ai-evaluation/
12. https://docs.anthropic.com/en/docs/build-with-claude/computer-use
13. https://docs.anthropic.com/en/docs/agents-and-tools/computer-use
14. https://aws.amazon.com/blogs/aws/anthropics-claude-3-7-sonnet-the-first-hybrid-reasoning-model-is-now-available-in-amazon-bedrock/
15. https://www.reddit.com/r/ClaudeAI/comments/1ge1eh0/cline_now_uses_anthropics_new_computer_use/
16. https://composio.dev/blog/claude-computer-use/
17. https://learn.microsoft.com/en-us/copilot/security/faq-security-copilot
18. https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-privacy
19. https://github.com/anthropics/claude-code/blob/main/.devcontainer/Dockerfile
20. https://www.copilot.com/apps/directory/files-app
21. https://www.youtube.com/watch?v=5-aDYL1O8L4
22. https://www.coreview.com/blog/m365-copilot-security-risks
23. https://hub.docker.com/layers/composio/anthropic-computer/dev/images/sha256-19ea4be03ee6b4833ae8986775fb821cbd37637e9b4e55e1c374bdf9219b63b5
24. https://www.reddit.com/r/CopilotPro/comments/1ebz5iw/can_anyone_provide_clarity_on_copilot_data_privacy/
25. https://www.linkedin.com/posts/justin-muller_how-to-deploy-a-secure-instance-of-anthropic-activity-7267711360937943041-bUo-