- The Cloud Native Rejekts NA 2024 event is being held in Salt Lake City, and the attendees are thanked for coming, with a special mention of the enthusiasm around last-minute tickets and registrations (00:07:36).
- The volunteers, org team, and venue staff are thanked for their efforts in making the event possible, with a mention of the venue's amenities and the efforts of the org team in accommodating the attendees (00:09:00).
- The program committee is thanked for their critical role in reviewing proposals and creating the schedule, with a mention of the narrow timeline they had to work with (00:11:13).
- The sponsors, including Microsoft, are thanked for their support, with a mention of the importance of funding for community events and the benefits of sponsoring the event (00:12:44).
- The event is revealed to be the 10th edition of Rejekts, and the founders of the event, Chris, Lexi, and Andy, are thanked for their contributions (00:14:48).
- The AV team is thanked for their behind-the-scenes work in ensuring a seamless experience for the attendees and speakers (00:15:38).
- Duffie Cooley, Field CTO at Isovalent (part of Cisco), presents a talk called "Malicious Compliance Automated," which focuses on automating ways to fool or change what vulnerability scanners can see (00:17:11).
- The talk is based on a journey of a developer who has finished an application but has 24 hours to meet the deadline and is faced with thousands of vulnerabilities (00:19:02).
- Duffie Cooley introduces DockerSlim, an open-source project that can shrink container images, reduce the attack surface, and debug minimal container images (00:21:40).
- The project is used to shrink an application container, resulting in a significant reduction in vulnerabilities, from 3,232 to 5, with none being critical (00:24:48).
- The talk also demonstrates asking an assistant about flags and parameters; it can provide a short overview of the options, but is not always accurate (00:23:49).
- Minification removes unnecessary files, leaving OS-focused tools blind to metadata; scanners such as Clair and OSV-Scanner rely on /etc/os-release for distro metadata (00:26:26).
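A minimal sketch of the distro detection these scanners depend on (the parsing logic is illustrative, not Clair's or OSV-Scanner's actual code):

```python
def parse_os_release(text: str) -> dict:
    """Parse the key=value format of /etc/os-release."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info

# A distro-aware scanner keys its advisory lookups on fields like ID/VERSION_ID.
sample = 'NAME="Ubuntu"\nVERSION_ID="22.04"\nID=ubuntu\n'
print(parse_os_release(sample)["ID"])   # ubuntu
# A minified image with the file removed yields nothing to match against.
print(parse_os_release(""))             # {}
```

With no distro identified, distro-package advisories simply never match, which is exactly the blindness the talk exploits.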
- Removing basic OS information and modifying the DockerSlim image produced a significant change, knocking out 400 MB of vulnerable material that vulnerability scanners were matching on (00:30:07).
- The use of an additional flag allows for the hiding of vulnerabilities, making the image appear compliant, but still containing vulnerabilities (00:28:42).
- The obfuscation process changes the binaries, making them different in size, hash, and content, and also modifies the signatures and version signatures to evade vulnerability scanners (00:31:43).
- The modification of package metadata, such as rewriting node package metadata, can also evade vulnerability scanners by changing the version numbers (00:34:47).
- Further mutations can be made, such as changing package names or removing them completely, making it difficult for scanners to detect vulnerabilities (00:36:07).
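A toy illustration (not the talk's actual tooling) of why metadata-keyed scanning is so easy to defeat: the matcher compares declared name and version strings and never inspects the code itself.

```python
import json

# Toy advisory "database" keyed purely on package name + version.
VULNERABLE = {("lodash", "4.17.20"): "CVE-2021-23337"}

def naive_scan(package_json: str):
    """Flag a package if its declared name+version matches an advisory."""
    meta = json.loads(package_json)
    return VULNERABLE.get((meta["name"], meta["version"]))

def rewrite_version(package_json: str, fake_version: str) -> str:
    """Mutate only the declared version; the vulnerable code is untouched."""
    meta = json.loads(package_json)
    meta["version"] = fake_version
    return json.dumps(meta)

pkg = json.dumps({"name": "lodash", "version": "4.17.20"})
print(naive_scan(pkg))                            # CVE-2021-23337
print(naive_scan(rewrite_version(pkg, "9.9.9")))  # None -- now "clean"
```

Renaming or deleting the metadata entirely defeats the lookup the same way, which is the escalation the talk walks through next.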
- The state-of-the-art in vulnerability scanning has changed, with some scanners doing more work than previously, but there is still a need to drive more change in this area to improve the detection of vulnerabilities in images (00:36:33).
- The current status quo in vulnerability scanning is not good enough, and vendors need to be pushed to improve their tools to detect vulnerabilities even when traditional metadata is not present (00:37:21).
- Vendors have been receptive to fixing this problem, but it's been perceived as a compliance issue rather than a security issue, and it's been a challenge to get them to prioritize it (00:39:51).
- The project "mint" is trying to do static and dynamic analysis to determine what packages are being used by an application, but it requires knowledge of the application and customization of flags to get full coverage (00:42:01).
- The speaker, Llan Evanson, discusses his journey with Kubernetes over the past 10 years and how it has changed the way he works, starting with his first experience with Docker (00:44:41).
- The early days of cloud computing involved scripts and playbooks that were hard to debug, leading to a search for better tools and the discovery of Kubernetes at a meetup in San Francisco (00:47:08).
- The creators of Kubernetes made key decisions, such as creating the Cloud Native Computing Foundation and open-sourcing Kubernetes in a bare-bones state, allowing the community to build and contribute to it (00:48:00).
- The community played a crucial role in the success of Kubernetes, with weekly meetings and a focus on collaboration, which helped to build a strong ecosystem around the platform (00:48:24).
- Kubernetes reached production readiness with version 1.0 in July 2015, despite initial limitations, and has since become a widely adopted platform for building and deploying applications (00:49:13).
- The community has continued to evolve and improve Kubernetes, integrating new ideas and features, and has enabled the development of AI workloads and other innovative applications (00:52:30).
- Kubernetes has become a fundamental part of the cloud-native landscape, powering a large percentage of the internet and fueling the next wave of innovation (00:53:21).
- The future of Kubernetes will involve building and solving new challenges, but the community's ability to collaborate and adapt will be key to its continued success (00:53:43).
- The speaker compares the process of deploying a container image to Frodo's journey in The Lord of the Rings, highlighting the challenges and complexities involved (00:55:31).
- The speaker discusses the importance of considering multi-platform builds, reproducibility, and supply chain security when deploying container images (00:58:01).
- The speaker introduces an example application, a simple Go application, to demonstrate the challenges of building container images for multiple architectures (00:58:51).
- The speaker discusses three techniques for building images for multiple architectures: emulation, cross-compilation, and dedicated runners, highlighting the trade-offs of each approach (01:02:26).
- The speaker notes that emulation is the easiest approach but is slow and not suitable for continuous integration (CI) tasks, while cross-compilation is faster but more complex to implement (01:04:00).
- Using Buildx and xx can simplify the process of building Docker images for multiple architectures by handling dependencies for the target architecture (01:06:26).
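The cross-compilation pattern described above looks roughly like this for a Go application (a sketch; image tags and paths are illustrative):

```dockerfile
# The build stage always runs on the builder's own platform; Go cross-compiles.
FROM --platform=$BUILDPLATFORM golang:1.22 AS build
ARG TARGETOS TARGETARCH
WORKDIR /src
COPY . .
# CGO disabled, so no C toolchain for the target architecture is needed.
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /out/app .

FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

Built with e.g. `docker buildx build --platform linux/amd64,linux/arm64 .`; the xx project wraps the same idea for toolchains that are harder to cross-compile than Go.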
- Dedicated runners can be used to run builds on machines with the appropriate architecture, making the build process simpler (01:06:55).
- GitHub Actions can be used to create a multi-platform build by running separate builds and then creating a final stage that combines the results (01:07:20).
- Sigstore and cosign can be used to sign images by creating a temporary certificate associated with an OIDC identity, eliminating the need to create and secure private keys (01:09:47).
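The keyless flow is typically two commands (identity and image names here are placeholders):

```shell
# Sign: cosign obtains a short-lived certificate for your OIDC identity
# from Fulcio and records the signature in the Rekor transparency log.
cosign sign --yes ghcr.io/example/app@sha256:<digest>

# Verify: pin the exact identity and issuer you expect to have signed it.
cosign verify \
  --certificate-identity-regexp 'https://github.com/example/app/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  ghcr.io/example/app@sha256:<digest>
```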
- Cosign can be used to verify signatures and ensure that images have not been tampered with (01:11:56).
- Pinning dependencies to specific versions or digests can help ensure reproducibility and prevent issues caused by changes in dependencies (01:13:30).
- Tools like crane and frizbee can be used to find the digest of an image or action, and companion automation can rewrite image references to use those digests (01:15:35).
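For example (image and workflow paths are illustrative; the frizbee subcommand is per its current CLI):

```shell
# Resolve a mutable tag to its immutable digest
crane digest ghcr.io/example/app:v1.2.3

# Rewrite GitHub Actions references in workflows to pinned commit SHAs
frizbee actions .github/workflows/
```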
- To manage CVEs, use minimal images, reduce dependencies, and keep everything up to date, as outdated dependencies can lead to vulnerabilities (01:16:50).
- Tools like Dependabot, Renovate, and GitHub Actions can help with updating images and dependencies (01:19:32).
- SLSA is a standard that describes multiple levels of assurance for builds and covers things like provenance, signing, and tamper-proofing (01:20:16).
- To keep things reproducible and help with debugging, sign images and pin everything (01:21:40).
- The Kubernetes release process is a collaborative effort that involves a global community delivering new features, bug fixes, and security updates multiple times a year (01:50:40).
- The release cycle goes through different stages like planning, development, alpha, beta, stabilization, and final release (01:50:50).
- Kubernetes release engineering involves planning, development, and testing phases, with the help of tools like Prow, an automated testing system, to ensure high standards are met (01:50:59).
- Challenges in the release process include complexity of the ecosystem, coordination among teams, rapid changes and updates, and security and compliance concerns (01:52:46).
- Efficient solutions for a sustainable release process include optimizing resource allocation, container image optimization, leveraging node efficiencies, policy-driven resource limits, and image promotion strategies (01:55:47).
- Automation tools play a crucial role in resolving challenges and making the release process sustainable, with Kubernetes having one of the most robust and automated processes in the open-source community (02:02:07).
- Tools for efficient release processes include release branch management with krel, continuous integration and testing with Prow, image building and promotion with the Kubernetes container image promoter (kpromo), documentation and release note automation with the release-notes tool, and real-time monitoring and coordination with GitHub boards (02:03:03).
- Multi-stage builds can reduce image sizes by up to 60%, lowering bandwidth and storage consumption, and reducing overall energy use (02:09:07).
- The speaker used Alpine only as a reference in the presentation and did not mean it as an endorsement, given its known networking and DNS issues (02:10:57).
- Josh Lee will be talking about building open source observability stacks from raw telemetry, but will not be covering Perses as mentioned in his abstract (02:19:00).
- The current status quo in observability often involves using Prometheus, but it is not the end goal and has its limitations (02:20:29).
- Observability goals include knowing when something is broken before customers do, diagnosing infrastructure interactions, validating changes in migrations, and analyzing performance issues (02:22:13).
- The platform uses ClickHouse, a SQL-compatible, non-transactional database that is massively scalable and fast at ingestion and querying (02:25:42).
- ClickHouse uses a variation on the B-tree called a MergeTree, which allows for fast ingestion and querying of data (02:27:30).
- ClickHouse is useful for observability due to its fast writes, time-friendly data storage, and cost-effective data cleanup (02:28:33).
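Those properties map directly onto ClickHouse table definitions; a minimal telemetry table might look like this (schema is illustrative):

```sql
-- MergeTree sorts parts by the ORDER BY key, making time-range scans fast,
-- and TTL expires old rows in the background (cheap data cleanup).
CREATE TABLE logs
(
    ts      DateTime,
    service LowCardinality(String),
    body    String
)
ENGINE = MergeTree
ORDER BY (service, ts)
TTL ts + INTERVAL 30 DAY;
```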
- The platform's original plan for observability included using ClickHouse, object storage, and tools like qryn and Grafana (02:24:27).
- A platform supports a ClickHouse backend out of the box, allowing users to query data from various sources and visualize it using familiar APIs, including OpenTelemetry. (02:29:40)
- The platform uses ClickHouse for telemetry storage, which provides cost-effective and disaster-recovery-friendly storage, and is exploring the use of OpenTelemetry exporters to pump data from Apache Kafka and other sources into ClickHouse. (02:29:45)
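A sketch of what such a pipeline could look like in an OpenTelemetry Collector configuration (the contrib distribution ships a ClickHouse exporter; the endpoint and database names are illustrative):

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}
exporters:
  clickhouse:
    endpoint: tcp://clickhouse:9000
    database: otel
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [clickhouse]
```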
- The platform's observability stack includes application SDKs, host/node agents, collection/sampling/processing tools, an analysis UI, and alerting, with the goal of providing a cost-effective and disaster-recovery-friendly solution. (02:31:12)
- The platform's version-two goals include achieving a more lightweight and distributed architecture, with telemetry stored externally, and working through cost control and usability issues. (02:33:22)
- ClickHouse provides built-in metrics, OpenTelemetry-compatible tracing, and logs for queries and background operations, which can be used to understand the system's behavior and provide visibility into the application. (02:36:50)
- The company uses SQL queries to monitor and manage their database, and they have exporters for metrics at the Kubernetes layer, which are used to create dashboards for customers (02:37:55).
- They provide dashboards to their customers to help them optimize their queries, as ClickHouse can be efficient but has a learning curve, and queries can become expensive in terms of compute (02:38:40).
- The company has a list of important alerts, including CrashLoopBackOff, Prometheus down, KubeJobFailed, KubeDaemonSetRolloutStuck, and KubePersistentVolumeUsage, which help them identify and address issues (02:39:50).
- They use eBPF for network monitoring and cost optimization, as it provides visibility into network requests and can map which services are talking to each other (02:41:28).
- The company recommends using eBPF tracing and tools like Coroot and qryn for observability and cost optimization (02:42:47).
- They have community resources available, including a ClickHouse function reference, an open-source exporter client, and a community Slack (02:43:26).
- The company's journey to observability is ongoing, and they recommend starting with what you have, focusing on known issues, and adapting to changing needs (02:44:33).
- The speaker maintains a Grafana plugin and contributes to an open-source project, but not heavily to Loki, and wants to keep "dog fooding" the plugin they maintain (02:46:41).
- Coroot is recommended as a simple and quick way to integrate eBPF; it has an agent and a backend, supports OpenTelemetry, and uses eBPF to build a service graph (02:47:19).
- ClickHouse is preferred over Elasticsearch because it's easier to work with, more efficient, and cost-effective for storing raw data at scale, with features like token filter indexes and relational queries via SQL (02:48:18).
- The discussion is about debugging Kubernetes environments, and the biggest pain points include complex dependency chains, bloated images, tracking updates, and consistent dependencies (04:33:21).
- Reducing noise in software images by removing unnecessary dependencies and packages can make debugging easier and improve security (04:36:12).
- Using distroless images can help minimize the number of vulnerabilities in an image, as they contain fewer packages and dependencies (04:37:17).
- Vulnerability scanners can help identify and stay on top of new vulnerabilities, with roughly 60-100 new vulnerabilities disclosed daily (04:39:42).
- Multi-stage builds can be used to deploy different versions of an image to different environments, such as production and development (04:42:50).
- Distroless images can make it easier to deploy to environments and minimize the footprint in vulnerable spaces (04:42:42).
- Multi-stage builds can be used to build a binary and then use a minimal base image for deployment, reducing the number of vulnerabilities (04:45:27).
- A multi-stage build process can be used to reduce the size of the final image by only including essential binaries and libraries in the second stage, with Wolfi Linux being a suitable option as a secure base image (04:46:06).
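A minimal sketch of that two-stage pattern (base images are illustrative; Chainguard's Wolfi-based `static` image is one minimal option):

```dockerfile
# Stage 1: full toolchain, never shipped
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/server .

# Stage 2: only the binary lands in the final image
FROM cgr.dev/chainguard/static:latest
COPY --from=build /out/server /server
ENTRYPOINT ["/server"]
```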
- Wolfi Linux is a minimal Linux distribution designed for containers, with a small size and few packages, and is used as the base for Chainguard images (04:46:09).
- Chainguard images, such as those based on Wolfi, have fewer packages and dependencies than other common base images like Ubuntu, and can be used as a secure option for production environments (04:47:36).
- Using Wolfi images for debug images and Chainguard images for production environments can provide a secure and minimal setup, with the ability to generate an SBOM and track dependencies (04:49:32).
- Keeping track of dependencies and generating an SBOM can help with software supply chain security, and tools like vulnerability scanners and Wolfi can aid in this process (04:51:39).
- Automating checks, cross-referencing SBOMs with vulnerability databases, and controlling the number of dependencies can help stay ahead of security vulnerabilities (04:53:22).
- Using secure packages with simple supply chains, like those provided by Wolfi, can make debugging easier and reduce the risk of vulnerabilities (04:53:44).
- Vulnerability scanning can help reduce administrative fatigue by weeding out non-essential alerts, and setting a threshold for severity levels can make reports more manageable (04:55:01).
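The thresholding idea is simple to express; this is a toy version of what scanners' severity/fail-on options do internally (not any particular scanner's code):

```python
SEVERITY_RANK = {"negligible": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def filter_findings(findings, threshold="high"):
    """Drop findings below the threshold so reports stay actionable."""
    floor = SEVERITY_RANK[threshold]
    return [f for f in findings if SEVERITY_RANK[f["severity"]] >= floor]

report = [
    {"id": "CVE-2024-0001", "severity": "critical"},
    {"id": "CVE-2024-0002", "severity": "low"},
    {"id": "CVE-2024-0003", "severity": "high"},
]
print([f["id"] for f in filter_findings(report)])  # ['CVE-2024-0001', 'CVE-2024-0003']
```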
- Using a container scanner that is configurable can help identify and prioritize vulnerabilities (04:56:56).
- Debugging tools like kubectl debug can help debug applications in the same context as the main container, and using the correct security context is essential (04:57:22).
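For example, attaching an ephemeral debug container that shares the target container's process namespace (pod and container names are placeholders):

```shell
kubectl debug -it my-pod --image=busybox:1.36 --target=app -- sh
```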
- Reducing noise and minimizing the footprint of images can make for better, faster, and more secure images (04:58:52).
- Tools like mirrord and Telepresence can simulate the production environment locally, allowing developers to debug and test more efficiently (05:01:27).
- Dive is a good inspection tool for images, providing high-level insights into what's in an image and what matters (05:02:28).
- Inspektor Gadget is a set of tools and a framework for data collection and system inspection on Kubernetes clusters and Linux hosts using eBPF, allowing for observability and other use cases (05:14:31).
- eBPF programs run in the context of the Linux kernel, providing high performance and efficiency, and are general-purpose, but can be challenging to create and manage, especially for non-kernel developers (05:11:40).
- Inspektor Gadget aims to lower the barrier to using eBPF by providing a simple way to collect and export data to observability tools, and supports features like automatic enrichment, WebAssembly modules, and multiple modes of operation (05:14:40).
- The tool builds and packages eBPF programs into Open Container Initiative images called "gadgets," distributes and manages them across the cluster, and collects and exports data to observability tools with a simple command (05:15:42).
- Inspektor Gadget can be used for various use cases, including observability tooling, command-line usage, and even embedding via a Go library (05:14:58).
- A Kubernetes deployment is demonstrated using a YAML manifest to deploy an Inspektor Gadget gadget that reports on processes being executed in the cluster (05:18:59).
- The gadget is configured to monitor all namespaces, filter by process name (bash), and expose metrics to Prometheus (05:19:27).
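With recent image-based Inspektor Gadget releases, the equivalent CLI invocation is roughly as follows (gadget names and flags vary by version; check `kubectl gadget run --help`):

```shell
# Trace process executions across all namespaces
kubectl gadget run trace_exec:latest --all-namespaces
```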
- The metrics are then used to configure alerts, such as one that fires when a process is executed (05:22:56).
- The gadget architecture is explained, consisting of eBPF programs, a Wasm module for post-processing, and metadata YAML that defines the gadget's capabilities (05:24:05).
- A second demonstration shows how to collect network latency using the profile TCP RTT (tcprtt) gadget, which is deployed using a script and CLI syntax (05:26:13).
- The gadget outputs a histogram to the terminal and is also configured to export metrics to Prometheus, which is then visualized in a Grafana dashboard (05:26:59).
- A demonstration was given on using gadgets to analyze connection latency, introducing a 10-millisecond latency with the tc network scheduler to show its effect on the heatmap (05:28:29).
- The trace TCP drop gadget was used to determine why TCP packets were being dropped by the kernel, showing that the reason corresponded to the tc qdisc configuration (05:29:33).
- The idea behind creating gadgets is that they can be used in many different ways, adding value to the framework and allowing developers to focus on business logic (05:30:35).
- A gadget called Trace Loop was introduced, which records activity performed by different pods or containers on the system, similar to a flight recorder, to help understand what happened when an application crashes (05:32:43).
- The Trace Loop gadget was used to analyze a crashing application, showing the system calls made by the container, including a failed attempt to open a non-existing file, which likely caused the crash (05:34:50).
- The goal is to implement Trace Loop as a plugin for Headlamp, allowing users to activate it and receive notifications when an application crashes, with low overhead (05:35:48).
- Inspektor Gadget is open to conversations and expanding use cases, with a 1.0 release planned for mid-next year, and feedback is welcome (05:36:25).
- A tool was created to store crash data in a ring buffer, allowing for efficient retrieval of information when a node crashes, with minimal overhead from the BPF program (05:38:03).
- The tool uses a ring buffer to store crash data, which remains on the node until it is retrieved, and can be overwritten if not pulled out in time (05:40:26).
- The tool is designed to be general-purpose and can be built upon, with the goal of making it efficient and useful for users (05:41:09).
- A new project called Perses was created to provide visualization and dashboarding for observability, filling a gap in the Cloud Native Computing Foundation landscape (05:44:22).
- Perses focuses on providing visualization for metrics, integrating with Prometheus, and is fully open-source, allowing users to have choice and flexibility (05:45:07).
- The project emphasizes "dashboard as code," allowing for automation and scalability in dashboard deployment (05:45:39).
- Perses is designed to be Kubernetes-native, with features like operators, container images, and CRDs, and includes a plugin infrastructure for customization (05:46:44).
- A project for a dashboard standard and API was discussed at the Prometheus conference (PromCon) in 2023, and it was suggested that the project should be part of the Cloud Native Computing Foundation (CNCF) sandbox (05:47:50).
- The project was submitted to the CNCF sandbox in August and was approved within two weeks, which is a relatively quick process (05:48:20).
- The project has a command-line tool called percli that allows users to create, delete, and get resources, and it uses standardized JSON (05:49:54).
- The project is still in its early stages, with the latest version being 0.49.0, and it did not initially have a UI (05:51:36).
- The project is integrating the PromLens code base, a query builder that helps users learn the Prometheus Query Language (PromQL) (05:53:41).
- The project has features such as gauge charts, variables, and thresholds, and it is also integrating an Explorer on the front end of the UI (05:53:07).
- The project allows users to create documentation panels with markdown, which can be useful for troubleshooting and on-call purposes (05:55:36).
- Perses supports multiple gauges, tables, and bar charts, allowing for more complex and interesting visualizations (05:56:06).
- Tracing has been integrated into Perses by Red Hat, using a Grafana Tempo TraceQL query, and is now part of the project (05:56:48).
- The project is open-source and allows users to contribute and build their own dashboards, with a comprehensive set of features and a lightweight design (06:01:25).
- A workshop is available that guides users through building Perses from scratch, and it can be done as a container or source install, with options for Docker or Podman (06:01:50).
- The project repository is online, and Perses is now part of the Cloud Native Computing Foundation's sandbox, with a presence on social media and a Prometheus workshop available (06:02:44).
- A dashboard can be customized by resizing and rearranging elements, and also by changing settings such as the color scheme and adding descriptions (06:05:07).
- To import a dashboard as code, a CLI tool is used to apply a file generated from a Grafana dashboard (06:06:39).
- The benefits of using a dashboard-as-code approach, like the one implemented by Perses, include avoiding massive JSON files and making it easier to manage and maintain dashboards (06:07:37).
- The table widget in Perses allows for creating dashboard hierarchies and navigation, and it is possible to add links to other dashboards (06:08:38).
- The Perses project is not meant to be a replacement for Grafana, but rather a base toolkit that can be embedded in other projects, with the goal of standardizing observability dashboards and visualizations (06:10:11).
- The project is driven by community contributions and has a roadmap that is heavily influenced by what people want to work on (06:09:03).
- The CLI tool can be used to build and run a dashboard from source, and it is possible to create a standardized chunk of code that can be pushed to a dashboard (06:13:09).
- The speaker mentions that they have been doing workshops for a while and notes that some attendees might not want to go through all the steps, so they provide completely done files linked inside the workshop versions for those who want to skip certain parts (06:14:52).
- Marcus Noble, a platform engineer at Giant Swarm, introduces himself and talks about his experience with Kubernetes, having worked on release engineering, CD, and core Kubernetes development (06:34:16).
- Marcus discusses dynamic admission control, including validating admission webhooks and mutating admission webhooks, which are used for defaulting logic, policy enforcement, best practices, and problem mitigation (06:35:53).
- He mentions that while webhooks in Kubernetes are powerful, they come with some risks, but there is a safer alternative: validating admission policies, which were introduced in Kubernetes version 1.26 as an alpha feature (06:39:18).
- CEL (Common Expression Language) is an expression language with a syntax similar to C-based languages, designed to run embedded within other tooling, and is good for simple expressions. (06:41:11)
- CEL has a concept of cost per operation, which can be calculated ahead of time, allowing the API server to prevent lockups and infinite loops. (06:42:47)
- A validating admission policy can be created to block the use of the latest tag in images, with a failure policy set to fail and match constraints set to apply to deployments, daemon sets, or stateful sets. (06:43:42)
- The policy expression must match against what is allowed, not what is being blocked, and can use built-in functions such as all() and endsWith(). (06:45:14)
- A policy binding is required for the policy to take effect, and can be applied to specific namespaces or resources. (06:47:47)
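Put together, the policy and binding described above look roughly like this (using the v1 API available in recent Kubernetes releases; resource names are illustrative):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: deny-latest-tag
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments", "daemonsets", "statefulsets"]
  validations:
    - expression: >-
        object.spec.template.spec.containers.all(c,
          !c.image.endsWith(':latest'))
      message: "container images must not use the :latest tag"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: deny-latest-tag-binding
spec:
  policyName: deny-latest-tag
  validationActions: [Deny]
```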
- The API server can block deployments with a human-readable error message if the latest tag is not allowed, preventing pods from going down (06:49:35).
- Context variables such as object, oldObject, and request can be used in API calls, allowing for comparisons and preventing certain changes (06:50:06).
- Parameters can be defined using custom resources, enabling policies to be dynamically used in different ways, such as in different namespaces (06:51:22).
- Variables are reusable CEL expressions that can simplify validation expressions, but their names must be CEL-valid and cannot contain hyphens (06:53:17).
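As a fragment, variables are declared once in the policy spec and referenced through the `variables` namespace (field names per the ValidatingAdmissionPolicy API):

```yaml
spec:
  variables:
    - name: containers
      expression: object.spec.template.spec.containers
  validations:
    - expression: variables.containers.all(c, !c.image.endsWith(':latest'))
```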
- Audit annotations can be added to policies, providing extra metadata that shows up in the audit log when a policy fails (06:54:17).
- Message expressions allow for dynamic error messages using CEL expressions, providing actionable information to users (06:55:23).
- Mutating admission policies are being developed, introducing new resources and patch strategies, but are not yet finalized (06:56:11).
- Desired future features include generative policy, resource lookups for existing resources within a cluster, and policy exceptions, which are not currently possible with CEL due to cost implications and other limitations (06:59:00).
- Validating admission policies are safer than webhooks and can be used now, with the recommendation to start replacing webhooks with in-process policies where possible (07:00:07).
- The current implementation of policies is Kubernetes-centric, but it is theoretically possible to adapt the model to non-Kubernetes workloads and environments, such as ECS, by building support for these policies in the respective systems (07:03:58).
- The speaker, Leigh Capili, introduces himself and explains his background working on Kubernetes and continuous delivery, and his current work at a startup called Flox, which focuses on software environments (07:07:28).
- Nix is a package manager that lets users build software on top of various tools, libraries, and frameworks, and it has a different philosophy about how to package code and configuration (07:09:09).
- Nix separates code, configuration, and libraries into individual pieces and builds them in a way that allows for a fully capable dependency graph on the file system (07:11:46).
- Nix is more than just a package manager, with a contributor base of about 7,500 people and one of the largest package ecosystems in the world (07:13:28).
- Cloud native people may not need to package software this way, but using Nix can provide benefits such as native support for Macs, which cannot currently run OCI containers natively (07:16:13).
- Nix packages can be used to install software on a system, and the Flox environment makes it easy to list and manage the software being used from the Nix packages ecosystem (07:18:03).
- The workflow involves installing system-level software into a project folder, similar to how software engineers install language packages, and using a manifest and lock file to manage dependencies and versions (07:19:31).
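In Flox's CLI that workflow looks roughly like this (command names per current Flox releases; the installed package is an example):

```shell
flox init            # creates a .flox/ folder with a manifest and lock file
flox install ripgrep # records the package (pinned) in the manifest
flox activate        # enters a subshell with the environment's packages on PATH
```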
- The Nix store is used to cache dependencies, which can be more granular than container layers, and allows for better dependency caching and reuse (07:25:47).
- The Nix store can be used across different environments, such as macOS and Linux, and can resolve dependencies using the same hash and package name (07:27:19).
- Nix can also be run on Android, using a single-user mode installation, and can access the same software as on other platforms (07:28:35).
- The environment can be activated using the flox activate command, which allows for the use of services such as Prometheus and Grafana (07:29:19).
- A Nix host manager is a DaemonSet that runs on every node of a Kubernetes cluster, allowing users to shell in and run Nix commands, and uses a volume mount of /nix directly from the host (07:31:04).
- A Nix cache can be used as an OCI registry, allowing users to push binary builds to it, and can be used with containerd to construct layers directly from a Nix store (07:32:55).
- nix-snapshotter is a direct integration with containerd's snapshotter plugin ecosystem, allowing users to stream container images from a Nix store, and can create OCI manifests that containerd understands (07:34:42).
- The goal is to make Nix as usable as brew or git, and to create a better developer experience by factoring dependencies cleanly and automatically from the beginning (07:38:36).
- Nix can be used in various environments, including Windows Subsystem for Linux, MacBooks, and Android containers, and can help improve the build graph and cache hit ratio (07:38:54).
- Virtualization is a technique to simulate physical hardware, with the first generation hypervisor developed by IBM in the 1970s (07:48:17).
- A hypervisor has total control over the hardware, enabling it to manage and share hardware resources across multiple operating systems (07:48:29).
- The three core principles for a hypervisor are fidelity, safety, and performance, as formally proposed by Gerald Popek and Robert P. Goldberg in 1973 (07:48:46).
- Containers emerged in the 2010s as a different virtualization technology, driven by the popularity of the public cloud and its demands for more consolidation, density, and ease of software deployment and portability (07:53:47).
- Containers use three Linux kernel features: namespaces, cgroups, and capabilities, to provide a lightweight and resource-efficient solution for virtualization (07:54:28).
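These kernel features can be poked at directly; for example, UTS and PID namespaces via util-linux's unshare (requires root, or unprivileged user namespaces):

```shell
sudo unshare --uts --pid --fork --mount-proc sh -c '
  hostname demo-container   # changes the hostname only in this UTS namespace
  hostname
  ps -ef                    # PID namespace: only processes started in here
'
```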
- WebAssembly is a compiler target, meaning code can be compiled to it just as it is to x86-64, and it is the new kid on the block in virtualization technology (07:56:58).
- WebAssembly is a hardware abstraction layer, providing a layer of abstraction between the code and the hardware, with core principles of being fast, safe, and language-independent (07:57:23).
- WebAssembly has evolved since its introduction in 2017, with the addition of WebAssembly components, which provide higher-level types and true language independence (07:59:01).
- The comparison of virtual machines, containers, and WebAssembly is based on categories such as security, speed, resources required, cold start, portability, and cross-guest communication (08:00:22).
- WebAssembly shines in terms of resources required, as it can run multiple instances with a smaller footprint compared to VMs and containers (08:02:54).
- WebAssembly also excels in cold start and portability, thanks to its streaming compilation and architecture independence (08:03:32).
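"Compiler target" is concrete: assuming a Rust toolchain with the wasm32-wasip1 target and a runtime such as Wasmtime installed, the same .wasm binary runs unchanged on any host architecture:

```shell
rustc --target wasm32-wasip1 -O hello.rs -o hello.wasm
wasmtime hello.wasm     # the same binary runs on x86-64 or ARM hosts
```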
- Hyperlight is a library that leverages hypervisors to create OS-less micro-VMs, allowing for fast and safe execution of untrusted code (08:05:38).
- Hyperlight executes arbitrary code safely and quickly by spinning up a micro-VM that has no OS and executes functions at a granular level (08:06:55).
- The demo showcases the difference in response times between cold and warm endpoints, with the warm endpoint pulling from a pool of pre-instantiated sandboxes, resulting in faster response times (08:08:18).
- The key takeaways from the talk include WebAssembly being a compiler target that provides a hardware abstraction layer, standardizing communication between components, and being suitable for projects that require fast cross-communication and resource efficiency (08:13:39).
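The cold-vs-warm pattern from the demo can be sketched generically. This is not the Hyperlight API; `Sandbox` and `SandboxPool` are hypothetical names, and the sleep merely simulates an expensive cold start:

```python
import time

class Sandbox:
    """Stand-in for a micro-VM sandbox; init simulates cold-start cost."""
    def __init__(self):
        time.sleep(0.05)  # pretend this is expensive VM/guest setup

    def run(self, func, *args):
        return func(*args)

class SandboxPool:
    """Keep pre-instantiated sandboxes so 'warm' requests skip init cost."""
    def __init__(self, size):
        self._pool = [Sandbox() for _ in range(size)]

    def acquire(self):
        return self._pool.pop() if self._pool else Sandbox()  # cold fallback

    def release(self, sandbox):
        self._pool.append(sandbox)

def timed(call):
    t0 = time.perf_counter()
    call()
    return time.perf_counter() - t0

pool = SandboxPool(size=4)

cold = timed(lambda: Sandbox().run(lambda: None))  # pays the init cost
sandbox = pool.acquire()
warm = timed(lambda: sandbox.run(lambda: None))    # init already done
pool.release(sandbox)
print(f"cold={cold:.3f}s warm={warm:.3f}s")
```

The warm path is faster only because initialization happened ahead of time, which is exactly the trade-off the demo illustrates: memory held by the pool in exchange for latency.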
- WebAssembly (Wasm) components can be packaged as Open Container Initiative artifacts and used similarly to containers, with projects like Rong allowing direct pulling from OCI registries and deployment with Kubernetes (08:16:27).
- Hyperlight can be used as a normal library, embedded into applications to deploy micro-VMs; it is meant to be embeddable, unlike containers and gVisor (08:17:04).
- Microsoft Azure is already using WebAssembly under the hood, with services like AKS supporting Wasm execution, and Microsoft is working on running Wasm within Hyperlight for better isolation (08:19:16).
- The talk "Platform Engineering Loves Security" discusses the real security complexity in the cloud-native world, platform engineering, and actionable best practices to shift security down to the platform rather than left to developers (08:23:46).
- A security governance framework involves six main pillars: security review, threat model, security assessment, penetration testing, scorecard, and cloud governance board review, to ensure security and compliance of cloud services (08:25:14).
- A security governance framework is necessary to release a cloud pattern, which can be complex and painful without automation (08:27:45).
- The Cloud Native Computing Foundation landscape for security and compliance includes various products, making it challenging to choose the right one due to overlapping security capabilities (08:28:12).
- The CNCF's 4C security model, a four-layer model from code to cloud, helps split the architecture and identify security controls (08:28:34).
- Kubernetes security challenges include identity management, misconfiguration, and the need for automation (08:29:01).
- Platform engineering aims to provide a better developer experience by shifting down to the platform, abstracting and standardizing infrastructure, and automating best practices (08:31:28).
- The goal of platform engineering is to enable developers to focus on coding and committing to their Git repository, while the platform engineer handles infrastructure and security (08:32:56).
- Six actionable best practices for platform engineering include standardized configuration, scaling security best practices, reducing the attack surface, and preventing privilege escalation (08:34:34).
- Security engineers should focus on standardized configuration, scaling security best practices, and reducing the attack surface to prevent drift and ensure compliance (08:34:42).
- Scanning everything, including infrastructure as code and code source, is essential to identify misconfigurations and vulnerabilities (08:35:52).
- Compliance control and runtime security monitoring are crucial to ensure adherence to regulations and detect potential security threats (08:36:47).
- To prevent misconfiguration with security controls, shift left by giving early feedback to developers through tools like OPA policies, allowing them to catch and fix issues before they reach the first gate (08:38:19).
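As an example of that early-feedback idea, here is a minimal OPA/Rego admission policy (package and rule names are illustrative, not from the talk) that flags Pods whose containers do not enforce running as non-root:

```rego
package kubernetes.admission

deny[msg] {
    input.request.kind.kind == "Pod"
    some i
    container := input.request.object.spec.containers[i]
    not container.securityContext.runAsNonRoot
    msg := sprintf("container %q must set runAsNonRoot", [container.name])
}
```

Run against manifests in CI (for example with conftest) or in-cluster via an admission controller, this gives developers the failure message before deployment rather than after.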
- Platform engineers can simplify developers' lives by exposing interfaces and capabilities, automating and standardizing CI/CD pipelines, and providing templates and documentation (08:39:13).
- Platform engineers can inject complexity and security features at various layers, such as scanning containers and packages, signing provenance, and enforcing policies, without requiring developers to handle these tasks (08:41:44).
- Using orchestrators like Crossplane, Kratix, or Radius can facilitate coordination and deployment between applications and infrastructure, abstracting away complexity for developers (08:44:33).
- Abstraction layers, such as Dapr, can allow developers to describe what they want to deploy without writing Kubernetes manifests or Dockerfiles, and a portal can centralize documentation, scaffolding, and onboarding (08:45:15).
- There's a growing collaboration between stakeholders, including security, platform, and network teams, to work on segmentation and network policies (08:47:29).
- Network policies are moving upstream to facilitate collaboration between admin, security, and platform engineers (08:47:35).
- A shift is occurring away from Kubernetes itself, which is seen as a positive change (08:47:45).