20 Sep 2023
Meeting agenda: https://docs.google.com/document/d/1ushaVqAKYnZ2VN_aa3GyKlS4kEd6bSug13xaXOakAQI/edit#bookmark=id.mc1yty2f8e9v
- 7 participants
- 50 minutes
11 Sep 2023
A Kubernetes community meeting about Image Builder: a tool for building Kubernetes virtual machine images across multiple infrastructure providers.
- 3 participants
- 22 minutes
27 Jul 2023
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 5 participants
- 20 minutes
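As an illustration of the declarative APIs these meetings cover, a minimal sketch of a Cluster API manifest delegating infrastructure to the Azure provider (the name, namespace, and CIDR below are placeholders, not values from any meeting):

```yaml
# Hypothetical minimal example: a CAPI Cluster object referencing an
# AzureCluster for provider-specific infrastructure details.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AzureCluster
    name: my-cluster
```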
20 Jul 2023
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 7 participants
- 31 minutes
12 Jul 2023
Meeting agenda: https://docs.google.com/document/d/1ushaVqAKYnZ2VN_aa3GyKlS4kEd6bSug13xaXOakAQI/edit#bookmark=id.sx9vlhps99mn
- 16 participants
- 45 minutes
28 Jun 2023
Meeting notes: https://docs.google.com/document/d/1ushaVqAKYnZ2VN_aa3GyKlS4kEd6bSug13xaXOakAQI/edit#heading=h.pxsq37pzkbdq
- 11 participants
- 42 minutes
15 Jun 2023
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 10 participants
- 42 minutes
11 May 2023
Cluster API e2e test deep dive session discussing CAPI issue 8641
- 6 participants
- 44 minutes
27 Apr 2023
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 4 participants
- 15 minutes
30 Mar 2023
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 7 participants
- 25 minutes
9 Mar 2023
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 6 participants
- 36 minutes
1 Mar 2023
- 11 participants
- 32 minutes
23 Feb 2023
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 4 participants
- 16 minutes
15 Feb 2023
- 8 participants
- 36 minutes
8 Feb 2023
- 8 participants
- 51 minutes
14 Dec 2022
- 12 participants
- 57 minutes
16 Nov 2022
Bordier Butter in Brittany, France produces up to 380 tons of butter per year that gets shipped all over the world, to individuals and restaurants alike. They produce different kinds of butter including seaweed butter, yuzu butter, smoked salt butter, and more.
For more food and restaurant news, sign up for our newsletters: https://trib.al/wqZ0q3s
Credits:
Producer: Carla Francescutti
Field Producers/Directors: Anna Muckerman, Mohamed Ahmed
Camera: Anna Muckerman, Mohamed Ahmed
Editors: Anna Muckerman, Mohamed Ahmed
Executive Producer: Stephen Pelletteri
Development Producer: Ian Stroud
Supervising Producer: Stefania Orrù
Audience Development: Terri Ciccone, Frances Dumlao, Avery Dalal
0:00 - Overview
0:36 - Liquid cream process
1:43 - Buttermilk
2:40 - Cleaning
3:07 - Kneading
5:18 - Bordier History
5:59 - Quality Control
7:18 - Compound Butters
7:54 - Shaping & Packaging
10:09 - Cooking With Butter
11:50 - Legacy
----------------------------------------------------------------------------------------------------------
For more episodes of 'Vendors', click here: https://trib.al/YDlwIxY
Eater is the go-to resource for food and restaurant obsessives with hundreds of episodes and new series, featuring exclusive access to dining around the world, rich culture, immersive experiences, and authoritative experts. Binge it, watch it, crave it.
Subscribe to our YouTube Channel now! http://goo.gl/hGwtF0
- 3 participants
- 12 minutes
10 Nov 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 9 participants
- 56 minutes
9 Nov 2022
- 9 participants
- 49 minutes
18 Oct 2022
- 5 participants
- 40 minutes
13 Oct 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 7 participants
- 51 minutes
6 Oct 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 6 participants
- 54 minutes
29 Sep 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 6 participants
- 46 minutes
15 Sep 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 5 participants
- 33 minutes
12 Sep 2022
In this video we take a deep dive into how the Kubernetes Cluster Autoscaler's "balance similar node groups" feature works with the Cluster API cloud provider implementation.
- 5 participants
- 31 minutes
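For context on the feature discussed above, the balancing behavior is opt-in via a flag on the autoscaler binary; a sketch of the relevant container args in a cluster-autoscaler Deployment (the image tag is illustrative):

```yaml
# Fragment of a cluster-autoscaler Deployment spec enabling
# balancing of similar node groups with the Cluster API provider.
containers:
- name: cluster-autoscaler
  image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.27.0
  command:
  - ./cluster-autoscaler
  - --cloud-provider=clusterapi
  - --balance-similar-node-groups=true
```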
8 Sep 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 5 participants
- 13 minutes
6 Sep 2022
- 4 participants
- 48 minutes
31 Aug 2022
- 7 participants
- 30 minutes
31 Aug 2022
- 4 participants
- 41 minutes
18 Aug 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 5 participants
- 28 minutes
4 Aug 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 7 participants
- 49 minutes
21 Jul 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 5 participants
- 24 minutes
13 Jul 2022
- 9 participants
- 23 minutes
8 Jun 2022
- 3 participants
- 40 minutes
26 May 2022
SIG Cluster Lifecycle - Cluster API Azure Office Hours - 20220526
- 6 participants
- 17 minutes
25 May 2022
- 13 participants
- 41 minutes
12 May 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 8 participants
- 33 minutes
11 May 2022
- 12 participants
- 24 minutes
11 May 2022
- 3 participants
- 52 minutes
14 Apr 2022
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 6 participants
- 12 minutes
13 Apr 2022
- 12 participants
- 46 minutes
16 Mar 2022
- 3 participants
- 20 minutes
9 Mar 2022
- 11 participants
- 35 minutes
23 Feb 2022
- 4 participants
- 31 minutes
22 Feb 2022
- 11 participants
- 1:01 hours
16 Feb 2022
- 3 participants
- 27 minutes
15 Feb 2022
Meeting minutes https://docs.google.com/document/d/1ushaVqAKYnZ2VN_aa3GyKlS4kEd6bSug13xaXOakAQI/edit#
- 5 participants
- 41 minutes
9 Feb 2022
- 14 participants
- 45 minutes
3 Feb 2022
A Kubernetes community meeting about the Azure provider for Cluster API.
Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 6 participants
- 20 minutes
25 Jan 2022
- 6 participants
- 47 minutes
14 Jan 2022
Code walkthrough on how to implement API conversion.
The notes shown in the video can be found here: https://hackmd.io/aYLvrb_jTXiwg_MnpD804w
- 5 participants
- 1:10 hours
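The conversion pattern walked through in that session can be sketched in miniature. This is a hypothetical, self-contained example of hub-and-spoke conversion between two API versions of an invented `Widget` resource; the real Cluster API code implements controller-runtime's `conversion.Convertible` interface instead of these plain methods:

```go
package main

import "fmt"

// WidgetV1Beta1 is the hub (storage) version in this sketch.
type WidgetV1Beta1 struct {
	Name     string
	Replicas int32
}

// WidgetV1Alpha1 is an older spoke version lacking the Replicas field.
type WidgetV1Alpha1 struct {
	Name string
}

// ConvertTo converts the spoke into the hub, defaulting fields the
// older version does not carry.
func (src *WidgetV1Alpha1) ConvertTo(dst *WidgetV1Beta1) error {
	dst.Name = src.Name
	dst.Replicas = 1 // field absent in v1alpha1: apply a default
	return nil
}

// ConvertFrom converts the hub back into the spoke, dropping fields
// the older version cannot represent.
func (dst *WidgetV1Alpha1) ConvertFrom(src *WidgetV1Beta1) error {
	dst.Name = src.Name
	return nil
}

func main() {
	old := &WidgetV1Alpha1{Name: "demo"}
	hub := &WidgetV1Beta1{}
	if err := old.ConvertTo(hub); err != nil {
		panic(err)
	}
	fmt.Println(hub.Name, hub.Replicas)
}
```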
5 Jan 2022
- 4 participants
- 55 minutes
17 Nov 2021
Meeting minutes https://docs.google.com/document/d/1LdooNTbb9PZMFWy3_F-XAsl7Og5F2lvG3tCgQvoB5e4/edit#
- 12 participants
- 31 minutes
11 Nov 2021
A Kubernetes community meeting about the Azure provider for Cluster API.
Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 4 participants
- 12 minutes
10 Nov 2021
- 2 participants
- 17 minutes
27 Oct 2021
- 2 participants
- 31 minutes
26 Oct 2021
- 4 participants
- 14 minutes
19 Oct 2021
- 2 participants
- 25 minutes
5 Oct 2021
- 6 participants
- 54 minutes
21 Sep 2021
- 7 participants
- 59 minutes
15 Sep 2021
- 3 participants
- 30 minutes
14 Sep 2021
- 3 participants
- 12 minutes
7 Sep 2021
- 5 participants
- 25 minutes
31 Aug 2021
- 5 participants
- 15 minutes
19 Aug 2021
- 7 participants
- 15 minutes
10 Aug 2021
- 5 participants
- 31 minutes
27 Jul 2021
- 9 participants
- 59 minutes
13 Jul 2021
- 6 participants
- 29 minutes
7 Jul 2021
- 2 participants
- 22 minutes
6 Jul 2021
- 4 participants
- 10 minutes
24 Jun 2021
Cluster API Provider Azure office hours for 2021-06-24
Meeting agenda: http://bit.ly/k8s-capz-agenda
Join https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle if you don't have edit access to the meeting agenda.
- 5 participants
- 25 minutes
9 Jun 2021
- 2 participants
- 57 minutes
1 Jun 2021
- 6 participants
- 59 minutes
27 May 2021
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 8 participants
- 42 minutes
26 May 2021
- 3 participants
- 54 minutes
18 May 2021
- 3 participants
- 19 minutes
13 May 2021
Cluster API Provider Azure office hours for 2021-05-13
Meeting agenda: http://bit.ly/k8s-capz-agenda
Join https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle if you don't have edit access to the meeting agenda.
- 5 participants
- 10 minutes
12 May 2021
- 3 participants
- 35 minutes
11 May 2021
- 7 participants
- 35 minutes
4 May 2021
- 5 participants
- 40 minutes
29 Apr 2021
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 5 participants
- 36 minutes
28 Apr 2021
- 5 participants
- 20 minutes
20 Apr 2021
- 5 participants
- 23 minutes
15 Apr 2021
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
- 9 participants
- 20 minutes
14 Apr 2021
- 3 participants
- 47 minutes
13 Apr 2021
- 4 participants
- 54 minutes
6 Apr 2021
- 5 participants
- 46 minutes
31 Mar 2021
- 4 participants
- 58 minutes
23 Mar 2021
- 10 participants
- 55 minutes
17 Mar 2021
- 3 participants
- 55 minutes
9 Mar 2021
- 5 participants
- 33 minutes
4 Mar 2021
A Kubernetes community meeting about the Azure provider for Cluster API. Cluster API brings familiar, declarative APIs to Kubernetes cluster creation, configuration, and management.
We would love for you to join us! Follow along and set discussion topics at:
http://bit.ly/k8s-capz-agenda
- 9 participants
- 35 minutes
2 Mar 2021
- 5 participants
- 47 minutes
23 Feb 2021
- 6 participants
- 27 minutes
22 Feb 2021
SIG Cluster Lifecycle - Cluster API Provider AWS Office Hours - 20210222
- 4 participants
- 23 minutes
16 Feb 2021
- 5 participants
- 21 minutes
9 Feb 2021
- 8 participants
- 36 minutes
8 Feb 2021
SIG Cluster Lifecycle Cluster API Provider AWS Office Hours 20210208
- 5 participants
- 19 minutes
3 Feb 2021
- 5 participants
- 57 minutes
2 Feb 2021
- 4 participants
- 21 minutes
1 Feb 2021
etcdadm office hours
Agenda: https://docs.google.com/document/d/1b_J0oBvi9lL0gsPgTOrCw1Zlx3e7BYEuXnB3d2S15pA/edit#heading=h.e59q52mi9zxu
- 4 participants
- 40 minutes
25 Jan 2021
SIG Cluster Lifecycle - Cluster API Provider AWS Office Hours - 20210125
- 4 participants
- 13 minutes
20 Jan 2021
- 2 participants
- 56 minutes
12 Jan 2021
- 9 participants
- 46 minutes
11 Jan 2021
SIG Cluster Lifecycle - Cluster API Provider AWS Office Hours - 20210111
- 4 participants
- 24 minutes
5 Jan 2021
- 5 participants
- 57 minutes
14 Dec 2020
SIG Cluster Lifecycle - Cluster API Provider AWS Office Hours - 20201214
- 4 participants
- 16 minutes
11 Nov 2020
- 7 participants
- 56 minutes
10 Nov 2020
- 5 participants
- 59 minutes
10 Nov 2020
- 5 participants
- 31 minutes
3 Nov 2020
- 7 participants
- 27 minutes
28 Oct 2020
- 4 participants
- 11 minutes
27 Oct 2020
- 4 participants
- 1:04 hours
27 Oct 2020
- 3 participants
- 5 minutes
20 Oct 2020
- 4 participants
- 53 minutes
20 Oct 2020
- 7 participants
- 31 minutes
14 Oct 2020
- 5 participants
- 56 minutes
13 Oct 2020
Recording of the cluster addons biweekly meeting held on 20201013
- 4 participants
- 10 minutes
6 Oct 2020
- 7 participants
- 48 minutes
30 Sep 2020
- 5 participants
- 52 minutes
29 Sep 2020
Meeting Notes: https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#heading=h.pdzq6mdlevb5
- 5 participants
- 24 minutes
22 Sep 2020
Recording of the sig-cluster-lifecycle biweekly meeting held on 20200922
- 5 participants
- 18 minutes
16 Sep 2020
- 5 participants
- 59 minutes
15 Sep 2020
- 4 participants
- 30 minutes
1 Sep 2020
- 5 participants
- 18 minutes
25 Aug 2020
- 8 participants
- 50 minutes
11 Aug 2020
- 7 participants
- 50 minutes
6 Aug 2020
Milestone planning for Kubernetes Cluster API Provider AWS on 6 Aug 2020 at 18:00 UTC
- 4 participants
- 43 minutes
5 Aug 2020
- 6 participants
- 57 minutes
4 Aug 2020
Meeting notes: https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#heading=h.q9sl7lw9laa3
- 3 participants
- 11 minutes
2 Aug 2020
- 5 participants
- 55 minutes
28 Jul 2020
- 5 participants
- 33 minutes
22 Jul 2020
- 4 participants
- 59 minutes
21 Jul 2020
- 3 participants
- 12 minutes
8 Jul 2020
- 7 participants
- 57 minutes
7 Jul 2020
Meeting Notes: https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#heading=h.ddrn7k8vehon
- 4 participants
- 26 minutes
30 Jun 2020
- 6 participants
- 36 minutes
24 Jun 2020
- 8 participants
- 56 minutes
23 Jun 2020
Meeting notes: https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#heading=h.jhnwwzui8o8g
- 8 participants
- 59 minutes
17 Jun 2020
- 6 participants
- 58 minutes
16 Jun 2020
- 12 participants
- 54 minutes
10 Jun 2020
- 4 participants
- 59 minutes
9 Jun 2020
- 6 participants
- 17 minutes
20 May 2020
- 5 participants
- 56 minutes
13 May 2020
- 7 participants
- 57 minutes
12 May 2020
- 4 participants
- 9 minutes
5 May 2020
- 8 participants
- 43 minutes
29 Apr 2020
- 4 participants
- 54 minutes
28 Apr 2020
- 6 participants
- 26 minutes
15 Apr 2020
- 6 participants
- 59 minutes
14 Apr 2020
- 7 participants
- 54 minutes
8 Apr 2020
- 6 participants
- 58 minutes
3 Apr 2020
- 10 participants
- 52 minutes
1 Apr 2020
- 9 participants
- 1:01 hours
31 Mar 2020
Meeting notes: https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#heading=h.yugdr2ba2zfv
- 7 participants
- 1:04 hours
25 Mar 2020
- 5 participants
- 50 minutes
18 Mar 2020
- 7 participants
- 54 minutes
11 Mar 2020
- 8 participants
- 59 minutes
4 Mar 2020
- 8 participants
- 58 minutes
3 Mar 2020
- 7 participants
- 43 minutes
19 Feb 2020
- 7 participants
- 57 minutes
18 Feb 2020
- 6 participants
- 53 minutes
17 Feb 2020
- 7 participants
- 24 minutes
17 Feb 2020
Ad-hoc meeting to discuss work to be done on the CAPI operator
- 7 participants
- 52 minutes
12 Feb 2020
- 5 participants
- 55 minutes
11 Feb 2020
- 12 participants
- 37 minutes
5 Feb 2020
- 5 participants
- 59 minutes
4 Feb 2020
- 5 participants
- 38 minutes
29 Jan 2020
- 11 participants
- 58 minutes
28 Jan 2020
Kubernetes SIG Cluster Lifecycle meeting for January 28, 2020
- 13 participants
- 44 minutes
22 Jan 2020
- 8 participants
- 56 minutes
21 Jan 2020
- 8 participants
- 29 minutes
15 Jan 2020
- 9 participants
- 59 minutes
8 Jan 2020
- 9 participants
- 58 minutes
11 Dec 2019
- 13 participants
- 54 minutes
10 Dec 2019
Meeting Notes: https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#heading=h.u7mt91ljn6o5
- 3 participants
- 21 minutes
27 Nov 2019
- 8 participants
- 54 minutes
26 Nov 2019
- 6 participants
- 57 minutes
13 Nov 2019
- 10 participants
- 60 minutes
12 Nov 2019
- 7 participants
- 53 minutes
6 Nov 2019
- 8 participants
- 56 minutes
4 Nov 2019
0:07 - Introduction
3:11 - SIG Cluster Lifecycle, agenda document, mailing list
6:28 - The kubeadm release cycle, KEPs
14:38 - The kubeadm issue tracker (k/kubeadm), labels, deprecation policies
41:30 - OWNER files, sending PRs, test-grid
57:36 - The kubeadm source tree
- 1 participant
- 1:17 hours
30 Oct 2019
- 6 participants
- 31 minutes
29 Oct 2019
- 4 participants
- 51 minutes
23 Oct 2019
- 10 participants
- 59 minutes
16 Oct 2019
- 8 participants
- 58 minutes
15 Oct 2019
- 6 participants
- 51 minutes
9 Oct 2019
- 9 participants
- 55 minutes
1 Oct 2019
- 6 participants
- 50 minutes
25 Sep 2019
- 10 participants
- 57 minutes
19 Sep 2019
- 9 participants
- 54 minutes
17 Sep 2019
https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit
Thanks Cornelia Davis for recording
- 7 participants
- 23 minutes
11 Sep 2019
- 6 participants
- 51 minutes
4 Sep 2019
- 7 participants
- 48 minutes
3 Sep 2019
- 5 participants
- 20 minutes
28 Aug 2019
- 6 participants
- 56 minutes
21 Aug 2019
- 5 participants
- 58 minutes
20 Aug 2019
- 5 participants
- 47 minutes
14 Aug 2019
- 11 participants
- 54 minutes
6 Aug 2019
- 8 participants
- 1:05 hours
31 Jul 2019
- 7 participants
- 56 minutes
24 Jul 2019
- 6 participants
- 51 minutes
17 Jul 2019
- 11 participants
- 58 minutes
9 Jul 2019
- 6 participants
- 47 minutes
3 Jul 2019
- 8 participants
- 58 minutes
26 Jun 2019
- 5 participants
- 49 minutes
25 Jun 2019
- 6 participants
- 38 minutes
19 Jun 2019
NOTE: parts of the VOD are corrupted; try skipping them.
agenda: https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit#
- 11 participants
- 54 minutes
12 Jun 2019
- 8 participants
- 56 minutes
11 Jun 2019
- 5 participants
- 41 minutes
28 May 2019
Cluster Addons meeting 2019-05-28. https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#
- 4 participants
- 36 minutes
23 May 2019
Join us for Kubernetes Forums Seoul, Sydney, Bengaluru and Delhi - learn more at kubecon.io
Don't miss KubeCon + CloudNativeCon 2020 events in Amsterdam March 30 - April 2, Shanghai July 28-30 and Boston November 17-20! Learn more at kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects
Deep Dive: Cluster Lifecycle SIG (Kubeadm) - Fabrizio Pandini & Lubomir I. Ivanov, VMware
The Cluster Lifecycle SIG is the Special Interest Group that is responsible for building the user experience for deploying Kubernetes clusters. Our objective is to simplify creation, configuration, upgrade, downgrade, and teardown of Kubernetes clusters and their components. In this deep dive, we will take a look at recent changes in kubeadm, examine how kubeadm is going to implement support for high availability clusters, and finally peek through the window to see what will come next. We’ll reserve time to talk about how to get involved with SIG Cluster Lifecycle and kubeadm, for your questions, concerns, and feature requests!
https://sched.co/MPj5
- 8 participants
- 41 minutes
14 May 2019
Cluster Addons meeting 2019-05-14. https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#
- 5 participants
- 56 minutes
30 Apr 2019
Cluster Addons meeting 2019-04-30. https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#
- 7 participants
- 55 minutes
12 Apr 2019
Greetings newcomers!
We've decided to hold our first sig-cluster-lifecycle new contributor session. The goal of the meeting will be to outline several of the sub-projects of the SIG and where they fit in the stack.
We will also outline the scope of the tools and some core first principles that the SIG adheres to. From there we will discuss how we operate and how you can engage and help contribute.
All skill levels welcome! You can find more about SIG Cluster Lifecycle here: https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle
- 13 participants
- 60 minutes
5 Apr 2019
Cluster Addons meeting 2019-04-05. https://docs.google.com/document/d/10_tl_SXcFGb-2109QpcFVrdrfnVEuQ05MBrXtasB0vk/edit#
- 21 participants
- 53 minutes
6 Mar 2019
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.rwck60ac93yg
- 10 participants
- 28 minutes
27 Feb 2019
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.rowj34kleg2z
- 13 participants
- 48 minutes
20 Feb 2019
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.7a1zt1i34tv9
- 18 participants
- 57 minutes
13 Feb 2019
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.rm2b4redfsar
- 11 participants
- 35 minutes
6 Feb 2019
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.1knsflv47tux
- 18 participants
- 33 minutes
23 Jan 2019
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.1easfdkp14us
- 15 participants
- 57 minutes
16 Jan 2019
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.t969yu41wiix
- 9 participants
- 56 minutes
9 Jan 2019
Meeting Notes: https://docs.google.com/document/d/1Ys-DOR5UsgbMEeciuG0HOgDQc8kZsaWIWJeKJ1-UfbY/edit#heading=h.72kntbx8l6rh
- 7 participants
- 35 minutes
19 Dec 2018
Meeting notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.sp1hfrs780gt
- 9 participants
- 1:39 hours
12 Dec 2018
In-person HA Enablement Meeting:
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit?usp=sharing
- 14 participants
- 1:54 hours
5 Dec 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.sp1hfrs780gt
Highlights:
- Adding automated checks to the code repository
- Local defaulting
- Cluster autoscaler backend
- 2019 roadmap
- How to contribute?
- Planning for webhook development
- Update on breakout session next week
- 11 participants
- 60 minutes
28 Nov 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.yaovy3j8you4
Highlights:
- Scope of clusterctl: adding functionality vs. using kubectl or bespoke tooling
- HA support in clusterctl
- ProviderID in machine {spec|status}
- When can the API be considered stable?
- Machine phases PR
- 7 participants
- 57 minutes
21 Nov 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.m14nsau0a37c
Highlights:
- Namespacing the machine annotation
- Renaming ProviderConfig to ProviderSpec
- Machine Phases
- Splitting machines into two yaml documents
- Excluding resources when pivoting a cluster
- How do addons / cluster bundles relate to the cluster API?
- What’s current consensus of how detailed the Cluster kind should be?
- Release status
- 12 participants
- 57 minutes
20 Nov 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.517coa2gca4m
Highlights:
- 1.13 release notes and blog post
- Version skew and upgrade order documentation
- LTS working group
- Sending out surveys
- 5 participants
- 37 minutes
14 Nov 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.nmlfae4ffj1s
Highlights:
- Discussed the proposal for common provisioning logic
- 6 participants
- 39 minutes
7 Nov 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.5c21ziyjlves
Highlights:
- Review the proposal for provisioning logic
- Tests for machine set and machine deployment controller deleted during CRD migration
- Switching the yaml parser to the new location
- Provider ID in machine status
- Renaming provider config to provider spec
- Requiring doc updates for API changes
- Adding context to actuator methods
- Using kubelet args to set labels and taints on nodes
- 9 participants
- 56 minutes
31 Oct 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.klks7imc1cg0
Highlights:
- PR to add machine phases
- Support for static IPs
- Adding gitbook documentation
- Dependence on NodeRefs: managed vs. unmanaged clusters
- PR to add initial support for phases (alpha command)
- Adding a provider Id to machine status
- 9 participants
- 1:03 hours
24 Oct 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.vtryp5oq72xt
Highlights:
- Support for phases in clusterctl
- Renaming ProviderConfig to ProviderSpec
- Demo of GitBook for Cluster API
- Updating provider-skeleton to CRDs and moving into kubernetes-sigs?
- Literal struct fields cannot be defaulted via webhook; make them pointers
- Replacing glog
- 16 participants
- 60 minutes
17 Oct 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.lpxy92305xqu
Highlights:
- When to apply addons to clusters
- Upgrades of cluster API clusters
- Proposal for upstreaming provisioning scripts
- Assumptions in machine actuator
- 11 participants
- 1:01 hours
26 Sep 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.r801lpnsj9zd
Highlights:
- Representing target cluster status
- Update on CRD migration
- Addon management
- Proposal of Machine States and Phases
- 4 participants
- 37 minutes
19 Sep 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.h4j1jyuvtsrl
Highlights:
- Demo of MCM+Autoscaler
- Cluster API provider for DigitalOcean is finished!
- Update on CRD migration
- Longer term goal for the clusterctl CLI
- Any feedback from folks running clusters and machines in multiple namespaces?
- 9 participants
- 55 minutes
12 Sep 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.9tb0lzmd3t0e
Highlights:
- New Zoom Link
- Update on CRD migration for cluster-api repo
- Update on CRD migration for cluster-api-provider-gcp repo
- MachineClass
- Reconciling Machines with Nodes
- API guarantees when running external controllers
- Machine Phases / States
- 8 participants
- 53 minutes
5 Sep 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.7tfb6scheyxs
Highlights:
- Moving from API aggregation to CRDs (with help from the authors of kubebuilder)
- Creating a new release of clusterctl
- Discussion on MachineClass
- Scale down strategy for MachineSet (and cluster autoscaler integration)
- Splitting the Machine API from the Cluster API
- 10 participants
- 56 minutes
22 Aug 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.v50hoocd695
Highlights:
- Pivoting by default?
- Moving to CRDs
- Making timeouts configurable
- Adding node conditions to machine status
- Steps to create a new Cluster API provider
- Renaming external to bootstrap (and internal to target)
- Transferring ownership of cluster-api-provider-skeleton
- Re-initiating discussion of MachineClass
- Implementer office hours (and conflicts with the AWS implementers meeting)
- Schedule for alpha
- 13 participants
- 49 minutes
8 Aug 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.aux3ws54yyjb
Highlights:
- AWS Cluster API Implementation kickoff meeting is today
- Prototype MIG implementation under development
- Plans for extensibility of machine-controllers for out-of-tree provider support
- Should the apiserver port be part of the cluster spec?
- Scope of sig-cluster-lifecycle and how it relates to configuration of networking & storage
- etcd lifecycle management
- Cluster objects in multiple namespaces
- Using the Cluster API in eksctl, where AWS manages the control plane
- 13 participants
- 58 minutes
25 Jul 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.6vxzlo2hz75h
Highlights:
- Pruning dependencies
- Requirements for an external cluster
- Provider code has been removed from the main repo
- Why pivot the controller stack into the cluster? What about leaving it running?
- Using the Cluster API with hosted solutions like GKE, EKS, AKS
- Using ASG / MIG
- Provider Implementers' Office Hours Slots
- 12 participants
- 48 minutes
24 Jul 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.slrrqfmrca78
Highlights:
- New Cluster API Provider repositories
- Update on Cluster API alpha release
- PSA regarding 1.12 execution
- SIG Charter
- v1beta1 changes for config
- CRI and working with sig node
- Docker version for Ubuntu 18.04
- 8 participants
- 45 minutes
18 Jul 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.8nb4w63jh60f
Highlights:
- Official Azure provider
- Issues for new contributors
- Vote on creation of https://github.com/kubernetes-sigs/cluster-api-provider-aws
- Congrats on creating https://github.com/kubernetes-sigs/cluster-api-provider-openstack!
- Multiple implementations for each provider in the same repo?
- Vote on creation of https://github.com/kubernetes-sigs/cluster-api-provider-gcp
- Rename ProviderConfig to ProviderSpec
- Update on the PR backlog
- Discussion of how to get kubeconfig for a cluster (ssh or not)
- Alpha exit criteria
- Introduction to new folks from Red Hat
- Packet provider
- Provider Implementers Office Hours
- 15 participants
- 54 minutes
17 Jul 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.9i0mvlglyeh5
Highlights:
- kubeadm / package support for other versions of Ubuntu
- Update on kube-deploy repository
- Transfer of ownership to kubernetes-sigs for https://github.com/detiber/cluster-api-provider-aws
- sig-charter
- State of testing and the cluster directory
- SIG sessions at KubeCon
- kubeadm-dind-cluster update
- 8 participants
- 37 minutes
11 Jul 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.a0wiv5tuwnd6
Highlights:
- Alignment of cluster and machine actuator interfaces
- Bootstrapping an AWS provider implementation
- Update on creating the openstack repo
- clusterctl support for out-of-tree providers
- Deep dive session at KubeCon China / NA?
- Review of aggregate apiserver vs. CRDs
- Naming of ProviderConfig
- Issue triage for alpha milestone
- Office hours for provider implementors
- Repurposing code from the cross-cloud CNCF project
- Support for multiple masters
- 14 participants
- 56 minutes
10 Jul 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.py47inrndi23
Highlights:
- Brief discussion of SIG Charter
- Registering for SIG sessions at Kubecon China and Kubecon NA
- kubeadm API transition (v1alpha3 or v1beta1)
- Control plane component timeouts are an issue on slower devices when using kubeadm
- Discussion around what the SIG plans on supporting for different container runtimes
- 7 participants
- 41 minutes
20 Jun 2018
Link to document: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
- 6 participants
- 34 minutes
19 Jun 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.pzy4m7bi9skj
Highlights:
- 1.11 release status
- Need to define a support matrix for kubeadm upgrades
- Test status
- Where to document alpha features
- Pending docs PR
Planning for 1.12 will start next week.
- 7 participants
- 52 minutes
13 Jun 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.xil9madvokmo
Highlights:
- First release of clusterctl
- Status of clusterctl documentation
- Roles in the Machine object
- Specifying per Machine kubelet configuration
- Adding some networking configuration to machine status
- Integrating better with IaaS control planes (e.g. GKE, AKS, EKS)?
- How does someone new try out the Cluster API?
- 10 participants
- 57 minutes
12 Jun 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.k6sq8orkf4wg
Highlights:
- etcd issues
- Documentation for 1.11
- Kubeadm upgrade tests
- Cluster API alpha exit criteria
- Cloud provider documentation
- Themes for 1.11 release notes
- 11 participants
- 55 minutes
6 Jun 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.7u36zfdwfnqk
Highlights:
- Deleting "deployer" directories in favor of standardizing on clusterctl
- Building the first release of clusterctl
- SSH provider for already provisioned machines
- Continued discussion about how to run multiple clusters in a single namespace
- 11 participants
- 51 minutes
30 May 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.7s0qcdkp4xc8
Highlights:
- Base images moving from alpine to debian
- API cleanup, in particular removing container runtime from the machine spec
- Proposal to add network address to machine
- Plan for cluster-wide actuators
- Machine to Cluster references
- Renaming the terraform provider to vSphere provider
- 11 participants
- 44 minutes
23 May 2018
10:03:16 From Kris Rousey : bad things happen when I try to record
10:13:06 From Cluster Ops : Can somebody volunteer to take notes here please?
10:31:34 From Hardik Dodiya : http://gardener.cloud/ , we use master cluster to manage worker clusters, works great !!
10:39:46 From Daniel Lipovetsky : Can someone please post a link to the kubeadm etcd spec being discussed?
10:39:52 From Cluster Ops : yes
10:40:04 From Cluster Ops : Also curious
10:42:44 From luxas : https://docs.google.com/document/d/1Sk2VW7IKaLjrjf_4a1NEU0l7X0ysZoXq9lVBvH-ruYY/edit#
10:42:52 From luxas : (also in meeting notes)
10:48:42 From Kris Rousey : krousey, kris_nova, robert bailey, and rodrigo are the admins if this break you... I will post something to slack with this regard
10:48:53 From Cluster Ops : okay thanks krousey
- 9 participants
- 50 minutes
23 May 2018
Quick demo of a Kubicorn cluster on AWS with a cluster API controller courtesy of Kris Nova
- 1 participant
- 31 minutes
9 May 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.xh5qa37052mc
Highlights:
- Recap from KubeCon
- Introducing Tencent
- Process for changing deployer images
- Cluster API Validation
- MachineDeployment PR
- Moving cloud provider code out of tree
- Validating ProviderConfig
- 12 participants
- 38 minutes
7 May 2018
Want to view more sessions and keep the conversations going? Join us for KubeCon + CloudNativeCon North America in Seattle, December 11 - 13, 2018 (http://bit.ly/KCCNCNA18) or in Shanghai, November 14-15 (http://bit.ly/kccncchina18).
What Does “Production Ready” Really Mean for a Kubernetes Cluster? - Lucas Käldström, Individual (Advanced Skill Level)
How would you describe and set up a “production ready” Kubernetes cluster? How are the buzzword terms “production ready” and “highly available” defined anyway? Can a cluster be created so that it’s end-to-end secured, has no single points of failure, is upgradable without control plane downtime and is conformant? If you have access to automated infrastructure, e.g. via a Cluster API controller, you should be able to do CI testing of your cluster, as well as CD of new configuration and versions. Some call this pattern “GitOps”; to write the desired cluster state declaratively and let a controller reconcile the cluster state. By the end of this talk, you should be able to tell: - What you may consider a “production ready” cluster to be and identify the moving parts - How to secure cluster component traffic - How to minimize failure points - How to manage clusters using the Cluster API
"About Lucas
Lucas is a passionate Kubernetes Maintainer and Certified Kubernetes Administrator that is excited about all things cloud native. Lucas has been engaged in Kubernetes work for about two years now and been involved in work like porting Kubernetes to multiple platforms, getting Minikube off the ground, being a core contributor in SIG Cluster Lifecycle and maintaining kubeadm. Besides Upper Secondary School Lucas runs a consulting company for Cloud Native tech programming tasks and runs the official CNCF & Kubernetes meetup in Finland. Lucas has been speaking at KubeCon in Berlin and Austin previously, and was awarded the Top Cloud Native Ambassador of 2017 together with Sarah Novotny."
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
Join us for KubeCon + CloudNativeCon in San Diego November 18 - 21. Learn more at https://bit.ly/2XTN3ho. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
- 1 participant
- 35 minutes
2 May 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.pwt481usyw83
This was a short meeting due to many folks attending KubeCon EU this week.
Topics:
- Cascading deletes of machines
- Status of MachineDeployment controller
- ProviderStatus on machines
- Status of migration to the cluster-api repository
- 6 participants
- 16 minutes
25 Apr 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
Highlights:
- Discussion about new clusterctl tool and development location
- Presentation on configurable machine installation/setup on GCE
- 10 participants
- 38 minutes
4 Apr 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
Highlights:
- Kubeadm GA with Cluster API testing
- New import alias
- Machine Deployments progress
- CloudProvider migration effort
- Milestone updates
- 5 participants
- 18 minutes
21 Mar 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.dqa4ycwj7avf
Highlights:
- Plans for the new repository (owners, migrating code, migrating issues)
- MachineClasses
- Breaking the machine controller's dependency on Google Actuator
- MachineDeployment status
- 10 participants
- 58 minutes
14 Mar 2018
Link to doc: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
10:04:15 From Jason DeTiberus : I'd like it if we could get rid of the prefixes in favor of tags, but I'm not willing to fight that battle myself :)
10:16:52 From ctracey : Should we be taking a cue from the current work to extract cloud providers from kubernetes itself…will we fall into the same trap if compiled directly into the cluster-api?
10:17:03 From Cluster Ops : ^ that is such a good question
- 6 participants
- 24 minutes
13 Mar 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.xsikgkn152he
Highlights:
- Rolling etcd back to 3.1.12 for the 1.10 release
- 1.9 to 1.10 upgrade tests
- HA upgrade doc
- Code freeze Wed
- Future discussions needed on kubelet dynamic config & CoreDNS
- Deleting old getting started guides
- 6 participants
- 46 minutes
7 Mar 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
- 10 participants
- 42 minutes
28 Feb 2018
Link to meeting: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
- 16 participants
- 45 minutes
28 Feb 2018
- 8 participants
- 36 minutes
14 Feb 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.xvr33m5suu00
Highlights:
- CRDs vs aggregated APIs with @erictune
- Splitting API groups
- Type for provider config
- Terminal vs. transient errors
- Container runtime in machine spec
- 12 participants
- 59 minutes
13 Feb 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.rv6m9fojzdg3
Highlights:
- Open issues for 1.10
- kubeadm flags
- kubeadm HA
- CRDs vs. API aggregation
- Subprojects
- status of k8s anywhere
- 7 participants
- 34 minutes
7 Feb 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.fr5wd1ldzmbj
Highlights:
- Discussion on CRDs vs API Aggregation
- Review of alpha milestone issues
- Overlap / integration with the Cluster Registry effort
- Moving machine & cluster API to different groups
- Documentation updates
- Improving test coverage
- 8 participants
- 50 minutes
7 Feb 2018
Meeting Notes: https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit#heading=h.q2tayno78vgq
Highlights:
- etcd UX
- kubeadm HA upgrade instructions
- Punting on self-hosting?
- 4 participants
- 51 minutes
6 Feb 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.zgqqzba6ssut
Highlights:
- Demo & discussion of https://github.com/kopeio/etcd-manager
- kubeadm on GKE
- Splitting kubeadm out of the main repository
- Status of kubeadm moving to GA
- Publishing the SIG mission statement
- Flag proliferation and cleanup
- 8 participants
- 59 minutes
30 Jan 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.fb897yipgfuc
- 8 participants
- 39 minutes
24 Jan 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.io2hjkir5u89
Highlights:
- Namespacing for api objects
- Summary of the node-controller-manager design review
- Discussion about managed machine states
- Status of PRs for Machines & MachineSets
- Validating custom provider configs
- 10 participants
- 56 minutes
24 Jan 2018
Meeting Notes: https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit#heading=h.ngmmgzhwdlbf
Highlights:
- Issue triage for the 1.10 milestone
- 6 participants
- 52 minutes
23 Jan 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.guuszpv89vya
Highlights:
- Reviewed the pull request backlog for kubeadm
- Discussed UX workflow for multi-master clusters
- SIG survey
- KubeCon sessions
- First KEP for cluster lifecycle
- 9 participants
- 56 minutes
17 Jan 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.gltopy3z23bu
Highlights:
- Design review for the node-controller-manager is next Monday
- New Slack channel for cluster-api discussions
- Reducing duplicate code (generated clients) in the git repo
- PRs for Machine / MachineSet
- Namespacing for Machines
- States for managed machines
- 8 participants
- 32 minutes
17 Jan 2018
Meeting Notes: https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit#heading=h.k21262mvpkl8
Highlights:
- Updates from CoreOS about the stability of running self-hosted etcd
- Discussion on how to handle secrets for a self-hosted apiserver
- 5 participants
- 28 minutes
16 Jan 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.hmxea1qykdh
Highlights:
- Old meeting notes have been archived at https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit
- Cleanup of the cluster directory is proceeding, including a PR out to delete all of the salt configs
- Check out the support for multiple clusters in kubeadm-dind
- PR & Issue triage should happen today
- 6 participants
- 19 minutes
10 Jan 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.evqj64lfwx9m
Highlights:
- Presentation on a similar effort to the Cluster API: https://github.com/gardener/node-controller-manager
- Q&A and discussion around implementation strategy with the node-controller-manager project
- Discussion around single machine vs. scale group
- 12 participants
- 51 minutes
10 Jan 2018
Meeting Notes: https://docs.google.com/document/d/16CEsBSSGm3sMpvB_cFnKnqqi1OxhIcyX3lVwBpIyMHc/edit#heading=h.65cg9rbfsql9
Highlights:
- HA
- Cert rotation
- Self hosting
- 7 participants
- 48 minutes
9 Jan 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.jr94l93nzpz8
Highlights:
- Followups on action items from last meeting
- Cloud provider working group overview
- kubeadm GA list
- 1.10 planning
- 7 participants
- 59 minutes
3 Jan 2018
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.xmmltqf4u77o
Highlights:
- How do we expect to reuse common code across environments?
- Resource scoping - global or namespaced?
- Cluster bootstrapping
- How to pivot machine controllers or run multiple machine controllers for a cluster
- Encouraging new contributions
- 2018 roadmap
- 11 participants
- 39 minutes
2 Jan 2018
Meeting Notes: https://docs.google.com/document/d/1deJYPIF4LmhGjDVaqrswErIrV7mtwJgovtLnPCDxP7U/edit#heading=h.28g6lc7lhcgu
Highlights:
- Should we save content that was previously on the wiki?
- Adding more folks to GitHub teams for cluster lifecycle
- Should we support hyperkube?
- Plan to use certification as a means to cull the number of turn-up guides in the official Kubernetes documentation
- Please review the kubeadm GA document
- Next week we will do 1.10 planning
- Please review the contributing to the sig document
- Discussion about `kubeadm --join --master`
- 8 participants
- 55 minutes
20 Dec 2017
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.cmfhryt158zg
Highlights:
- Brainstormed about the 2018 Roadmap for Cluster API
- Update on MachineSets
- Update on AWS implementation
- 7 participants
- 46 minutes
19 Dec 2017
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.n1as6vm6zx8z
Highlights:
- How to run kubeadm e2e tests
- Path forward on https://github.com/kubernetes/kubernetes/pull/56956
- KubeCon retrospective
- KEPs for sig cluster lifecycle
- Change the Wed morning meeting to office hours for kubeadm
- Status of kubeadm adoption working group (fold into office hours)
- 1.9 planning retrospective
- 7 participants
- 59 minutes
15 Dec 2017
Building a Cluster Management API using Kubicorn [A] - Robert Bailey, Google & Kris Nova, Heptio
Kris Nova (Heptio) and Robert Bailey (Google) join forces and begin the difficult task of looking into the future of the infrastructure layer of Kubernetes. We start the talk with a brief summary of the state of infrastructure today and explain the differences between “infrastructure as code” and “infrastructure as software”. We look at how the lack of definition in the most fundamental layer of the stack has fragmented our community and caused problems with adoption of Kubernetes.
We propose a new way of representing infrastructure (the cluster API) for the Kubernetes community and take a deep dive into its implementation in kubicorn. We look at the structure of the cluster API and share valuable insight on how we took lessons from other areas of Kubernetes to form what it is today. Furthermore we look at the power of having a declarative approach to infrastructure as we start to treat the infrastructure layer the same as the application layer.
The audience will walk away with a clear understanding of the infrastructure layer, as well as a new way of thinking about the infrastructure in the future via the cluster API.
About Robert Bailey
Robert is a lead for the cluster lifecycle SIG and has been working on Kubernetes for more than 3 years. He was one of the founding members of the Google Container Engine team. Prior to Kubernetes, he was a Site Reliability Engineer helping teams at Google launch new products and services.
About Kris Nova
Kris Nova is an Advocacy Boss for Heptio with an emphasis in containers and the Linux operating system. She lives and breathes open source. She believes in advocating for the best interest of the software, and keeping the design process open and honest. She is a backend infrastructure engineer, with roots in Linux, and C. She has a deep technical background in the Go programming language, and has authored many successful tools in Go. She is a Kubernetes maintainer, and the creator of kubicorn, a successful Kubernetes infrastructure management tool. She organizes a special interest group in Kubernetes, and is a leader in the community. Kris understands the grievances with running cloud native infrastructure via a distributed cloud native application, and is authoring an O'Reilly book on the topic called Cloud Native Infrastructure.
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
- 5 participants
- 34 minutes
15 Dec 2017
Self-Hosted Kubernetes: How and Why [I] - Diego Pontoriero, CoreOS
How Kubernetes is deployed and managed has changed since the first release of the project. From configuration management systems and unit files to deploying Kubernetes using Kubernetes, a lot has changed. Self-hosted Kubernetes has many benefits as a deployment option, and this talk will highlight those benefits, as well as explain the history and nuances of making self-hosted Kubernetes possible.
In this talk I will describe what self-hosted Kubernetes means, why it exists, how it came into existence, and what you need to know if you're running a self-hosted cluster. Many tools now deploy self-hosted clusters including bootkube and kubeadm, so knowledge of how this works can be very important for anybody running a Kubernetes cluster.
What are the benefits of self-hosting? How does it work? What do I need to know if I'm administering a self-hosted cluster?
All those questions and more will be discussed in detail in this talk. In addition, I will discuss how various projects and products take advantage of the many benefits of self-hosting, such as Tectonic.
About Diego Pontoriero
Diego Pontoriero is a Software Engineer on the Tectonic team at CoreOS, where he works on software that deploys, manages, and upgrades self-hosted Kubernetes clusters. Prior to CoreOS Diego worked at Google building a video-based learning platform, a mobile phone carrier, and a petabyte-scale data warehouse.
- 6 participants
- 34 minutes
15 Dec 2017
kubeadm Cluster Creation Internals: From Self-Hosting to Upgradability and HA [A] - Lucas Käldström, Student
kubeadm is the Kubernetes tool that helps you set up a Kubernetes cluster quickly and easily. kubeadm is different from other Kubernetes setup tools in that it doesn’t assume or depend on any special infrastructure. It assumes that you have one or more machines available and that those machines can connect to each other over the network.
The master plan is to make kubeadm work both as the “fast path” to getting a best-practice Kubernetes cluster with a couple of easy-to-remember commands and as a toolbox for higher-level solutions like GKE, kops and Tectonic.
But how does kubeadm actually set up a cluster? How is it so easy to add a node with the Bootstrap Token? How does it self-host the control plane? How does it upgrade clusters smoothly with only one command? What is the plan for achieving HA without relying on any external infrastructure?
After this talk, you will be able to describe how:
- kubeadm runs the different tasks in different stages
- the network traffic between the cluster components flows
- self-hosting of the control plane works
- the Bootstrap Token works
- the `kubeadm upgrade` command works
- kubeadm will support multiple masters that are dynamically rotated
- you can extend kubeadm to build your higher-level Kubernetes deployment tool
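The workflow the talk walks through maps onto a handful of kubeadm commands. A minimal sketch of that flow (exact flags vary by release; the IP, token, hash, and version values are placeholders):

```shell
# On the first machine: bootstrap the control plane.
kubeadm init

# Mint a Bootstrap Token and print the matching join command for new nodes.
kubeadm token create --print-join-command

# On each additional node: join the cluster using that token.
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Later, check for and apply an upgrade with one command each.
kubeadm upgrade plan
kubeadm upgrade apply v1.9.0
```

This is illustrative only; it requires real machines and cannot run stand-alone.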
About Lucas Käldström
Lucas is a passionate Kubernetes Maintainer and CNCF Ambassador who is excited about all things cloud native. Lucas has been engaged in Kubernetes work for about two years now, involved in efforts like porting Kubernetes to multiple platforms, getting Minikube off the ground, being a core contributor in SIG Cluster Lifecycle, and maintaining kubeadm. Besides Upper Secondary School, Lucas runs a consulting company for Cloud Native tech programming tasks.
- 4 participants
- 37 minutes
13 Dec 2017
Meeting Notes: https://docs.google.com/document/d/16CEsBSSGm3sMpvB_cFnKnqqi1OxhIcyX3lVwBpIyMHc/edit#heading=h.6bl6aiqaw2a
Highlights:
- Plans for self hosting
- GA to kubeadm
- 6 participants
- 58 minutes
13 Dec 2017
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.lsmwz6edy5vg
Highlights:
- Moving the meeting time an hour earlier starting next week!
- Feedback on the cluster api from Kubecon
- Discussion about whether we should use cloud provider machine groups (ASG, MIG) for MachineSets
- API aggregation vs. CRDs
- 17 participants
- 58 minutes
12 Dec 2017
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.4d67o86ks7yr
Highlights:
- Reviewed the proposal for a new benchmarking tool
- Update on gcr.io support of manifest lists
- Running kubeadm tests on AWS
- 1.9 release status
- KubeCon summary
- 11 participants
- 40 minutes
29 Nov 2017
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.ketwyhmq5g4
Highlights:
- Discussed whether the machine API should include optional references to configmaps vs. embedded strings
- Discussed docs for cloud providers who want to support Cluster API
- Worked through a Kubernetes Enhancement Proposal (KEP) for the machine API
- Discussed the status of the control plane API
- 7 participants
- 55 minutes
28 Nov 2017
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.o52g9rnenj3v
Highlights:
- Reminders about upcoming KubeCon sessions
- Next week's meeting is cancelled due to overlap with the contributor summit
- 1.9 release status
- 1.10 planning session using the new Kubernetes Enhancement Proposal (KEP) process
- 7 participants
- 59 minutes
22 Nov 2017
Kubernetes
SIG Cluster Lifecycle
Cluster API Breakout Session
2017/11/22
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
- 5 participants
- 27 minutes
15 Nov 2017
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.n38b7dggmouq
Highlights:
- Presentation about https://github.com/kube-node followed by a demo and a long discussion
- 7 participants
- 58 minutes
8 Nov 2017
Kubernetes
SIG Cluster Lifecycle
Cluster API Breakout Session
2017/11/08
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#
- 7 participants
- 44 minutes
7 Nov 2017
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.1ci4n8ah9w90
Highlights:
- Discussion about what the sig should be doing / reworking the sig charter
- Signing up for a SIG session at KubeCon
- Getting started guides
- IPv6 support in Kubernetes / kubeadm
- Survey for when to do 1.10 planning
- Kubeadm reference doc PR
- 9 participants
- 59 minutes
1 Nov 2017
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.1xxkmbrspdw0
Summary:
- Reminder about presenting the cluster API to sig-cluster-ops tomorrow
- Discussion about the machines API
- Discussion about the control plane API
- 5 participants
- 53 minutes
31 Oct 2017
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.jjllw2gnm0ai
Highlights:
- 1.9 feature freeze
- 1.10 planning
- doc update
- cluster api update
- 5 participants
- 18 minutes
25 Oct 2017
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.nb9mbe929fm9
Highlights:
- Discussed the proposed APIs for Machines & the control plane
- Jacob did a demo of the scaffolding he's built around the proposed Machines API
- 4 participants
- 36 minutes
18 Oct 2017
Meeting Notes: https://docs.google.com/document/d/16ils69KImmE94RlmzjWDrkmFZysgB2J4lGnYMRN89WM/edit#heading=h.w4xw7yvzdgs
Highlights:
- Reviewed the pitch deck for the cluster api (join the sig cluster lifecycle mailing list to get access)
- Process going forward
- Overlap with the cluster registry
- Reviewed an early version of the proposed node api
- 10 participants
- 58 minutes
18 Oct 2017
Meeting Notes: https://docs.google.com/document/d/16CEsBSSGm3sMpvB_cFnKnqqi1OxhIcyX3lVwBpIyMHc/edit#heading=h.w4xw7yvzdgs
- 5 participants
- 53 minutes
11 Oct 2017
Meeting Notes: https://docs.google.com/document/d/16CEsBSSGm3sMpvB_cFnKnqqi1OxhIcyX3lVwBpIyMHc/edit#heading=h.ouxlycri8nmz
Discussed status of self hosting dependencies: daemonset surge updates and kubelet checkpointing.
- 3 participants
- 17 minutes
4 Oct 2017
Meeting Notes: https://docs.google.com/document/d/16CEsBSSGm3sMpvB_cFnKnqqi1OxhIcyX3lVwBpIyMHc/edit#heading=h.f6td0r73moxc
Highlights:
- Short meeting today!
- Brief updates on blockers for self-hosting
- Discussed where to find the HA proposals to review for next week
- 5 participants
- 10 minutes
26 Sep 2017
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.d8erdtdrdkzg
Highlights:
- Starting the 1.9 planning process
- 1.8 blog post ready for review
- DNS in kubeadm
- Using dynamic kubelet configuration in kubeadm
- Adding contributors for the kubeadm repository
- Upgrade tests for 1.8
- Testing
- 10 participants
- 58 minutes
9 Aug 2017
Meeting Notes: https://docs.google.com/document/d/16CEsBSSGm3sMpvB_cFnKnqqi1OxhIcyX3lVwBpIyMHc/edit#heading=h.fmpkgj4c8u4h
The agenda for the meeting was a deep dive into the checkpointing proposal, with representatives present from sig-auth and sig-node in addition to sig-cluster-lifecycle.
- 8 participants
- 55 minutes
8 Aug 2017
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.bf0asbysznm2
Highlights:
- Update on progress towards 1.8 features
- Discussed how we should be represented in SIG-PM
- Check out https://github.com/heptio/sonobuoy
- Demo of the kubeadm upgrade command
- How to get started contributing to the SIG
- 12 participants
- 51 minutes
1 Aug 2017
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.xy0lixlihmgr
Highlights:
- kubeadm adoption working group has created a list of blockers
- Certificates and certificate authorities (how many do we need?)
- Cluster API: new breakout meeting will be scheduled
- Should we promote the kubeadm API to v1beta1
- Feature freeze today!
- kops tests failing?
- 11 participants
- 57 minutes
25 Jul 2017
Meeting Notes: https://docs.google.com/document/d/17J496IR2tXKw7k97fxwz2KUWOf9rpBD3pIEsmDiJQSw/edit#heading=h.uyqetzfp6u9a
Highlights:
- Deadline for feature issues is next week.
- Upgrading to etcd 3.1.10 for the 1.8 release
- Working with sig-node and sig-apps on checkpointing & daemonset upgrade strategies to enable self hosting
- kubeadm adoption group formed with the initial meeting immediately following this meeting
- Thinking about more official structure around sig membership and committers
- kubeadm can now consume CI images
- kubeadm upgrade proposal out for review
- Folks are starting to work on a common cluster API; more details will be sent to the SIG mailing list
- 14 participants
- 56 minutes
14 Nov 2015
Want to view more sessions and keep the conversations going? Join us for KubeCon + CloudNativeCon North America in Seattle, December 11 - 13, 2018 (http://bit.ly/KCCNCNA18) or in Shanghai, November 14-15 (http://bit.ly/kccncchina18)
SIG Cluster Lifecycle Intro – Justin Santa Barbara, FathomDB & Lucas Käldström (Any Skill Level)
The Cluster Lifecycle SIG is the Special Interest Group responsible for building the user experience for deploying and upgrading Kubernetes clusters. Our mission is examining how we should change Kubernetes to make it easier to operate. Since the group's formation we have primarily focused on creating kubeadm, a streamlined installer tool and building block to simplify the installation and upgrade experience, and on enhancing kops, the easiest OSS way to get a production-grade Kubernetes cluster up and running in AWS. We have recently begun building a Cluster API to provide an abstraction of machines across different deployment environments, along with a common control plane configuration. In this introduction session, we will present the SIG's mission statement, review recent accomplishments, and discuss our future plans, where you are very welcome to contribute to the discussion. We will also focus on how new contributors can get involved in helping shape the future of Kubernetes' cluster lifecycle management.
About Justin
Justin is one of the Kubernetes sig-aws leads and started the kops project, so he loves to talk about how to install and operate Kubernetes, about all things Kubernetes-on-AWS, or about other clouds (particularly GCP, having just joined Google!)
About Lucas
Lucas is a passionate Kubernetes Maintainer and Certified Kubernetes Administrator who is excited about all things cloud native. Lucas has been engaged in Kubernetes work for about two years now, involved in efforts like porting Kubernetes to multiple platforms, getting Minikube off the ground, being a core contributor in SIG Cluster Lifecycle, and maintaining kubeadm. Besides Upper Secondary School, Lucas runs a consulting company for Cloud Native tech programming tasks and runs the official CNCF & Kubernetes meetup in Finland. Lucas has spoken at KubeCon in Berlin and Austin previously, and was awarded the Top Cloud Native Ambassador of 2017 together with Sarah Novotny.
Want to view more sessions and keep the conversations going? Join us for KubeCon + CloudNativeCon North America in Seattle, December 11 - 13, 2018 (http://bit.ly/KCCNCNA18) or in Shanghai, November 14-15 (http://bit.ly/kccncchina18)
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
SIG Cluster Lifecycle Intro – Justin Santa Barbara, FathomDB & Lucas Käldström (Any Skill Level)
The Cluster Lifecycle SIG is the Special Interest Group that is responsible for building the user experience for deploying and upgrading Kubernetes clusters. Our mission is examining how we should change Kubernetes to make it easier to operate. Since the group's formation we have primarily focused on creating kubeadm, a streamlined installer tool and building block to simplify the installation and upgrade experience, and enhance kops, the easiest OSS way to get a production-grade Kubernetes cluster up and running in AWS. We have recently begun building a Cluster API to provide an abstraction of machines across different deployment environments along with a common control plane configuration. In this introduction session, we will present the SIG's mission statement, review recent accomplishments, and discuss our future plans, where you are very welcome to contribute to the discussion. We will also focus on how new contributors can get involved in helping shape the future of Kubernetes' cluster lifecycle management.
About Justin
Justin is one of the Kubernetes SIG AWS leads and started the kops project, so he loves to talk about how to install and operate Kubernetes, all things Kubernetes-on-AWS, and Kubernetes on other clouds (particularly GCP, having just joined Google!).
About Lucas
Lucas is a passionate Kubernetes maintainer and Certified Kubernetes Administrator who is excited about all things cloud native. Lucas has been engaged in Kubernetes work for about two years now and has been involved in efforts like porting Kubernetes to multiple platforms, getting Minikube off the ground, serving as a core contributor in SIG Cluster Lifecycle, and maintaining kubeadm. Besides attending upper secondary school, Lucas runs a consulting company for cloud native programming work and runs the official CNCF & Kubernetes meetup in Finland. Lucas has previously spoken at KubeCon in Berlin and Austin, and was awarded Top Cloud Native Ambassador of 2017 together with Sarah Novotny.
Want to view more sessions and keep the conversations going? Join us for KubeCon + CloudNativeCon North America in Seattle, December 11 - 13, 2018 (http://bit.ly/KCCNCNA18) or in Shanghai, November 14-15 (http://bit.ly/kccncchina18)
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
- 9 participants
- 39 minutes
14 Nov 2015
Want to view more sessions and keep the conversations going? Join us for KubeCon + CloudNativeCon North America in Seattle, December 11 - 13, 2018 (http://bit.ly/KCCNCNA18) or in Shanghai, November 14-15 (http://bit.ly/kccncchina18).
SIG Cluster Lifecycle: kubeadm Deep Dive – Alexander Kanevskiy, Intel, Timothy St. Clair, Heptio, & Luke Marsden (Intermediate Skill Level)
About Alexander
Alexander has 20 years of experience creating Linux distributions for different market segments, along with SCM, infrastructure, release engineering, continuous integration & delivery, and various build systems. Previous projects include BlackCat Linux, ASPLinux, OpenWall, Maemo, MeeGo, Tizen, and Yocto. He is currently employed by Intel's Open Source Technology Center as an Architect in the Cloud Technologies group.
About Luke
Luke is the CEO and founder of Dotmesh. He is also a Kubernetes SIG lead for SIG Cluster Lifecycle, where he was involved in developing the first version of kubeadm. He previously worked on developer experience at Weaveworks, where he spoke and taught at conferences, meetups, and trainings on cloud native topics such as container networking, monitoring with Prometheus, continuous delivery, and OpenTracing. Before that he was the CTO and founder of ClusterHQ, where he got involved right at the start of the Docker and Kubernetes journey, collaborating closely with Docker and others to develop the first Docker volume plugin mechanism and build the first implementation of container persistence, Flocker.
About Timothy
Timothy St. Clair is a Staff Software Engineer at Heptio and is a core contributor to the Kubernetes project, a Steering Committee member, and a lead of SIG Cluster Lifecycle. Timothy is also a PMC member of the Apache Mesos Project, and has worked on the development and integration of various open-source distributed systems projects, including Kubernetes, Mesos, Tachyon, Hadoop, and Condor.
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
- 7 participants
- 33 minutes