From YouTube: Kubernetes SIG On-Prem Meeting 20170301

Description

Agenda/minutes:
https://docs.google.com/document/d/1AHF1a8ni7iMOpUgDMcPKrLQCML5EMZUAwP4rro3P6sk/edit#heading=h.nrh4k3ck5icu

Mailing list:
https://groups.google.com/forum/#!forum/kubernetes-sig-on-prem

Notes from the meeting:
Agenda:
Demo/Presentation: K8s via Kargo using Digital Rebar on hybrid infrastructure, by Rob Hirschfeld (RackN) & Greg Althaus (RackN)

## Demo:
https://github.com/digitalrebar/digitalrebar
OSS project called Digital Rebar: http://rebar.digital
Deploying Kubernetes via Kargo, both on-prem and in the cloud (a rough sketch of the Kargo step follows below)
Bare metal: BIOS discovery, laying down the OS, etc., then pushing workloads, which are Kubernetes deployments
Digital Rebar itself is a set of containers all managed together
https://rackn.files.wordpress.com/2016/02/hwmaas.jpg
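
Not part of the demo notes, but as a rough illustration of the Kargo step described above: the Python sketch below shells out to Kargo's main Ansible playbook (cluster.yml) against an inventory that a provisioner such as Digital Rebar would have generated once the nodes have an OS. The paths, inventory file name, and SSH key are assumptions for illustration only.

```python
# Minimal sketch of driving a Kargo (now Kubespray) deployment once nodes are provisioned.
# Paths, inventory name, and SSH key are assumptions, not taken from the demo.
import subprocess

KARGO_DIR = "/opt/kargo"                      # hypothetical checkout of the Kargo repo
INVENTORY = "inventory/rebar-generated.cfg"   # hypothetical inventory emitted by the provisioner

def deploy_cluster():
    """Run Kargo's main playbook (cluster.yml) against the generated inventory."""
    subprocess.run(
        [
            "ansible-playbook",
            "-i", INVENTORY,
            "cluster.yml",
            "-b",                              # become root on the target nodes
            "--private-key", "~/.ssh/id_rsa",  # assumed SSH key used for node access
        ],
        cwd=KARGO_DIR,
        check=True,
    )

if __name__ == "__main__":
    deploy_cluster()
```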

## Federation state on premise (mattymo)
What is current state?

https://github.com/kubernetes/kubernetes/issues/40536

What gaps need to be closed to get decent UX?

https://github.com/kubernetes/kubernetes.github.io/issues/2197
https://github.com/kubernetes/kubernetes/issues/39271

Second Agenda Item:
Federation state on premise
Are members aware of federation on-prem? It requires controlling an external IP for the federation control plane and connecting to an outside DNS provider.
Kubefed needs to be updated to support this; it isn't in the CLI syntax yet, though the provider support is there.
Should we use NodePort or the Mirantis external IP controller for exposing the federation API server? (a rough sketch of the NodePort option follows after these notes)
How are you solving this problem now?
Most attendees are not running federation on-prem.
Some toy experiments using Kargo to do cross-cluster setups, but with regard to networking this is abysmal and amounts to a "don't do that."
The most likely use case is private/public hybrid: local bare metal, and once you reach its limit you go on-demand with your cloud vendor and use the cloud for up/down scaling. You don't want latency between on-prem and cloud, and with CNI you should be able to do ingress between clusters; it just doesn't work yet unless you point both on-prem and cloud to Google.
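
As a rough sketch of the NodePort option discussed above (not an agreed-upon approach from the meeting): the snippet below uses the official Kubernetes Python client to expose a federation API server through a NodePort Service on an on-prem host cluster. The namespace, selector labels, and port numbers are illustrative assumptions; a real kubefed-created deployment may use different ones, and the point in the discussion was that kubefed itself does not yet generate this wiring.

```python
# Sketch: expose the federation API server via a NodePort Service on an on-prem cluster.
# Namespace, selector labels, and ports are illustrative assumptions, not from the meeting.
from kubernetes import client, config

def expose_federation_apiserver():
    config.load_kube_config()  # uses the current kubeconfig context (the host cluster)
    core = client.CoreV1Api()

    svc = client.V1Service(
        metadata=client.V1ObjectMeta(
            name="federation-apiserver-nodeport",
            namespace="federation-system",             # assumed kubefed namespace
        ),
        spec=client.V1ServiceSpec(
            type="NodePort",
            selector={"app": "federation-apiserver"},  # assumed pod label
            ports=[client.V1ServicePort(
                port=443,          # service port for the secure API endpoint
                target_port=443,   # container port on the apiserver pod
                node_port=30443,   # arbitrary node port reachable from outside the cluster
            )],
        ),
    )
    core.create_namespaced_service(namespace="federation-system", body=svc)

if __name__ == "__main__":
    expose_federation_apiserver()
```

A node address plus that node port would then serve as the externally reachable federation endpoint (the external IP that, per the notes, currently has to be managed outside of kubefed).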

Group to discuss more next week