From YouTube: Webinar: What’s New in Linkerd 2.7
Description
Linkerd 2.7 significantly advances the promise of zero-trust security in Kubernetes. This security-themed release adds support for integrating Linkerd's mutual TLS infrastructure with external certificate issuers such as Vault and cert-manager, improves GitOps workflows by allowing Linkerd manifests to be generated without secrets, and makes it easy to rotate TLS credentials. The features in this release also improve dashboard performance, the usability of the Helm charts, and much more.
Presenter: Oliver Gould, Lead Creator of Linkerd and CTO @Buoyant
Taylor: Alright, I think we're going to go ahead and get started. I'd like to thank everyone who is joining us today. Welcome to today's CNCF webinar: What's New in Linkerd 2.7. I'm Taylor Wagner, the Operations Analyst here at the CNCF, and I'll be the host today. We'd like to welcome our presenter, Oliver Gould, the lead creator of Linkerd and CTO at Buoyant. Before we get going, I'd like to go over a few housekeeping items.
First, as an attendee of this webinar, you are not able to speak out loud, but you can communicate with us via the chat and the Q&A box. We ask that if you have questions, you ask them in the Q&A box rather than the chat window, so please direct your questions there. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. So please don't add anything to the chat or to the questions that would be in violation of the code of conduct.
Oliver: Okay, so, some background on the project. We'd been working in this space actually before we started working on Linkerd. My background was as an infrastructure engineer at Twitter, working on much the same problems, but in a library-based approach rather than an application-based approach. That led us to start working on Linkerd in early 2016, and about a year later, after getting some production users, we donated the project to the CNCF.
The CNCF has been a great home for the project over the past three years now. Then, at the end of 2018, we did a big overhaul of the project, sunsetting a lot of the JVM stuff that we started with and moving to the new approach I'll mostly be discussing today.
So let's start with what Linkerd does, or why anyone might use it. Kubernetes, and in general the cloud-native approach, gives you lots of control and visibility into your workloads: whether they're running or not, where they came from, and a lot of the container side of things. What they don't give you is much visibility into the traffic between those workloads.
The other main draw for Linkerd is its automatic security feature set, and so we do things like transparent mTLS, and there's a whole lot there that is really valuable and works out of the box. Additionally, all of that has to work reliably: if you're introducing something like a service mesh, it can't break things.
I guess there are still people out there who are getting by without this (we've been working on this for a couple of years now), but it really comes in with the microservices approach. The distinguishing factor of microservices is that they are small components that communicate over the network, usually in some sort of RPC fashion, and where the service mesh comes in is that we add a sidecar proxy.
The architecture looks somewhat like this, and of course this is an evolving picture, but we have a set of control plane components that are all written in Go. We did that so we can be tightly coupled with the Kubernetes APIs, using things like client-go. We currently vendor things like Prometheus and Grafana so we can have an out-of-the-box experience that just works. And then we've written a data plane proxy in Rust, and this has been a big part of our investment over the past few years: building a service mesh proxy that's purpose-fit for this use case. We use things like gRPC for all of the communication within the system, so this is really a cloud-native approach, as best we can make it.

We went into this with a lot of lessons from Linkerd 1.x, and those lessons were that this had to be a zero-config experience. It needs to be something that you can drop into your Kubernetes cluster without changing the application and start getting value out of it. This can't be a six-month-to-a-year "we have to go adopt a service mesh" journey; it needs to be an incremental thing, where we can start to get visibility and start to enhance security while we figure out all the things that we want to do, and can do, with a service mesh. Part of being a lightweight addition to your cluster means that it has to be really low-overhead: we don't want to be adding a lot of memory overhead or CPU overhead, and especially not latency.
So what does this mean for your application? First and foremost, what we find is that load balancing is the sharpest tool in the shed. I don't know if you're familiar with Kubernetes' load balancing approaches, but typically the way services work is that this is all managed through iptables, which means we do connection-level load balancing, and that's really not an efficient way to do load balancing, especially for modern applications.
Instead, Linkerd's proxy does request-level load balancing with what we call an EWMA load balancer (exponentially weighted moving average). That allows us to make sure that if you have individual pods in your cluster that are slow because they have noisy neighbors, or that are failing because of some bad configuration, the proxy is able to eliminate them from consideration and we only send traffic to the healthy endpoints. This is all powered by Kubernetes primitives like services; we're not introducing new service discovery complexity here.
B
This
has
real
effects.
This
isn't
just
about
kind
of
theoretical
performance
improvement.
Here's
a
test.
We
ran
with
the
various
load,
balancing
algorithms-
and
these
are
all
request
level
load,
balancing
algorithms
still,
but
we
see
by
switching
to
a
Yuma
balancer.
It
can
really
improve
success
rate.
So
if
you
had
a
timeout
of
say
one
second
for
your
requests,
what
a
round-robin
load
balancer
do
would
give
you
a
95%
success
rate
right
using
a
better
low
balancing
algorithm.
The other really important out-of-the-box feature of Linkerd is that we automatically establish mutual TLS between every node in the mesh. That's to say, if we have the sidecar on each side of a connection, it all gets TLS without any application participation, and that's not really just about encryption. We talk to folks who say, "Well, I don't need TLS in my cluster, because I trust my cloud provider, and I'm not dealing with healthcare data." But mutual TLS is also about workload identity.
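As a hedged sketch of how you can see this in practice (the namespace and deployment names here are placeholders, and this assumes the workloads are already meshed), the CLI can show you whether traffic is actually mTLS'd:

```shell
# tap streams live request metadata; each line includes a tls= field
# indicating whether the proxy-to-proxy hop was mutually authenticated.
linkerd tap deploy/webapp -n booksapp | grep "tls="

# "edges" summarizes which workload identities are talking to each other,
# based on the service-account-derived identities described above.
linkerd edges deploy -n booksapp
```

Both commands read live state from the control plane, so they require a running cluster with Linkerd installed.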
Today we bootstrap all of this from Kubernetes service accounts, so we use, again, Kubernetes primitives and the identity model shipping within Kubernetes to bootstrap service identity. Once we do that, we rotate certificates; today that's once a day, but I think it could become much more frequent. And again, the keys never leave the pod, so there's no central risk there, and we rotate these things very frequently.
And so this was the banner feature for 2.7: we made it possible to bootstrap all of this with cert-manager. Previously, as part of the Linkerd installation, it would generate trust roots and do some things for you, but that's not really a great way to run in production, where you want a real chain of trust around your certificates. cert-manager lets you integrate with things like Vault and various cloud providers, and so that's a really exciting addition.
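As a rough sketch of what that integration can look like (this assumes cert-manager is already installed with a CA issuer named "linkerd-trust-anchor" in the linkerd namespace; the API version and flag names reflect the Linkerd 2.7 / cert-manager era and may differ in your versions):

```shell
# Have cert-manager issue and continuously renew Linkerd's identity issuer
# certificate into the secret the control plane reads it from.
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: linkerd-identity-issuer
  namespace: linkerd
spec:
  secretName: linkerd-identity-issuer
  duration: 24h
  renewBefore: 1h
  issuerRef:
    name: linkerd-trust-anchor
    kind: Issuer
  commonName: identity.linkerd.cluster.local
  isCA: true
  usages:
  - cert sign
  - crl sign
  - server auth
  - client auth
EOF

# Install Linkerd telling it to read issuer credentials from that secret
# rather than generating them (so no private keys end up in your manifests).
linkerd install --identity-external-issuer | kubectl apply -f -
```

Because the issuer credentials live only in the cluster secret, the rendered manifests are safe to check into git, which is what enables the GitOps workflows mentioned in the release notes.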
Let us know what other things you'd like to see in that space; it's a really cool feature. We do all this without conflicting with, or imposing on, your requirements around ingress or application TLS. So if you have ingress TLS, we will transparently proxy that through without re-terminating it in the mesh, and the same goes for application traffic.
Today this works for HTTP and gRPC traffic, but we're working on making it feasible for arbitrary protocols, and that work is in flight. And again, like most of our features, there are no application changes: you don't have to do anything to your code to start participating in this, other than adding a few annotations to your workloads.
This is my favorite Linkerd feature that we never talk about, and I didn't find a good image for it, unfortunately. Another part of the transparent upgrading that we do is that we communicate everything between meshed pods over HTTP/2, again with mTLS. What that means is that if you have an HTTP/1 application that would typically open many, many connections between pods, we can do all of that over a single connection.
That means we can amortize all the costs of TCP handshakes, and it means that things like mTLS are not a significant performance overhead, because we don't have to do session establishment repeatedly; that happens only a few times per hour for each edge between pods, and that's it. And again, no application changes: nobody knows that HTTP/2 is involved from your application's point of view. Whether you're using HTTP/2 or HTTP/1.1, we just merge it all onto one big fat pipe. It's great.
Now, on to the visibility features. We've built on Prometheus from the ground up, and this is in direct contrast to the Envoy-based meshes; Envoy did not really start with Prometheus support, it started with statsd support and has been moving towards Prometheus support over the past couple of years. This is something we realized was really important for a Kubernetes-native mesh, so we started with it, and we do it in order to give every pod in your fleet a uniform level of visibility. So regardless of whether a workload is written in-house or is third-party software, we can get the same golden metrics: latency, success rates, request counts, failure counts, all of the interesting things about your traffic. This is HTTP- and gRPC-aware, so we know what an HTTP success code is versus a failure, same thing for gRPC, and we can annotate the metrics with lots of metadata.
In addition to that metadata, we pull a lot of the Kubernetes workload metadata from discovery, so when your proxy is talking to another pod, we can tell you exactly which pod it's talking to, what service that pod is part of, and a lot of the other Kubernetes-centric metadata. We've also done work to make sure that we give you raw histograms, which is kind of an esoteric feature, but what it means is that there's no averaging of latencies in the system.
We did work, I think last year, to integrate with OpenCensus, and that lets us, if your application uses OpenCensus with something like Jaeger, have Linkerd participate in that. So here in this screen cap there's another application running, and you can see all of the Linkerd hops in that application's traces. However, distributed tracing does require application changes.
So it's not really within Linkerd's wheelhouse of out-of-the-box observability. However, we do have something that we call tap, which is an ad hoc tracing feature, and it can be used without any application change. The way it works is that at runtime, as your system is running, you can connect through the control plane and say, "show me requests that look like this," and we can actually start to collect data from the fleet of pods at runtime to give you ad hoc tracing without having to make any application change.
Another of our awesome features is traffic split. This is something we've been working on with the folks at SMI, the Service Mesh Interface. We're working with folks from several different groups, including Microsoft, HashiCorp, and Solo, to define APIs that are core to the service mesh, regardless of implementation.
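A TrafficSplit is just a Kubernetes resource. As a hedged sketch (the service names and namespace here are hypothetical, and the SMI API group/version must match whatever your cluster has installed), a canary split might look like:

```shell
# Send roughly 10% of traffic addressed to the "books" apex service
# to a canary backend; meshed clients keep calling "books" unchanged.
kubectl apply -f - <<EOF
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: books-split
  namespace: booksapp
spec:
  service: books          # the apex service clients address
  backends:
  - service: books        # current version
    weight: 900m
  - service: books-canary # canary version
    weight: 100m
EOF
```

Because the split is expressed against the apex service, shifting weights is a pure configuration change; no client needs to know the canary exists.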
Okay, before I go further, are there any questions in the Q&A that we want to look at? Looks like there may be.
So Deepak asked: when should we choose Linkerd over Istio? I wasn't planning on talking about that explicitly. I think that Istio is a great solution for lots of complex policy problems, and we do find, kind of consistently, that folks who adopt Istio take quite a long time to adopt it. So if this is part of a longer architecture effort, where you can really spend the time to get it right and really dig in and learn a lot of organizational things, that can make sense.
Let's see, there are some other questions here; I'll get back to those towards the end. I'm going to take a quick deviation from my slides to show a demo, if that's acceptable to everyone. Okay. So, I have taken the liberty of deploying an app in Kubernetes in advance, and it's this typical bookstore-type application. I can see that it's all running and healthy, but that's about all I really know about the application.
B
You
know
I
could
probably
try
to
look
through
logs,
but
there's
not
a
lot
there,
and
so
I
want
to
show
you
what
link
Reedy
can
do
for
this
application
in
just
a
matter
of
minutes
and
so
for
the
first
things.
First,
we
I'm
gonna
install
a
link
ready
and
rather
than
install
stable,
t7
I'm
gonna
install
the
edge
release
which
we
released
yesterday
and
I
kind
of
skipped.
Over
this
earlier
we
do
edge
releases
weekly,
and
so
we
have
a
very
kind
of
regular
release
process
off
of
master
and
then
we
really
stable
area.
B
We
edges
weekly,
so
I've,
already
upgraded
to
this
week's
edge
teams
and
verify
that
and
I
don't
have
a
server,
and
so
the
first
thing
I'm
going
to
do
is
link
before
I
install
it.
I'm
gonna
check
my
cluster,
and
so
sometimes
your
kubernetes
cluster
can
be
configured
in
a
way
such
that
Linkwood
II
will
not
just
work
in
it.
Unfortunately,
and
some
cloud
providers
or
some
bare
metal
installs
are
effectively
especially.
B
B
B
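The demo steps above can be sketched roughly as follows (the install script URL is the standard one for edge releases; as always, try this against a test cluster first):

```shell
# Fetch the latest edge-channel CLI.
curl -sL https://run.linkerd.io/install-edge | sh

linkerd version --client      # confirm which CLI version we got
linkerd check --pre           # can Linkerd work on this cluster at all?
linkerd install | kubectl apply -f -   # render the manifests and apply them
linkerd check                 # wait for the control plane to come up healthy
```

The pre-check is what catches the awkward cloud-provider and bare-metal configurations mentioned above before anything is installed.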
Alright, so we created a couple of CRDs, and then a whole bunch of config maps and things like that, and a bunch of role bindings. This can also be split up, so that you can separate out the role binding and cluster role pieces from the user privileges. And now I'm running linkerd check, and linkerd check will, again, make sure that the cluster came up successfully.
Great, so we've got Linkerd running successfully, but we have no stats around the books app. So how do we add Linkerd to the books app? Well, normally in production you would go update some file, check it into git, and roll that out. For demos, I'm just going to run this handy-dandy command: I've annotated the namespace so that Linkerd will be injected into everything in that namespace, and then I can do a rollout restart. This will take a couple of seconds, and see: where we used to have one container per pod, we now have two.
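The annotate-and-restart step he describes can be sketched as follows ("booksapp" is the demo namespace; any meshed namespace works the same way):

```shell
# Mark the namespace so the Linkerd admission webhook injects the
# sidecar proxy into every pod created there.
kubectl annotate namespace booksapp linkerd.io/inject=enabled

# Recreate the pods so the injection actually happens.
kubectl rollout restart deploy -n booksapp

# Each pod now carries a second container, the linkerd-proxy sidecar;
# the READY column shows 2/2 once injection has taken effect.
kubectl get pods -n booksapp
```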
And we get stats from those things as well; it's not just cool to look at. We see a success rate, and it's not perfect here, so we can do more introspection. But immediately, and all we've done is add an annotation and restart the pods, we now have traffic data: we can see success rates and we can see latencies. The success rates are not perfect, and so we can dig into that.
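The stats he is looking at can be pulled from the CLI too; a sketch (deployment names "books" and "webapp" are from the demo app and are otherwise placeholders):

```shell
# Golden metrics per deployment: success rate, RPS, latency percentiles.
linkerd stat deploy -n booksapp

# Narrow to one edge: traffic from webapp into books only.
linkerd stat deploy/books -n booksapp --from deploy/webapp
```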
B
We
can
also
use
things
like
link,
ready
tap
and
this
will
just
dump
out
raw
requests
metadata.
We
can
do
I,
think
yeah.
We
can
get
really
high
granularity
jason
structured
data
here,
so
you
can
start
to
script
over
this
stuff.
If
you
want
to
there's
some
cool
jqs
things,
you
can
do
there
for
sure
and
and
there's
a
there's,
a
bunch
more
of
the
CLI,
but
before
we
go
further.
B
B
B
A
B
B
B
B
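The tap-plus-jq scripting he alludes to might look like this (deployment and namespace names are from the demo; the JSON field layout is version-dependent, so treat the jq step as a starting point):

```shell
# Human-readable live stream of requests through the webapp deployment.
linkerd tap deploy/webapp -n booksapp

# The same stream as structured JSON, one event per line, ready for jq.
linkerd tap deploy/webapp -n booksapp -o json | jq .
```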
One sec. So, the roadmap coming up: in 2.7 we just shipped a bunch of cert-manager features. 2.8 is coming up, hopefully towards the end of the month; it was going to be after KubeCon, but without KubeCon we might get it done earlier, and that is going to bring a bunch of stability improvements, among other things.
I mentioned earlier that the Prometheus instance can become quite large in production, and so most folks tend to have their own Prometheus installs and don't want to duplicate that with ours. So we're working on making it possible for folks to just plug their own Prometheus into Linkerd, so we don't have to ship and run a second one. That'll be really cool.
A much bigger project we just started on is multicluster, and that will allow us to start bridging clusters and doing cross-cluster routing and policy, which is super exciting. We've just shipped some of the first pieces of that in the codebase, but it'll be rolling out slowly over the next couple of stable releases. It's a great time to come get involved with that, and we've written some blog posts on the parts of it that we understand well. So if that's exciting to you, please get in touch, because there's a lot more to talk about there.
A lot of my focus right now is on getting mTLS for everything. I said that today it's HTTP, and here you see my defaults; I want to extend that so that all TCP traffic is automatically mTLS'd.
Additionally, we get requests for exotic protocol support, things like Kafka, Redis, and others, I'm sure, and this is a big opportunity where I'm looking for folks to help contribute these things to the code. It's not technically challenging; it's really just figuring out what the value proposition is for each of these protocols, and helping us, and working with us, to integrate that into the system. And all of this is on GitHub; all of this is open source.
We have a ton of contributors from a ton of organizations, and so if this is something that appeals to you, and we are lacking a feature, or lacking docs, that you think we need, please come get involved and ask questions. We're just in the process of launching a new RFC process, and the intent there is to make it easier for folks who are not yet well integrated with the project to start proposing bigger changes.
We find that some folks will kind of walk into the issues wanting to think through large new features, and we think the RFC process is a little bit more structured way to deal with that. We have a great community on Slack, with lots of questions and lots of people answering questions, so I encourage you to join our Slack. We have mailing lists too; they're not so active, but they're good for information.
We do regular, periodic community calls, and we've done things like formal security audits; Cure53 did a really great audit of our code base last year, and we're working with the community right now on threat modeling of the underlying system, which is really exciting, and we couldn't do it without the CNCF, that's for sure. Okay, and all that said, I think I've talked enough and exhausted my prepared thoughts, but hopefully there are some more questions now. Paul wants to know if we're hiring; that's a good question.
We are always opportunistically hiring the right people for the needs that we have, so talk to us, but I expect we'll be hiring more later in the year or next year, after some Buoyant business things progress. Stay in touch, for sure. The next question: how do we export the configs to [inaudible]-style deployments?
I'm not sure I understand what that style of deployment is, but I'm willing to learn. And Deepak has another one, actually: in which cases is it not a good idea to use a service mesh, and does cost play a role in this decision? That's a really good question, and I could probably talk for 40 minutes on it. I think the short answer is that all mature microservice deployments end up having something like a service mesh.
The question is whether it's decoupled from the application itself or whether it's integrated as a library. When I was at Twitter working on this, we used Finagle, which was a library, but it was effectively a service mesh: it's a rich, smart data plane that knows how to talk to service discovery, that knows how to learn timeouts and policy information, and all of that.
Oh wait, okay, so there's a question about GitOps-style deployments, and yes, that's a big focus of ours right now. We have been focused on the Helm integration; Helm is kind of the de facto standard there right now, and I think there are a lot of other opportunities for doing more integrations there. For instance, the cert-manager integrations that we did in 2.7 were done specifically so that we could support GitOps-style workflows, where we don't want folks to have to check in Linkerd's signing credentials.
I think I'm about out of good answers, unless there are easier ones. I don't know much about cloud identity services; I'm approximating, I think. So, reading into Deepak's question here: with so many security services like Cloud Armor, IAP (Identity-Aware Proxy), and so on, how does using a service mesh play into securing the traffic? I would view those as different layers in a complete solution.
At the edge, you're forwarding TCP connections to an ingress, and we want there to be smart ingresses that deal with OAuth and the various authentication systems that you may need there. Where we see a service mesh really coming in is in the workload-to-workload, service-to-service communication: how do we extend the identity model to deal with services, and not just people? So I'd view them both as complementary components in a complete solution.
Good question. Okay, and if there are no new questions popping up, I think I have exhausted my voice. Oh, one more: is it possible to expose Linkerd metrics without the use of the internal Prometheus in the current version, 2.7, or will this be part of the next release? Linkerd's metrics are automatically exposed by the proxies themselves, so you're always able to connect to a proxy on port 4191 and curl the /metrics endpoint, and you'll get all the Prometheus metrics you want from there.
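That scrape can be sketched like so (the pod name is a placeholder; 4191 is the proxy's admin port):

```shell
# Forward the proxy's admin port from one meshed pod to your machine...
kubectl port-forward -n booksapp pod/webapp-abc123 4191:4191 &

# ...then pull its Prometheus-format metrics directly, bypassing the
# bundled Prometheus entirely.
curl -s localhost:4191/metrics | head
```

This is also how an existing, externally managed Prometheus can be pointed at the mesh: scrape port 4191 on every injected pod.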
I don't think Linkerd has too much of a horse in the race of how you do installs; we want to make sure that we have generic primitives exposed so that it's possible to go do anything you want. I know Thomas, who works with us, has done some stuff with Kustomize here, but I just don't know enough about it. I would love folks in the community to come get involved here and show us recommendations.
Let us know if you find any issues. Next: is there a guide one can follow if we are running Linkerd in large deployments, three-thousand-plus pods, for scaling the control plane? I don't know; there probably are some things on the internet around that. However, I will also add that companies like Buoyant, which I work at, offer support for very special installations and things that need a little higher touch.
So if that's not appealing to you, I would say come to the GitHub or the Slack and start asking those questions there, and we can find folks in the community who have been through some of these things and can help you. We're not a top-down community; there are a lot of folks figuring these things out on the ground using this, not just us.
Will there be support for rotating the certs? So that's a hot topic of conversation; that's a really good question. I'm probably going to ask you to take that offline and come to Slack or GitHub, where there are multiple schools of thought on that, and we want to help your use case. There are some security concerns around making CA rotation easy, and so we're trying to balance the security risk there against the difficulties around managing these configs.
Up here we have a bunch of big slides with folks who are using Linkerd; some of these are Linkerd 1 users, but many of them are on Linkerd 2. Other than the ones that are up here, I don't want to out anyone's infrastructure plans, but again, I think if you come into Slack you'll find people at various companies.
And here's another really good question: any thoughts on giving Linkerd the ability to have an intermediate cert with a private key, so it can man-in-the-middle and inspect traffic for services that have to be TLS'd, like Elasticsearch? That is a fantastic question. There's someone on the team who really, really wants to do it. It kind of scares me, because it's a little bit mischievous, but I think that would be a really cool issue or RFC to open up, so if that's interesting to you, I'm not going to say no to it.
Taylor: Thanks, everybody, for all your great questions, and thanks, Oliver, for an awesome presentation, and thanks, everybody, for joining us today. The webinar recording and slides will be online later today at cncf.io/webinars if you'd like to download the slides and check them out yourself. We look forward to seeing everyone at a future CNCF webinar. Thanks so much, everybody. Thanks, Oliver!