From YouTube: Kubernetes SIG Network meeting 20210211
B
Yeah, absolutely. If anyone would like to propose a member maintainer, now's the time. Otherwise we have one project candidate, and it's from Tomo. I don't want to steal his thunder, but it's a cool utility that integrates with kubectl and adds a new command, kubectl pod-net, which parses the network status annotation and gives you some output. So it's a handy-dandy util for a little bit easier visibility into our net-attach-defs.
C
Yep, so we have the kubectl get part, so two parts, right, and then the alternate output just showing this stuff. So that's it; easy to understand, right?
C
So this is it: the pod-net command just processes our working group's network status annotation, and then that's the output.
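For context, the annotation such a tool would parse, `k8s.v1.cni.cncf.io/network-status`, is a JSON list with one entry per attached network. A minimal sketch of pulling the useful fields out of it in Python; the annotation payload, network names, and addresses below are invented for illustration, not output from the actual utility:

```python
import json

# Hypothetical value of the k8s.v1.cni.cncf.io/network-status pod
# annotation; in a real cluster Multus populates this.
annotation = json.dumps([
    {"name": "k8s-pod-network", "interface": "eth0",
     "ips": ["10.244.1.7"], "default": True},
    {"name": "default/macvlan-net", "interface": "net1",
     "ips": ["192.168.2.10"], "mac": "c2:11:22:33:44:55"},
])

def summarize(status_json: str):
    """Extract (network name, interface, ips) tuples from the annotation."""
    return [(net.get("name"), net.get("interface"), net.get("ips", []))
            for net in json.loads(status_json)]

for name, iface, ips in summarize(annotation):
    print(f"{iface:5} {name:25} {','.join(ips)}")
```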
A
That's pretty cool. How does that actually integrate with kubectl, if I may ask? Is there like a plug-in framework? Yeah.
B
Awesome, I think it's great personally. Is there anyone who has an objection?
C
The
not
yet
well,
maybe
the,
if
possible,
I
I
could
do
this
stuff,
but
still
now
I
don't
have
the
any
the
adapter
wrap
for
that,
so
the
and
then
maybe
this
should
be
the
kind
of
the
optional
field.
I
think
so
yeah,
I'm
going
to
add
the
as
the
some
format
option,
which
extending
the
more
information
and
then
the
device
import
related
stuff
should
be
in
the
this
additional
fields
by
the
way
they
are
yeah.
I
don't.
C
Is there any example output of the network status for our device-info stuff? Could you share it if you have some example?
B
All right, awesome, sweet. I'm glad to see that there's already some more stuff we can add. It looks like we have accepted it. That being the case, for maintainer candidates we've got Tomo and I, and anyone else who would like to join.
B
Any objections? If not, I think we can carry on to the next part of the agenda.
B
All right, I think it's me, I've got the first one. This is really just a PSA, but there is a CVE that's open for the protobuf library as vendored by the Kubernetes libraries. So if you've got a project that vendors the k8s libraries, you might want to, you should, look into updating it. I don't know a whole lot about the attack surface; mostly I know that it's called the "skippy peanut butter" issue and it's listed as a CVE. So just a heads up, and that's really all I've got on that.
D
Doug, Tomo, you've got the next item: container registry updates.
C
Okay,
so
so
so
the
I'm.
So
this
is
about
the
update
from
the
last
meeting.
I
mean
that
the
so
currently
that
we
so
the
many
our
repository
network
prompting
working
group,
the
repositories
the
container
image,
is
put
in
the
hour.
So
there
are
kinds
of
ideas,
my
groups
of
the
the
in
the
red
hat
and
the
docker
half,
so
that
this
is
not
the
not
for
not
their
community
repository.
C
So that's why we are looking into which container repository is feasible. The background is, you know, Docker Hub is going to have some additional policies for container images, so maybe that's not good. So we are now looking at Quay, and then the GitHub Container Registry also seems to be an option, and I added an update to the last meeting's agenda. The gist is: Quay and the GitHub Container Registry.
C
These are the two options. Quay is a GA service that we can use, and of course it's free, and the GitHub Container Registry is also free. However, it is still in beta, and GitHub has not announced what will change at GA, so GitHub may request some additional fee for the Container Registry in production. So I'm just thinking that, yeah, we could...
C
Either way, the better option is to create an account, and both Quay and the GitHub Container Registry support multi-architecture images, which matters especially for Multus: they are publishing multi-architecture images, namely x86 and Arm. So, yeah, we should choose one of them.
B
About it being GA: with GitHub there are still some questions about it. I would say that one advantage of GitHub is that we've already kind of got everyone set up with the permissions in GitHub, so theoretically it would be kind of a seamless management of those images. But I don't know if that's a good enough trade-off against, you know, the knowns of the GA service.
C
So, about the GitHub Container Registry: I tested it two or three times in my personal repository, and it seems to work. It also integrates with GitHub Actions, so if you're implementing some CI workflow in GitHub Actions, it's easy to push the resulting container images to the GitHub Container Registry. And also, yeah...
C
The GitHub Container Registry allows access permissions per user, unlike Quay: once we create a Quay account, it only creates one user account. So we cannot... or maybe we should create a Quay group, and then everyone would need an account in Quay. So from the user-experience point of view, the GitHub Container Registry seems to be easier to maintain, from an administrative point of view.
C
Yep, and also, currently GitHub is saying that this is free, but I don't know what their future pricing is, really. Yeah.
A
I mean, we can always move over to something else in the future too, I guess. If it's integrated with our, you know, pipelines and stuff like that, GitHub Actions, maybe it's a little harder, but...
C
So how about we use the GitHub Container Registry first, and then see whether GitHub keeps their policy open-source friendly or not? Sounds good to me.
B
Okay, yeah, I think it's worth a trial period, and if people start implementing it and we're not happy with it, we can always switch it over. Sweet. Tomo, thanks for digging into this one and pulling out the nitty-gritty here, appreciate it. Okay.
D
All right, thanks Tomo. Doug, back to you with the logo.
B
All right, so Tomo has been putting together some preliminary work to help us, you know, promote our community, and Tomo has a couple of things he's been working on, such as outlining what we might put in some website content, and also some ideas for some YouTube videos, so that we can give kind of a newbie-friendly introduction to some of our technologies, which, as we all know, can sometimes be fairly dense.
B
Although when you get down to using them, I don't think they're terribly complicated to use. So one thing I thought might look nice is to put together a logo, so I took an initial crack at drawing up some of my own and I threw them in a doc here.
B
I would love to get any feedback, or I would also love it if other people want to propose some as well, but I think that it would just provide kind of a cohesive branding for the group, and then, I guess beyond that, it'd also be nice for the laptop-sticker factor.
B
So keep in mind as you look at it that it would need to work as a sticker, because I already want one; that is basically what I've gotten out of that one. Yeah, feel free to comment on that doc or to bring some more suggestions to the table. I'm, I'm...
E
Yeah, so I was using OVS CNI and Whereabouts, and I was about to actually try to configure some DNS inside the pods, connecting their networking through Multus and network attachment definitions. But for some reason the DNS configuration that I have in the ipam section in the net-attach-def definition doesn't appear inside the pod. So I'm not sure why; do you have any quick answers for that? Sorry.
B
And you know what, I can help take a look at that. Let me pull up an issue; I think there's a regression in the latest edition. So, one thing about how Whereabouts implements it: the way that it sets DNS is borrowed line for line from static CNI, and Whereabouts implements everything that the static IPAM implements, yeah.
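As a reference for where that DNS block sits: in a net-attach-def's CNI config it goes under `ipam`, which is the section both the static plugin and Whereabouts read. A sketch, with invented names, interfaces, and addresses:

```json
{
  "cniVersion": "0.3.1",
  "name": "example-macvlan",
  "type": "macvlan",
  "master": "eth1",
  "ipam": {
    "type": "whereabouts",
    "range": "192.168.2.0/24",
    "dns": {
      "nameservers": ["192.168.2.53"],
      "search": ["example.internal"]
    }
  }
}
```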
B
And it actually worked. All right, let's see if I can... I'm just looking through the issues here. I think that we have a spot where we can track this particular problem.
C
So, one question: in this case, is the DNS configuration in both CNIs, I mean, in both the cluster network and the network attachment definition?
C
So if you're using Multus, you must configure the default network first, right, and then you may add the additional network as a network attachment.
C
Which network configuration has the DNS configuration?
E
The first one, the eth0. I'm not... it has a network configuration by default; it's not loaded by Multus, but I want to add some extra DNS configuration. Does that answer your question?
C
Yeah, so, as far as I know, the DNS configuration is, how do I say, an exclusive configuration. I mean that if the cluster network has a DNS configuration, then that configuration must be used, and the additional DNS cannot be used. This is because, from the Unix perspective, we can add multiple nameserver lines in resolv.conf; however, only one server is used there, so this is not the...
E
So
if,
if
if
we
have
already
some
dns
configuration
and
result.com
because
of
the
due
d
interface
like
due
to
the
primary
networking
configuration
anyhow,
then
if
you
try
to
pass
an
extra
dns
server
through
network
attachment
definition.
C
So, even without Multus and containers, let's think about bare-metal Unix. There we can add multiple nameserver lines; however, only one line is used. I mean, let's imagine that we have nameserver A and nameservers B and C. There is no round-robin, so every query goes to server A first; that is the behavior of Unix. So, yeah, sometimes in the A, B, C case we would like to try A, and if there is no record, then go to B, but the Linux network stack, and Unix as well, does not behave like that.
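The ordering behavior described above is visible directly in resolv.conf. Given, hypothetically, addresses invented for illustration:

```conf
# /etc/resolv.conf (illustrative)
nameserver 10.0.0.2      # default network's server: always queried first
nameserver 192.168.2.53  # secondary network's server: reached only on failure
options timeout:2 attempts:2
```

With the stock glibc resolver, the second line is consulted only when the first server fails to answer (timeout or unreachable), not when it returns a negative answer such as NXDOMAIN, so a nameserver appended for a secondary network effectively never serves its own domains.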
E
Okay, so from what you're saying, this is not considered a bug; this is considered normal functionality, I would say. So, yeah.
E
Could we have some comment somewhere, in Multus or in Whereabouts or wherever, that actually states that if you use the IPAM DNS configuration for configuring a secondary network interface behind the pod, this DNS configuration will not appear? Something like that. I'm just suggesting stuff here.
B
I think that's fair enough to add there, and, you know, maybe it belongs in both the docs for static CNI and also for Whereabouts.
B
But Dimitri, are you saying that it does appear in the list of name servers when you use Whereabouts, or that it didn't even appear in the list of name servers?
B
Okay, that may still... that may be a bug itself, as Tomo was saying, if multiple get added and it does just use the first. So I think that's a normal consideration. Hey Dmitry, how about this: I've got an issue opened on Whereabouts, number 87; I put it in the chat and the agenda.
B
Would you mind just sharing your configuration, whatever's shareable, like the net-attach-def that you're using? Then I'll go try to replicate it, and let's make sure that, you know, this is operating the way that we anticipate in the first place. The fact that it's not in the results object seems weird to me, so I'm worried that there's a regression here.
C
Right, so you're only using Whereabouts in the network attachment definition, right, not the cluster network?
E
Okay, okay, okay, you know, I have asked a colleague who is more expert than me on OVS CNI, Whereabouts, IPAMs and stuff, and he told me that the IPAMs don't actually configure any DNS; they just return the resulting configuration to OVS CNI, and it's up to it to do the configuration inside the pod, like configuring the IPs on the interfaces and stuff like that. So maybe it's not even a Whereabouts problem; maybe it's an OVS CNI problem.
E
Whatever. Okay, anyway, thanks Doug and Tomo. I will add my configuration to the issue that you have there; I will add the net-attach-def that I'm using, and, yeah, we can discuss it over that issue.
B
Okay, cool. I'll just double-check it on my end, you know. I definitely...
B
You know, I expect that Tomo is probably right in this case, so in the case that this is operating as expected, I'm going to make sure I get a documentation update in there, because I appreciate you sharing the experience, for sure. So, thank you very much.
C
So, regarding these issues, this is a pretty interesting thing: how should we manage DNS in the multi-network case? And then there's the existing behavior, I mean that the first DNS server takes over everything. So, yeah, this is an implicit, de facto rule, and it lives in the Linux or Unix network stack; it's not container-specific stuff. But also, yeah, maybe we could write it down somewhere, and I'm just wondering where we should write about that.
C
Maybe some kind of additional note in our network attachment spec, or maybe in CNI. I mean, this is also related: if multiple DNS servers are in the one result object, then how... I don't remember, actually, but the CNI spec does not mention this.
A
You could run an init container with a DNS server for that network namespace that listens on 127.0.0.1 inside the container; dnsmasq, for example, is the common one. And then you could set certain search domains, or direct certain DNS domains to different DNS servers that would go out a certain interface. So, you know, if you have one interface that's, like, 10.x and another interface that's 172.16.x, you have everything for foo.com go to 172.16-whatever that name server is, and everything else go to the other one. Yeah, the problem is that this is only going to work if somebody, you know, has a feature that intercepts the DNS traffic and then redirects it based on the information coming out of each network. It's kind of taking what we can already do on the host, where lots of installations run a local caching name server and split DNS, for, like, VPN reasons or other reasons, and putting that into a per-container solution. Yeah, so, well...
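The interception idea sketched above maps onto ordinary dnsmasq split-DNS configuration; the domain and addresses here are the invented ones from the example, and since dnsmasq has to keep running, a sidecar (rather than an init container that exits) is the more likely fit:

```conf
# dnsmasq.conf (illustrative), run inside the pod's network namespace,
# with resolv.conf pointing at 127.0.0.1
listen-address=127.0.0.1
no-resolv                     # ignore the host's resolv.conf
server=/foo.com/172.16.0.53   # *.foo.com -> nameserver on the 172.16.x network
server=10.0.0.2               # everything else -> default network's nameserver
```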
C
Okay, I'm just thinking that we have two issues around this stuff. One is how to manage the one resolv.conf from multiple CNI invocations; or maybe we also need to explicitly specify in CNI how resolv.conf is managed. That's one thing. And then another thing is, okay, let's imagine that we actually have two or three servers or more; then how do we utilize this stuff, yeah?
C
At
that
time,
the
I
understood
that
the
dns
mask,
or
maybe
the
coordinates
or
other
you.
You
also
could
implement
any
anything
at
all
and
then
do
some.
So
so
they
are,
how
do
I
say
so
they're
down
matching
and
then
they
have
been
changing
the
dna
server
to
query
that,
so
maybe
this
is
also
there.
This
is
related
to
the
network.
Is
the
challenges
and
then
they
both
should
be
addressed
in
the
some
future,
not
not
now
actually,
but.
A
Yeah, I think it's a good idea to be more explicit about how resolv.conf should be handled in the multi-network case, and the problem is we can't actually do anything, because, as you say, the first DNS server is always used, I think, until it returns some error, until it fails, and then it kind of round-robins to the next one, in the normal glibc implementation.
A
Should we say something like: the primary network's DNS server should be the first one, and thus used by default, and then other ones can be appended? But that's still not going to give great behavior, because if for some reason the primary DNS server glitches, and glibc goes on to the next one and starts using that, that's going to be for a different network, and that might not return the right result. Yeah, so, yeah.
C
Maybe this is a good challenge for a further CNI specification, I mean CNI 2.0 or something.
A
Yeah, could be. The other thing that occurs to me is, you know, you had mentioned CoreDNS, Tomo. Most containers... well, I mean, all containers will have an assigned IP address.
A
Well, unless it's an L2 container or something like that. But the point being, like, if you're doing DHCP, you have an IP address, and most of the time that will be assigned and fairly well managed by the cluster. So you could conceive of a system as well where the intelligence about DNS servers actually goes into one DNS server, CoreDNS on the node or somewhere else, that then uses the source IP of the request to serve different results.
A
That might be another way to do it, but then you'd have to make sure that, you know, you'd funnel the information about which domains go to which server to that CoreDNS instance, or whatever other DNS servers are in the cluster, yeah. So there are ways to do it, but, you know, honestly, I think it's a lot of work and it's probably, you know, out of scope for the moment.