From YouTube: OKD Working Group 2021 05 11 Full Meeting Recording
Description
OKD Working Group 2021 05 11 Full Meeting Recording
https://groups.google.com/forum/#!forum/okd-wg
https://okd.io
B: All right folks, welcome to this edition of the OKD working group meeting. This is our meeting for May 11th of 2021, and I would encourage folks to go to the meeting notes and put your name in the little attendance sheet there under May 11th. The link is in the invite, and I'll also put it here in the chat real quick.
B: So we'll get started with Vadim and then we'll move from there. So, Vadim, what do you have for us in terms of updates?
C: Some engineering points on OKD: we didn't release this weekend because our previous signing key has expired. We submitted a fix for the new one, but that fix has not been accepted by the CVO yet. We have a request open to fix that; we'd need it to get reviewed, and hopefully this weekend we'll make a new 4.7 release.
Other than that, I think just a bunch of small fixes landed in 4.7, so it shouldn't be anything groundbreaking. Last week, or was it the week before, we released a release candidate for OKD 4.8. The main difference is, well, it's Kubernetes 1.21 based, and the way we pack machine-os-content has changed significantly: we no longer rely on Fedora CoreOS ostree commits.
C: Instead, we build from the same configuration using the same RPMs, but we build our own ostree commit, layering in CRI-O and all the necessary RPMs, like the oVirt guest agent for oVirt and open-vm-tools for VMware, and that helps us avoid using OS extensions. So that should make initial setup and upgrades a bit faster, because all we need to do is just rebase the ostree. And once you have the new ostree deployment, you would be able to check the versions of the components using the rpm -qa command.
C: So that's very helpful to figure out what versions have been installed, and, most important, for releases with extensions you no longer need access to the Fedora repos. In fact, we disable them on every upgrade, so that should make it less error-prone. 4.8 is in so-called feature-complete status, meaning significant features won't be added, but fixes are still landing. So please stay tuned, file bugs, report issues; any general feedback would be very appreciated.
B: Are there any particular bugs that stand out in the things that are filed right now, that are open, that people would be able to help you with by testing, or at least taking a look at?
C: I think the usual testing, of vSphere UPI especially, would be very useful.
C: We could ask for a copy of the development version of the documentation, which lists some new features in 4.8, like proxy protocol support in ingress. That might be useful for those who want to preserve the forwarded IPs, and there are some more features coming down the pipe. These would be very useful to test early and report feedback on. Bootstrap-in-place: that one is a tricky beast, but some early testing would be very appreciated.
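For the proxy protocol feature mentioned above, a hedged sketch of what enabling it might look like on an IngressController (field names are per the 4.8-era API as I understand it; verify against the development documentation Vadim mentions):

```yaml
# Assumption: the default IngressController is published via host networking.
# PROXY protocol preserves the original client IP across the load balancer.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      protocol: PROXY
```

The external load balancer in front of the routers must also speak PROXY protocol for this to work.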
C: As for something to fix right now, I think the most significant issues are in our infra, and those are tricky, because they require a lot of tinkering and a lot of carefulness so that we don't break the nightlies at all.
B: The next thing I wanted to do is highlight the discussion section of the repo, which has gotten a little bit of activity lately, and go through these to see if there's anything outstanding, or just to bring it to people's attention. One of the first ones that came in was the suggestion to have exact links to the Fedora CoreOS builds. There's nothing that I know of right now; actually, there's my script, my tools.
B: Any comments on that before we move to the next one? I think that was probably...
B: And now, bouncing over to 609, an automated solution to back up etcd on a schedule from within the cluster. Where did we land on that? There was a lot of discussion. Is Sri on today?
B: I don't see him, no, but just to highlight this: I think this was a great discussion, automated solutions for backing up etcd, so folks can check out that thread. Again, for those that are just joining, or who are watching the video: there's now a discussion section of the OKD repo that's opened up, and folks are having sort of more nuanced technical discussions there about features and things related to the website and whatnot. And yeah, automatic backups would be awesome for sure, but there are some complexities there, obviously. Okay, any comments or thoughts on that, or anything folks want to mention before we move on from that one?
C: And we have various approaches to the same problem because, well, the use cases might be very different. Some want backups on request, so maybe a Tekton task would be easier for that. Some want persistent snapshots, so an operator is probably the best pick there. I think for the various approaches, having some code scripted for a start would be great, and then we'll see which one is more widely adopted and effectively wins.
D: Yeah, something like that. Some cron job at least would be sufficient, I think, for the most critical backups, once a day. And I think I remember that we can call a backup script that is always present on the nodes to do an etcd backup, and such a cronjob could simply call that script regularly and, yeah, maybe copy the result away to an external location that you can define by changing the cronjob script, something like that.
C: And the restore procedure should probably be somehow automated as well, but that's something to look forward to. The container which launches the script on the nodes is probably the most critical part, reused in every single approach.
C: So a cron job is a great starting place, which we can easily evolve into a Tekton pipeline, and then we could write a more complicated operator on top of that and have proper reporting, maybe even via Tekton. That would be a great showcase of how various technologies connect with each other.
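As a starting-point sketch of the cron-job idea discussed here (heavily hedged: the namespace, schedule, image, and privileged wiring are all placeholder assumptions; the one cluster-provided piece is the `cluster-backup.sh` script that ships on OpenShift control-plane nodes):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: etcd-backup            # hypothetical namespace
spec:
  schedule: "0 2 * * *"             # once a day, as suggested in the meeting
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            node-role.kubernetes.io/master: ""
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          restartPolicy: Never
          containers:
            - name: backup
              image: registry.example.com/tools:latest   # placeholder image
              securityContext:
                privileged: true
              command:
                - /bin/sh
                - -c
                - chroot /host /usr/local/bin/cluster-backup.sh /home/core/backup
              volumeMounts:
                - name: host
                  mountPath: /host
          volumes:
            - name: host
              hostPath:
                path: /
```

Copying the resulting snapshot off the node, and the restore procedure, would still need to be added on top, as noted in the discussion.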
B: Let's see if we can round up some folks. There are a lot of ideas floating around; it'd be good to get a concerted effort, maybe get a repo together. I'm happy to contribute to the Tekton aspect of it, and I think it'd be cool if a couple of folks from the group got together and created a little subgroup to come up with something. All right, the next one is the transition to cgroups v2.
B: I threw this in to sort of have documentation of the conversations between the Fedora CoreOS group and the OKD group. Vadim, if you just want to talk to this for a second; basically it's covered in there, but if there's anything you want to add.
C: Right, so we have all the basics in the 4.8 nightlies; the only missing part is runc, used by builds. So all the features are working, and we have a flag which can enable it, but the tests are failing, because all the builds are effectively crashing immediately.
C: What we could do is experiment with building our own OKD setup with an updated builder and see where we pull it from. The problem is that it might be using RHEL packages, so testing it in CI would be pretty complicated. But as an exercise, building a builder container using CentOS Stream or Fedora would be excellent, because all we have as a starting point is a Dockerfile, and that would be very helpful because the ticket is well filed.
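A hypothetical starting point for that exercise (the base image and package names are assumptions for illustration, not the actual builder's Containerfile):

```dockerfile
# Rebuild the builder image on a Fedora base so the container runtime stack
# understands the unified cgroups v2 hierarchy.
FROM registry.fedoraproject.org/fedora:34
RUN dnf -y install runc crun buildah skopeo && \
    dnf clean all
```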
B: Any other thoughts on that? Timothy, do you have any thoughts you wanted to add?
A: I'm looking at the ticket details, and yeah, even though it's not a goal right now for the OpenShift product, it's definitely on the radar. I don't know how it's going to get fixed exactly, but yeah, it's still kind of a priority for us.
B: Does anyone else have any comments or thoughts on cgroups v2?

F: The biggest thing for me, from a v2 perspective, is you can trivially tie resources to a process and be able to track that through its children, because the groups aren't based on this concept of controllers for different types of resources; they're based on where you're instantiating a scope for a process. So control groups are actually singularly grouped in v2, whereas they're multiply grouped in v1, and that makes it a lot harder to track and make sure that things are actually being allocated correctly and tracked.
F: And that's why a lot of things like oomd, PSI tracking, and stuff like that all depend on v2, because you can connect all the resources you're allocating to the process in question that you're trying to instantiate. So in this case, with a container, that'll instantiate a scope and a slice and a resource, and you can tie those resources directly to it and make it the distinct owner of those resources.
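A quick way to see which hierarchy a given host is on (assumption: a Linux host; `cgroup2fs` indicates the unified v2 hierarchy, `tmpfs` the legacy per-controller v1 layout):

```shell
# Report the filesystem type mounted at /sys/fs/cgroup; falls back gracefully
# if the path does not exist (e.g. non-Linux or an unusual container).
mode=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null || echo unknown)
echo "cgroup mount: $mode"
# On v2, the enabled controllers are all listed in a single file:
cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null || true
```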
B: Next one up. This one was an error, or a perceived error; I'm looking at 622. Vadim, did you want the discussion section to be a place for people to put errors, or would you prefer that they actually go in the issues section, as opposed to the discussion section?
C: I think starting with a discussion is a good idea. Well, the two starting points are equally fair, because we don't know if it's an issue in OKD initially, or it's just, "here, I might have typoed", or someone is using an old version, so it might have been fixed. We can anyway convert any discussion into an issue, so that's interchangeable.
C: I think it's a great place to share some logs and error lines, but of course there's no guarantee that we'll look into it; it's not somebody actively supporting this. So discussions like that are probably a good place. We probably should create a new special topic, yeah, a category, for this.
C: I'm not sure how to name it correctly, but I think it's a great idea to group these kinds of discussions.
B: And tickets. And this one in particular, "OKD 4.7 storage operator degraded": basically it's resolved. Did you want to say anything about that, Vadim, or add anything other than what you have?
C: It's a common problem in 4.7, including the case where folks don't know what they want. They have a vSphere platform, but they're not sure if they'll be using the machine API, or if they'll be using storage; they might switch to CSI, which doesn't use the in-tree drivers. So the credentials might or might not be valid, and the installation will pass.
C: It works if you don't use those features with invalid credentials, and in order to track that we added a degraded condition, to verify that if you have a vSphere cluster and your credentials are invalid, we won't proceed with the upgrade, because it might break a lot of things. That caused a lot of discussion internally, because a whole bunch of people are using vSphere without using storage or the machine API, so they use fake credentials.
C: I think it's a good starting point where we can link people, because it's not effectively an issue in OKD, but it's really a good example of how discussions should work.
B: All right, I threw another one in here that's from the FCOS working group: FCOS moving iptables to the nft backend. Vadim, again, anything you want to add to that? It's pretty straightforward.
C: I don't think it should affect us significantly, mostly because OKD defaults to OVN, meaning kube-proxy, the most fragile component here, is not in fact used at all.
C: And RHEL, if I remember correctly, has been using nft from day one, so all these cases have been tested and it's just us lagging behind. In any case, in 4.8 we effectively control the whole configuration.
C: All we need to do is just pull one podman container, and that's the basic case. So once we can control the whole configuration, we can roll back the nft change if we have a good reason to do that. But I'm thinking, even if we hit issues, these would be reported to the SDN team and they should be fixing it, because eventually nft will be the default in RHEL, if it's not yet.
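To check which backend a given host's `iptables` binary uses (assumption: RHEL 8 and recent Fedora print the variant in the `--version` output, e.g. `iptables v1.8.4 (nf_tables)`):

```shell
# Extract the backend tag from `iptables --version`; prints "unavailable"
# when the binary is missing, so the check degrades gracefully.
backend=$(iptables --version 2>/dev/null | grep -oE 'nf_tables|legacy' || true)
backend=${backend:-unavailable}
echo "iptables backend: $backend"
```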
A: Yeah, I'm just thinking right now: the nft backend has been the default in RHEL 8, so I agree.
B: Well, we'll bring that feedback to Diane. This is generally how I do meetings of this sort, but we'll bring it back to Diane and see what she says. Okay, so the next one that I added is: clarify OKD's community support model.
B: I threw this one in because Red Hat employees, one in particular, keep getting personal emails, people tagging him in the channel, and all sorts of things, trying to get support from the person that they know is sort of the Red Hat employee that's working on OKD. And so my thought is that we scour through the website, the repos, and whatever, and actually have a boilerplate, a couple of sentences or a paragraph, that says what community support means.
B: We have that in the big banner on the website, but we don't actually explain what community supported and community driven mean, and there are two reasons this is important. Number one, it's unfair to the people who are Red Hat employees, who are providing their time to help out, and it probably wears on them; I'd imagine that it sort of stresses them out.
B: The other thing is that if one person gets tagged, and one person is the target for it, then the community can't help, because, you know, we've got our institutional knowledge and memory and whatever to help on these issues. And also we don't learn, because if we're not privy to these discussions, or we're sort of boxed out of them, then the rest of us can't learn about these particular things. So there are multiple advantages to this. We talked about this at the docs group...
B: ...the docs group meeting last Tuesday, and we'll be talking about it at the next meeting. If anyone can join us at the docs meeting to chip in on this, coming up with a couple of sentences that we can put somewhere, that would be... yeah, Bruce says reference the goose that laid the golden egg fable.
B: Then I think, if we can come up with something in the next week or so, we can just go through all of the OKD references and plaster this up, so that we can relieve the load on the Red Hat employees who have been bearing so much, and get ourselves more up to speed on things.
E: Sorry, would it be worth putting a boilerplate message into the Slack channel occasionally, indicating that, you know, this is community supported, along with the working group email, saying that this is not Vadim-supported? I mean, not specifically saying Vadim, but you know, the idea is that community members are helping; it's not just one person. Just, once a week, put something out there or whatever, so people get the hint. I mean, I'm trying to help people.
B: And we can do channel announcements and things like that, for sure. I think that's a great idea. Anyone else have thoughts?
G: Yeah, Jimmy, it sort of just occurred to me earlier today that it actually might be useful to somehow put together some information on "this is what you can do".
G: Because I think what happens to a lot of people, and, as I was telling Vadim, I would include myself on that list on occasion, especially when I'm tired and irritable or lazy, which is often, is that the easiest thing to do is to seek help. But that's not really a long-term strategy, and it's not that easy in the documentation to find things out. Like, we do have an FAQ, and we do have some information there.
G: So that's certainly good as far as it goes, but it might be useful to try and pull together some checklists: "try this", or "did you look at that?", or the things that are important.
G: I don't know. I mean, it's a big topic, so that's just a vague idea of something that might help as well.
B: And we can always link to that website. What's the website that's like "how to ask questions"? There's an actual URL where someone set up a site that you go to, and it's literally about how to ask questions for support and troubleshooting and whatnot. John, did I hear you start to say something?
E: Yeah, I have a delay here, so I think you're done and then I start talking, or whatever. What I thought, from what Vadim said earlier, is doing something on how to debug: you know, when you get that huge log bundle, how do you analyze it? I think something like that would be great, to send us to "go look at this YouTube video" or something to give you the basics. How do you get the log stack, how do you do a basic analysis of it, what are the things in it that are important? That might help, and it might help people who want to help, too, because some of that stuff in there is very esoteric and sometimes you just have to stumble across it. I'm not trying to put more work on Vadim, though, but that might be worthwhile.
C: Yeah, that sounds like a very useful thing. I'm just not sure about the format. Should it be a YouTube thingy, a blog post, text?
C: I think we should start with a YouTube video, because it's easier, and I'm not sure which parts of the process we should focus on. For instance, there's a bunch of code which generates certificates; I have no idea how it works, and I can barely handle the OpenSSL CLI. So I would probably rather discuss the versions and the bootstrap process, but this thing also needs to be covered in some kind of blog post and so on.
C: But the initial response to the video would help us shape up the basics of the markdown document we would put up, and then we could extend it later on. Another concern is that things get outdated.
B: Yeah, I think John's making a good point. Really, if we could just get some pointers from you, like a template of items that you think would be important to hit, the group can handle it. John himself can do the video, anyone else can chip in, and the same with documentation. We don't want to add more work for you, but you're the one who handles the majority of the tickets and also knows the innards best of anyone in the group.
B: So if you just gave us a template, these are the things to hit, then we could run with it in all of the venues, be it a blog post or a video or anything like that.
F: Well, it sounds fine to me, but I was also just going to say that I've got to drop now, because I'm going to go on my merry way to get my second vaccine shot. So I'll see y'all later.
D: I think it would be a good idea to have a recording, yeah, maybe like a role play: if you get a log bundle, what do you do first? Just to see some typical steps that may be reproducible by others.
D: I'm sure, because you are always so fast, that you have some typical spots where you look first.
B: Well, folks should feel free to chip in. So Vadim can get us something within, let's say, the next month, right? We don't want to put too much on his plate, but if Vadim gets us a template, then folks from the group will sort of divvy up the tasks of getting it into the various formats, coming up with something in the various formats, blog, whatever. And the docs group: I'll mention this at the docs group, because there are some people going to the docs meeting that aren't coming to this one, and vice versa.
B: So, all right, anything else on this topic? Is there anything else folks can think of that we can do to clarify what the support model is, other than providing this documentation and putting some boilerplate language into the various places where we have a presence? Anything else?
C: That shouldn't happen, really. The internal knowledge is, of course, a huge source, but it's not being used in every single report; most issues are very trivial. And I think three or four of our architects are in openshift-dev, so I can name-drop later, but I don't think they would like it.
C: And the real way to handle it is probably starting with some basics. We have a bunch of issues logged for OKD, and showing activity there would be very helpful. Just some basics: "here is how I understand bootstrap, here is what I see from the log bundle, I'm stuck here, I don't know what's happening."
C: I could jump in and help and extend this, of course. But with a lot of issues, I'm thinking folks are pinging me directly just because I respond there, and respond in every single issue, and people assume that I'm the only one here. Well, that's the belief.
D: I remember, as we were searching, Jon, Vadim, and I, for an OVN-Kubernetes problem. Finally, I think, Vadim got us the first steps, and John and me were searching through the community, and finally, I think, the guys at, John, do you remember, the NetworkManager team, or I don't know what the team is called now, they helped us, and finally, I think, there was a solution, though you don't need...
C: I think it was a great example of how it should be structured. My network knowledge is very limited, and I would rather not extend it, actually. So this is why I would rather pass it to some professionals who can chase folks on IRC and help with the details I don't fully understand.
E: And it may be that there are bugs where, you know, like we did: we created that sort of private Slack group or whatever with the three of us. There may be issues where we need to get three or four people in, where it's easier to have a discussion in that private channel versus the open discussion, and then publish the findings afterwards, because sometimes you get chime-in after chime-in after chime-in, and it gets distracting.
B: Well, let's move on now. I think we've got a great foundation now for this particular topic to move forward with, and I'll go back and fill in the discussion item with sort of what we discussed here at the meeting, so that we've got a clear record in that actual thread. And that's the other thing: if you put something in the discussions and we talk about it at the meeting, if you created it, folks, please update the discussion item; that's helpful and sort of keeps things organized. And then the last thing in the discussions is "insights operator is degraded".
C: No, I think we need logs, because there are multiple things which could be causing it. It could be the proxy; it could be actual, well, unlikely, but actual downtime of the Insights Operator for some short time. Well, if we had logs, we would have something to discuss there. Yeah.
B: We talked about the issue of identifying information. Did anyone come up with, or know of, just a simple bash script that cleans things up, to make it easier, something we could just post, and people could download it and scrub anything?
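As a minimal sketch of such a scrub script (the domain pattern is an illustrative assumption; a real pass would also need to cover secrets and certificates, as the discussion notes):

```shell
# Replace IPv4 addresses and a caller-supplied domain with neutral stand-ins
# before sharing a log bundle. Reads stdin, writes stdout.
scrub() {
  sed -E \
    -e 's/([0-9]{1,3}\.){3}[0-9]{1,3}/x.x.x.x/g' \
    -e "s/$1/example.com/g"
}
```

Usage would be e.g. `scrub 'corp\.internal' < bundle.log > bundle.scrubbed.log`.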
H: Hi, it's Eric here. What is the sos tool doing? Because I think it also scrubs logs now, since maybe half a year ago.
H: It's the shell tool, right? You call it and it collects stuff for you.
C: The problem is sharing the results. The log bundle is built in a similar way, but some information might be considered sensitive. Let's put it like this: I don't think hostnames are sensitive, but some folks think they are, and we need some way to have them replaced with fake names that still make sense in the end.
C: No, rather, it was not built for that. And another problem is we also copy a lot of certificates, which effectively, in the end, have the hostnames embedded in them; you just have to extract them.
C: Probably our solution would be to avoid scrubbing, but to upload to temporary places which would delete it after a couple of hours or days.
C: That would be very welcome, because the log bundles are expected not to have all the sensitive information, and the same applies to must-gather. But in order to identify all the issues, they effectively have to also read data from user namespaces, so that we could understand, maybe, that it's a PDB blocking the upgrade or something.
B: I don't want to spend too much time on this, because we've only got 14 minutes left, and we actually had a larger discussion about this a couple of months ago that took up a significant amount of time. Let's regroup on this at the next meeting. But maybe, if we came up with a list of things that people do feel are concerning, items from the logs that are concerning, if we generated a list and then said, okay, how could we tackle this, can we tackle this?
B: Then we'd actually know what we're looking at. And some folks in the group may not be familiar with must-gather, so let's table this. But in the meantime, folks who are familiar with must-gather, think about some of the things that would be problematic, and maybe I'll send something to the group, share out like a Google Doc, or maybe the discussions. Yeah, in the discussions.
B: Actually, we could do it in there, in the discussions: just generate a list of things that we think could be viewed as problematic. And again, we're having to put ourselves in other people's place and think about what they would consider problematic. And then, from there, at the next meeting or a future meeting, discuss ways in which we could allay those concerns a little bit. Does that sound good?
B: Yes? Okay, all right, and that's it for the discussions. Timothy, did you want to bring up anything from the Fedora CoreOS world that you think folks should know about?
A: Apart from what we've already discussed, cgroups v2, which is one thing, and the move to nft, which is the second one, the countme changes are coming later, in August; just a reminder for folks. Starting from the releases in August, so maybe not the August releases themselves but a little bit later, the default on Fedora CoreOS will be that you send countme requests to the Fedora servers. Essentially, it's a very privacy-friendly way of counting the number of Fedora CoreOS nodes running on the planet, and for us to have some kind of statistics on how many people are running Fedora CoreOS. So this one is coming around August, and there are already instructions; there's a Fedora Magazine article coming up, probably this week or next week, to explain how to disable that.
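A hedged sketch of opting out at provisioning time (assumption: the counting is implemented by the `rpm-ostree-countme.timer` systemd unit; check the Fedora Magazine article Timothy mentions for the authoritative recipe):

```yaml
# Butane config fragment that masks the countme timer on first boot.
variant: fcos
version: 1.3.0
systemd:
  units:
    - name: rpm-ostree-countme.timer
      mask: true
```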
A: I would say, and maybe I'm wrong here, but I think OKD should not be impacted by that. The issue in general is that if you start from a very old Fedora CoreOS node and you try to update to a fresher version, you'll get issues, because you won't have the signing keys to verify that the latest version is actually a valid one, because you're on a very old node and potentially you're way out of date.
A: You don't have the latest keys to verify that, because on Fedora CoreOS, well, in general, we only ship the next two releases' keys in an image. So if you're starting on Fedora 30, for example, and you want to upgrade to Fedora 35, then you won't have the key for Fedora 35. But that should not be an issue, as far as I know, on OKD, because of how the updates happen.
A: They go through the MCO, and the machine-os-content is updated on the cluster; it's not pulling any content from a signed repo, so it's not exactly updating the same way that we update the classic Fedora CoreOS nodes. So yeah, I'm not 100% sure, so maybe Vadim can confirm that, but yeah.
C: As for upgrades, right, we don't use the native CoreOS mechanism; we upgrade from one major version to the other, so it may not even span Fedora releases at all. In the worst-case scenario it's Fedora 33 to 34, and we can also hold that back to prevent the upgrade from happening if we find some issues with it. So yeah, I don't think that would affect us.
B: I just wanted to bring that up in case it did, and at least we're aware of it, which I think is a significant change, because there are people going between both groups, sort of playing with FCOS and playing with OKD at the same time. In the last few minutes, we have seven minutes left, I want to talk about the KubeCon office hour. What did we learn from that, in terms of our audience, in terms of our ability to communicate our ideas, our ability to answer questions?
B: How do folks think that came out, for those that were there? For folks that weren't, we can put up the link to the video, which was posted.
D: For me, I had big problems following the correct chats, because there were lots of chats: Q&A chats and the main chat and a background chat. I think it's a little bit too much. I don't know, what's your impression?
C: Yeah, I was following the Twitch chat, and apparently it had copies from everywhere, at least from YouTube, but following the moderator was apparently a better idea, because they know how to switch topics and slowly move from one topic to the other, so you wouldn't have a whole mess of different chats.
C: My impression was that it was great, except when folks started with "what's coming in 4.8 and 4.9", which are very technical questions. We shouldn't have to answer those, but we should have a prepared answer, like: here is where we post the release notes, here are the links to our workgroup meetings, and so on.
C: What I would add is some more discussion about the fact that it's not just a free version of OCP, which is not true anyway, but rather a community version, where you can affect every single detail of your OKD cluster. Folks were asking why we're not starting with RHEL CoreOS; that's exactly the reason, because the community cannot contribute to it directly. They would have to go all the way from upstream to Fedora to RHEL, and that shows the value.
C: That means you might hit some more bugs, yes, but that also means you get a feeling for how the distribution will look in a couple of years. The whole cgroups thing is a perfect example.
C: We might want to enable it in OKD, and if it works, great; it brings benefit to the community, because OCP is more conservative while our hands are untied and we can do whatever. Yeah, it's a complex topic, because folks might think that we test new features on them.
C: Which is, well, not entirely true, and probably that's a very, very complicated topic. But at some point we should actually make the point that OKD is a place where you can get the latest and greatest features, and since the whole community is looking at it, your chances of having things fixed sooner are actually multiplying. But other than that, I think it was great.
B: One thing I thought, and this is assuming that it moves forward: Diane was saying that if it went well, we had the chance of doing it bi-weekly. I think that would be fantastic; I haven't heard back whether that's the case. One thing I think we would want to do is find the right level of support to provide in these office hours. Are we going to start looking at people's logs?
B: We got a question along those lines, like, "here's a big log error": are we going to start looking at that? We'll have to figure out what the thresholds are, low and high, for saying, okay, this needs to happen offline from the show, because otherwise it would take up a significant amount of time on that one particular issue and we wouldn't get to anyone else. So that would be my only thought. Anyone else?
A: I guess, well, I don't know, weekly might be a little bit too much for the office hour, but it's certainly not up to me to decide. We could maybe do monthly or something like that. And, well, I'm not sure office hours are great for debugging sessions either, because they'll just pin down a lot of people for just one specific issue. So yeah, I'm not really in favor of that, but it's still an open question.
B: All right, we have one minute left, and I do want to be mindful of people's time. Is there anything else that folks want to bring to our attention before we step away from this meeting?
B: "Good meeting", oh, good meeting, Jamie, yeah, you're welcome. And if folks like this format, then we can mention it to Diane; I'm happy to facilitate in the future. I don't know what's behind that, but she might be interested in letting me co-chair or whatever; we'll see. I don't know if it has to be a Red Hat person or not. But this was great, and we'll talk in two weeks, and also online: use the discussions, and don't forget.