From YouTube: Foundational Infrastructure Working Group [Apr 21, 2022]
A: Okay, let's go over the open pull requests for BOSH.
A: This is a follow-on from what has been done related to locking down NATS access for other jobs, using iptables on Linux, and that didn't support IPv6.
Initially there was no support for IPv6, so this addresses that. I think this is already...
On the branches of the agent that Xenial uses, this was already merged, so this is about merging that into main, so that Bionic has this as well.
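For context, a minimal Go sketch of what mirroring an IPv4 NATS lockdown rule for IPv6 could look like. The OUTPUT chain, port 4222, and the vcap uid-owner match are illustrative assumptions here, not the actual rules the agent installs:

```go
// Hypothetical sketch: install the same owner-match rule with both
// iptables binaries. Before the IPv6 change, only the IPv4 variant
// would have been installed.
package main

import (
	"fmt"
	"os/exec"
)

func addNATSLockdownRules() error {
	for _, bin := range []string{"iptables", "ip6tables"} {
		cmd := exec.Command(bin,
			"-A", "OUTPUT", // append to the OUTPUT chain
			"-p", "tcp", "--dport", "4222", // NATS port (assumed)
			"-m", "owner", "!", "--uid-owner", "vcap", // only the agent's user may connect (assumed)
			"-j", "DROP")
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("%s failed: %v: %s", bin, err, out)
		}
	}
	return nil
}

func main() {
	if err := addNATSLockdownRules(); err != nil {
		fmt.Println(err)
	}
}
```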
A: Support... and that was... this is a follow-on from the, what was it, netaddr gem or something. There was a bump of a gem in another pull request, and while bumping that, this person found additional security issues that they would rather see addressed, and these specs demonstrate that there are some ideas on fixing this.
A: The main problem is the way the IP addresses are stored.
A: That's good, so we just have to wait; basically we're waiting for changes.
A: These are the bosh-agent changes. Yes, there's also another part where we have the BOSH stemcell builder changes.
D: Yeah, I think we have three options here: either massage the PR to address that against the latest version, or maybe get a release, because there is already a version that has a fix on the 1.5.x branch, so to say, but it's not released to RubyGems, or we go for the IPAddr implementation. I think those are the three options here.
A: Yes, but I mean, if there's more appetite to move to IPAddr, this could be an intermediate fix, right? This would be a quick fix, if it's released. Is it released or not?
D: So they have a GitHub tag, but no RubyGems release, and the maintainers aren't super responsive. So it's an unclear path, to be honest, right now.
A: We had to do the same with EventMachine, because EventMachine, they are also not releasing: they have fixes that address the Ruby 3.0 incompatibility and they haven't released them. They worked on it, and then they just stopped responding, because they had to make a changelog and they didn't want to make a changelog, or something; that was too much work for a Ruby project.
A: Okay, something like that, yeah. I mean, that would...
A: Yes, this is also an interesting one. That's fully being...
C: So I'm gonna add another comment to this today too. It looks like this is likely a breaking change, and we need to look into some further modification of the PR before it gets merged. Unfortunately, this might take a little while, though, because we've got some competing priorities, so I'll update this.
A: Yes, just a quick summary: it seems like a small change, but this will conflict with DNS. So...
C: I think the issue is the DHCP option sets in AWS, and it seems like the primary means to get around that is dhclient.
C: We might just need to have some sort of toggle: have the agent either do its current behavior of modifying base, or toggle it to modify head instead of base, so that we can use that route. Because as long as you're not using BOSH DNS, that's not a problem, but we do want to try to get off of our fork. So yeah.
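For illustration, a minimal sketch of that toggle, assuming the standard Ubuntu resolvconf interface files; the useHeadFile flag and writeNameServers helper are hypothetical names, not actual agent settings:

```go
// Hypothetical sketch of the proposed toggle: write the agent's name
// servers to resolvconf's base file (current behavior) or to head
// (which takes precedence over DHCP-provided servers).
package main

import (
	"fmt"
	"os"
)

const (
	baseFile = "/etc/resolvconf/resolv.conf.d/base" // lower precedence than dynamic name servers
	headFile = "/etc/resolvconf/resolv.conf.d/head" // prepended ahead of dynamic name servers
)

func writeNameServers(servers []string, useHeadFile bool) error {
	target := baseFile
	if useHeadFile {
		// head wins over name servers injected via AWS DHCP option sets,
		// but BOSH DNS also expects to own this file, hence the toggle.
		target = headFile
	}
	f, err := os.OpenFile(target, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	for _, s := range servers {
		fmt.Fprintf(f, "nameserver %s\n", s)
	}
	return nil
}

func main() {
	if err := writeNameServers([]string{"10.0.0.2"}, true); err != nil {
		fmt.Println(err)
	}
}
```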
C: So, well, the other thing is: would we want to look into using updates to dhclient.conf instead of base? Is anybody aware of whether that has cross-IaaS support?
C: I know this is... this is a fix; apparently Ubuntu made a change a while ago. It used to be that the resolv.conf base file had precedence over dynamic name servers, and then at some point they changed it so that base has less precedence than the dynamic name servers. So on AWS, if you're using DHCP option sets, they automatically insert a name server higher than whatever's in base. We fixed it on our fork by changing from base to head, which completely resolved the issue.
C: However, Reuben pointed out that BOSH DNS also expects to control the head file. We're not using BOSH DNS, so our fix doesn't break anything for us and fixes our problem, but it would result in contention between the two components for the head file, and there just doesn't seem to be a good and easy way to resolve those two.
C: Oh, why that is, I'm still digging into as well. Okay, so it might be a legacy thing: I think for a while it was necessary as part of some AWS traffic control stuff, the global traffic routing; they've since added Route 53 DNS records stuff that obsoleted and replaced some of that. So I'm not entirely sure we even still need those, but I'm digging into that.
C: If we can just get rid of DHCP option sets, we may not need this fix anymore, hypothetically. But that's why I need to figure out on our side whether we still have an actual need for DHCP option sets. If so, we need to fix the problem for ourselves somehow.
A: Yeah, we are seeing some flakes in our CI where, if you try to do a deploy directly after updating a BOSH director with active deployments, you get some unresponsive agents. The problem is that the NATS connections take like five minutes to time out or reconnect, or something like that, for the agent to crash and then restart and reconnect to the new director. While investigating that, Konstantin has added some logging to get a better idea of what's happening, and that logging can maybe be useful for others.
A: Yeah, so a follow-up of this would be to also rethink the way we reconnect with NATS, because we have, like, retriable things, I think from bosh-utils, that we wrap it in. But since we switched to the new NATS client... so, I think half a year ago or something, there was an effort to move from yagnats, which was a NATS client in Go that we maintained, to the upstream NATS client. That was done, and that client has a lot more options for things like retrying connections.
A: So it would be better to start tweaking the upstream NATS client that we're currently using, right, for these particular cases; for that it would be nice to have some additional logging.
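For reference, the upstream client is github.com/nats-io/nats.go, and a sketch of its reconnect options and event handlers (which are also a natural place for the logging discussed here) might look like this; the specific values are illustrative, not what the agent configures:

```go
// Illustrative use of the upstream NATS client's reconnect options
// and event handlers.
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://director.example.com:4222",
		nats.RetryOnFailedConnect(true),   // keep trying if the first dial fails
		nats.MaxReconnects(-1),            // never give up reconnecting
		nats.ReconnectWait(2*time.Second), // back off between attempts
		nats.Timeout(10*time.Second),      // per-dial timeout instead of a long hang
		nats.DisconnectErrHandler(func(_ *nats.Conn, err error) {
			log.Printf("nats disconnected: %v", err) // the kind of logging discussed here
		}),
		nats.ReconnectHandler(func(nc *nats.Conn) {
			log.Printf("nats reconnected to %s", nc.ConnectedUrl())
		}),
		nats.ClosedHandler(func(_ *nats.Conn) {
			log.Printf("nats connection closed")
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
}
```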
A: No... there can be many reasons; I don't know if it was particularly this. I think in this case it had something to do with the MAC address of the machine changing, and then, basically, during bootstrap of the agent, it clears ARP entries, if I'm correct, and that fixes the issue. But first it needs to time out, right, so that it crashes, and then it restarts and goes through bootstrap.
A: That resolved the issue, but it would be better... maybe we could have something in the connection handlers, or the retry handlers, that flushes the ARP tables; that would be a solution that resolves things more quickly.
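A minimal sketch of that idea, assuming the handler shells out to iproute2's `ip neigh flush` the way the bootstrap path clears ARP entries; wiring it into a reconnect or retry handler is hypothetical:

```go
// Hypothetical sketch: flush stale ARP (neighbor cache) entries from a
// reconnect path instead of waiting for the connection to time out and
// the agent to crash and re-bootstrap.
package main

import (
	"fmt"
	"os/exec"
)

// flushARPCache drops all neighbor-cache entries so the next packet
// re-resolves peers' (possibly changed) MAC addresses.
func flushARPCache() error {
	out, err := exec.Command("ip", "-s", "-s", "neigh", "flush", "all").CombinedOutput()
	if err != nil {
		return fmt.Errorf("arp flush failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := flushARPCache(); err != nil {
		fmt.Println(err)
	}
}
```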
A: But beyond that... you also had some questions?
A: It is related to the reconnect. So when we switched to the new NATS client, we started seeing these flakes, but it took a while to relate it, like figure out that it was actually related to that.
D: Reuben, you mentioned this is a problem right after the director has been upgraded and the deployment has started. What time frame are we talking about here, like minutes, or five minutes?
A: Yeah, and I think that's why it's good to have that additional logging, because that would help with this discussion as well, if people can gather some more metrics around this. Because we switched to a different client (I mean, we did that a while back), but it would be good to have some data from the wild on how that thing actually performs, because we need to learn about it, right? It's a new client, so it has different behavior, and at scale...
A: So yeah, take a look at this discussion, and I think this logging would be good to have, yeah.
C: I'd applaud some better logging around the bosh-agent logs too; there seem to be a lot of expected but ignorable errors that are red herrings in there. We ran into that while trying to debug some things with the new stemcells, yeah.
A: This was reviewed and is currently waiting for changes, I think.
A: Yes, this I still need to work on. I have some code; it's not done yet, because I got dragged into that whole community automation thing, which is still taking up way too much time. It's crazy.
A: The runtime on that thing... it takes four hours or something to sync the repo, and the feedback cycle is terrible. But yeah, I will continue on this once I get the community thing figured out.
C: So, in a nutshell, the first thing I did was undo the removal of the CentOS, RHEL, SUSE, Photon OS, and Trusty stuff, and the branching based upon OS; then I re-removed SUSE and Photon OS, leaving CentOS and RHEL, and then, you know, built back from there, fixing stages, etc., and adding, updating, and fixing specs as needed.
C: So I tried to make the history very clean and detailed.
A: Yeah, and this, just to clear up: we're targeting Jammy, because that's gonna be the new target moving forward.
C: Yeah, so this was directly rebased onto the latest Jammy, like a week ago. There have been a couple of bot commits on top of Jammy HEAD, but it's still pretty much mainline Jammy.
A: I mean, we're not gonna publish them, right? So this is just about putting it in the builder, because then Justin, or the company he works for, can use the same repo to build that.
C: What we were especially concerned about was maintaining a fork with this extent of changes, because it'd be almost impossible to keep it synced up to main. So we're concentrating on RHEL 8.5 on vSphere right now; the other IaaSes should theoretically work, but we're focusing on getting that one working, and then there might be minor tweaks needed for the other IaaSes. But that's our immediate need.
A: This is for Jammy as well, and Ruby 3.1 is a requirement for Jammy compatibility. So I think tomorrow there will be a final release, or I think it goes GA, right? The...
C: Speaking of that CPI, we just ran into an issue yesterday. Does anybody have context on why the System.View permission is needed? Because it's causing a problem for us getting that with one of our customers. Has that recently changed? No, it didn't recently change; they're just pushing back: it's against their policy, and they're asking if there's something, you know, lesser that can be done. And the logs didn't reveal exactly what was failing as a result of the missing permission. So maybe...
A: Cunnie, Brian Cunnie usually has the most context on vSphere-related things; check with him.
A: Okay, there are some open issues.
C: Yeah, so I discovered that there's a whole folder full of spec files that aren't getting executed by either of the two rake tasks that are documented.
C: Or... so they all do get executed? The code in those files has been getting updated periodically as other changes get made, but as far as I could tell, these files have never been called by those rake tasks, or by anything anywhere in the codebase, including historically.
A: ...like, they only work on Linux? Okay, so I think when I run it in a Docker container it works, or at least it did the last time I touched it. Maybe we can just add some GitHub Actions for this; that would be, like, really simple, I guess.
C: Or even just some, you know, README instructions on the correct way to, you know, execute them: when, how, and under what circumstances. I wasn't sure if they should be getting executed as part of the build-image and stemcell rake tasks, you know.
A: I don't remember his name... when we were doing the Bionic work, I think there were some, like, shellout specs or something that were fixed.
C: I think the shellout specs might be in different folders, you know, somewhere else. Okay, I can't recall, but yeah. And certainly some of these specs started failing as a result of some of the RHEL stuff, and if I hadn't found that they were orphaned and run them anyway, I would have missed that. But I didn't run all of them, because I couldn't run some of them on my initial attempts. I'm...
C: So yeah, I opened this in response to ramonskie's comment that the CI stuff in the repo isn't used anymore; we've just got a separate CI repo. It seems like there's still a lot of asset files and some things, you know, CI, Docker, all that stuff. Oh, but this is used? Yeah, that one is. So I wasn't sure what is, you know, orphaned and unused, and what isn't.
A: But I mean, everything here is used. Okay, this is what runs when you follow the instructions in the README: you use this run script, right, to run in Docker, and then execute what builds, like, the builder or the publisher... not the publisher, the publisher is for later. And that is also what's used by CI.
A: Here it talks about publish, right? So this still uses that. And I think it's all really spread out, because then there's, like, here there's a ci directory where we have old Docker stuff, I think, maybe, or there's a pipeline... you have a... yeah, this has a pipeline.
A: Anyway, yeah. And then the problem is also that there are, like, also some repos... there's also a BOSH stemcell CI repo that's private, because there's the stemcell publishing that goes to PivNet, or Tanzu Network, the VMware thing; but I think this one publishes to bosh.io.
C: You know, I felt like I was running in circles at one point. But so, yeah, this could be a "nothing to do, close it" scenario, if everything is still good.
A: Really helpful, yeah. So this is already addressed, right? We have a pull request for that.
A: I mean, it's deprecated, but yeah, that's the reality of Ruby projects nowadays: things just get no support anymore. I mean, it is a VMware thing, so we probably have the option to ping the maintainers if there's an issue and say we actually need a fix. I think, yeah, we should be fine.
A: We weren't that worried. Alternatively, we might have to fork it if we really start running into issues, but yeah, I mean, the alternative is moving to Go or something, and there's way too much stuff, or logic; redoing the vSphere API could be way too expensive.
A: Yeah, there are some... this person is from VMware as well; there was some internal discussion about this. They got a workaround, and we're still waiting on verification that the workaround worked, and then this issue will be updated, or it will be closed by the stale bot. But it's being addressed.