From YouTube: Implementations Sync: 2021-03-11
Description
Meeting notes: https://bit.ly/38pal2Z
A
B
I'm Natalie. I work at VMware and contribute to the lifecycle. I don't have a terribly exciting status update, because I was on vacation last week and spent a lot of this week just doing spec work and PR review, so I'll pass.
A
C
I can go next. My name is Emily. I work at VMware; I'm a maintainer on the implementation team and a core team member. My status update is: I was very busy with non-buildpack stuff last week, so I haven't done a lot of things related to this group. That's been happening a lot lately, so Natalie and Jesse have really been running the show around here on the lifecycle side.
D
I don't know about that, at least on my side. My name's Jesse Brown. I work at Heroku and I'm a recent maintainer on the lifecycle project. This week I worked on a couple of issues that were in the milestone, trying to push that along so that we can cut a release, hopefully a 0.11 release, but we'll discuss the merits of that later today. Also, I have to leave in about 30 minutes, so I'll be brief.
E
I can go. I'm Micah Young. I work at VMware, mostly on Windows container related stuff for pack and the lifecycle. Most of what I've been working on is some changes that mostly stemmed out of fixing the cache image bug, where it's writing Linux images instead of Windows images for Windows, plus two related PRs.
E
They're changes to imgutil: one for the constructors, which I haven't touched too much, and the other for changing how our registry helpers work. I put an item on the list, but I can talk through some of the changes I was proposing for making some registries read-only, a little bit more secure by default, and then only being able to write to authenticated registry instances. All of these are just test related, so I can go into more detail for my item on the agenda.
F
I'm Yael. I work at VMware, mostly on the lifecycle. I mostly worked this week on the opt-in layer caching. I did some more refactoring following everyone's feedback, so thank you, everyone, it was great, and I started working on the spec to make some changes over there. Emily, I really need your help regarding this, so I hope that we can discuss it before you go.
F
I mean, I put an item in this meeting; it's a general discussion about the spec. So...
G
I can go. I'm Anthony. I work at VMware alongside some of these awesome co-workers here. Recently I've been boning up a little bit on the platform side, and I'm sort of coming to these meetings to be a little bit more well-rounded, just trying to understand the implementation side a little bit more. So, as far as progress done on stories: none for me yet.
B
A
B
I think that's everyone, so I guess let's move on to release planning. I could share my screen and we could look at the milestone together, if that works out. Let me try to do that.
A
Is that sharing the right screen?
B
D
Yeah, I think after some discussions over the last couple of weeks, we decided to push pause on the PR work, to make sure that the spec PR came about and was reviewed. Emily reviewed that initially a little bit, and so we added some fields that were missing to the spec PR. So I think the spec PR is in a state to be reviewed.
D
I believe Joe already reviewed it. On the implementation side, once that's approved, if it gets approved the way it is, there are going to be some additional fields that need to be added to the analyzer, some additional flags, and that sort of brings up the question: do we have to do the work that uses those flags for 0.11.0, or can we just take in these flags and, you know, create tickets for doing things like validating?
D
What was it... like validating the cache, or validating the additional registries? Things like that.
C
D
I don't know if it says anything about that right now. I guess the RFC talks about what we would do with those, but the spec is not currently talking about using all the flags. It has them there, I think, but it doesn't really say what the purpose of them is.
C
I would rather either split that spec PR into two, one to go into this API version and one that will go into the next one, which also includes information about what the lifecycle is supposed to do with those flags, and then split the lifecycle work into two, if we actually wanted to split the work up. I don't like the idea of having an API version that accepts a bunch of flags and then ignores them.
C
D
Yeah, it feels gross. I think that's fine; I don't have a strong preference for the spec. I can back out some of the changes that I made to it, if that's the direction we want to go, so that we can move analyze-before-detect kind of as-is, if everyone's okay with that. We'll have to look at the spec, but yeah, I don't have a strong preference there.
C
I don't have a strong preference either. I think there are two options that work: we can either do it all in this version, or split it into two but make sure the flags go in the version that has the behavior. And as the person who's owning this PR, it's up to you; whichever one you want to do will be fine, I think.
D
Okay. I guess you were the one who looked at the spec; did it sort of suggest adding the fields from the RFC, or splitting them out? So there's no pushback from you now on taking those fields back out and putting up another spec PR for those fields, built on this spec PR, which I guess still needs to be reviewed by more people?
C
D
All right, cool. I can rewrite that spec PR to match the work that's actually already been done in the outstanding PR for the lifecycle, and then we can match that against this updated spec PR, and the other spec PR we can match against the RFC bits that have not been...
A
B
E
I'm sorry, I may have missed it before: the bug that I'm currently working on, don't we plan on getting it in for this lifecycle release? The mismatched cache image OSes?
B
E
B
E
It's a little intertwined, but yeah, for the most part. The second imgutil PR is not strictly necessary, but it enables test coverage back in the lifecycle for the bug.
E
A
B
B
I don't think we had an environment that we could use to verify that the keychain is being used, and we were kind of wondering if we need to. Yeah, I don't know.
D
Yeah, how much are we testing GGCR? Like, do we trust that adding the keychain will do whatever GGCR's keychain is supposed to do? We're just sort of using its default constructor; there are really no inputs. So how do we want to test this, I guess?
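For context, what "the keychain" buys is credential resolution. The config-file half of that can be sketched with the standard library; this is an illustrative stand-in, not GGCR's actual API (`credentialFor` and the inline JSON are invented for the example):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// dockerConfig mirrors the "auths" section of ~/.docker/config.json.
type dockerConfig struct {
	Auths map[string]struct {
		Auth string `json:"auth"` // base64("user:password")
	} `json:"auths"`
}

// credentialFor is a hypothetical helper: it resolves a registry's
// credentials from raw config.json bytes, the way a default keychain
// consults the Docker config file before falling back to anonymous auth.
func credentialFor(registry string, raw []byte) (user, pass string, ok bool) {
	var cfg dockerConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return "", "", false
	}
	entry, found := cfg.Auths[registry]
	if !found {
		return "", "", false
	}
	dec, err := base64.StdEncoding.DecodeString(entry.Auth)
	if err != nil {
		return "", "", false
	}
	parts := strings.SplitN(string(dec), ":", 2)
	if len(parts) != 2 {
		return "", "", false
	}
	return parts[0], parts[1], true
}

func main() {
	raw := []byte(`{"auths":{"registry.example.com":{"auth":"YWxpY2U6czNjcmV0"}}}`)
	u, p, ok := credentialFor("registry.example.com", raw)
	fmt.Println(u, p, ok) // alice s3cret true
}
```

GGCR's real default keychain additionally falls back to anonymous auth and, with the cloud helpers loaded, to provider-specific credential sources, which is the part under discussion here.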
C
Given that it is GGCR functionality, I definitely don't think we need an automated test. If you wanted to do a manual test, I think the project has a GCP account. If you ran Tekton there, you could probably set up a situation like this; Javier has been working a lot on Tekton, so maybe reach out to him and work with him. It wouldn't be too much effort. But I'm okay trusting it, as long as it doesn't break anything else.
G
I'm wondering, for this particular issue: does it have to be on AWS, just based on this one line here, or could it be on any of them? If so, I have some test clusters; maybe I could have a hand in acceptance.
D
Yeah, it could be on any of the major cloud providers; it has AWS, Google, and Azure loaded by default as keychain credential providers, basically. The other thing I would expect to see that would maybe be problematic would be...
D
...things like what I found here: the keychain depends on a couple of different libraries, one per provider, built by other people, and the AWS one, for instance, used klog, which then threw a log, something about deprecation in my cluster, and some AWS something-or-another. So I had to set the logger to discard it. But the keychain library doesn't use klog itself; it's used by the implementation of the AWS credential provider, so the Google one may use a different logger which we might have to...
D
A
C
A
D
Maybe I'll tag both of y'all on that PR to sort of get a plan forward, because I don't currently have a Google or Azure environment.
B
I think that's all for release planning. Oh, I have one other thing, real quick. I noticed that actually many of the issues that we have closed, and are currently working on, are still pending the final API spec, and I know we kind of said we wouldn't ship a lifecycle until the spec had been finalized at the same time.
B
So I'm wondering if there's any way we can expedite review and approval of the spec stuff. I believe we have a PR for everything, except that, to Yael's point, we might make some changes to the opt-in layer caching one, but everything else should be an open PR at the moment.
B
All right, I think that's really it for release planning. The next standing item is needs-discussion, and I believe there's nothing there, so we can move on to our other standing item: RFCs that relate to the implementation team. We have one, which came up in the working group yesterday, for process-specific working dir.
B
I was thinking about this. I remember, Emily, you had raised some concerns around profile.d and exec.d needing a consistent directory, and I raised that in the working group. But then afterwards I started thinking: you can even have process-specific exec.d, right? Process-specific profile.d. So that starts to feel even more complicated, and I don't know if that's something we need to look at more closely.
C
Yeah, I'm worried that if you're using a bash process, this might not work. It's fairly straightforward for a direct process, because the only thing we execute in advance of the process is exec.d, and we start a separate process for each of those, collect the results, and use them to set the environment in the final process. So we could run each of those in whatever working dir the buildpack wanted, and one buildpack changing a process's dir wouldn't affect other ones. But that's not the way bash profiles run.
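A rough stdlib sketch of that direct-process path, where each exec.d-style helper runs as its own process with its own working dir (`runExecD` is a made-up helper; the real lifecycle also reads structured output over fd 3, which is omitted here):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runExecD is a simplified stand-in: it runs one exec.d-style helper in
// a given working directory and returns its output. Because each helper
// is its own child process, the working dir set here cannot leak into
// other helpers or into the final launched process.
func runExecD(workingDir, program string, args ...string) (string, error) {
	cmd := exec.Command(program, args...)
	cmd.Dir = workingDir // per-process working dir, as the RFC discusses
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Demonstrate isolation: this helper sees /tmp as its cwd,
	// without the parent process ever changing directory.
	out, err := runExecD("/tmp", "sh", "-c", "pwd")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

Because the working dir is set per child process, one buildpack's choice can't affect another helper, which is what makes the direct case straightforward compared with sourced bash profiles.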
C
B
It was interesting, because Forrest had mentioned that one of the motivators for having this feature is that right now they are doing a cd into the directory for their start command, and he said, you know, this is a limitation if your stack doesn't have a shell. Well, in that case you're going to use exec.d anyway, so maybe we could scope it to just that. I don't know if that makes sense.
B
C
Well, I just have to run right now; sorry, I'll see you all later. And yeah, I'm sorry we didn't get to your issue, but talk to me this afternoon.
B
F
Almost got there; it was so close. I can bring this up in this forum. I had just wanted Emily to be in this conversation, because she put up the first PR, but I would love to get others' ideas and comments about my thoughts. Natalie, do you want to open it up, or do you want me to share my screen?
F
Okay, so I'm working on the RFC that relates to the opt-in layer caching PR. We talked in this forum a few weeks ago about the three flags, launch, build, and cache, and all the permutations that can exist with the three flags, and I reviewed her PR, and I feel that there are some edge cases that...
F
...do not appear here. I think that the spec should be very clear and should contain even the edge cases, cases that are not actually that edge, but just a few cases that do not appear here. So if we are looking at the launch layers, the build layers, and the cache layers, then maybe I can pull up the table that we created after the discussion that we had a few weeks ago.
F
So the first thought that I had was that under the launch layers here, I would explain everything that applies when launch equals true. It won't be that easy to write, because again, it really depends on the other two flags.
F
I
mean
the
best
thing
to
do,
but
this
is
not
the
format
that
we're
using
so
anyway,
I
feel
just
a
little
bit
stuck
because
what
appears
in
the
existing
pr
is
not
clear
enough.
I
mean,
I
think
it's
not
clear
enough,
but
I'm
not
sure
how
to
change
it.
So
it
would
be
clear
enough.
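For reference, the three flags under discussion live in each layer's TOML; one permutation might look like this (the values are illustrative, not a recommendation):

```toml
# <layers>/<layer-name>.toml, flags for one layer (illustrative values)
launch = true   # layer is included in the final app image
build = false   # layer is not available to subsequent buildpacks at build
cache = true    # layer contents are persisted to the cache
```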
D
A
D
D
...to have, and it lets you still kind of extend it with some caveats, with asterisks and footnotes if you need to. So I feel like it's probably the best way to visualize it, for newcomers for sure, and even for me, because I still don't understand the chart all the way, to be honest; I'll go re-read it.
F
So it means, I mean, you're talking about just changing the current format, just to have a table? We'll probably change the table a little bit; not that it's not good, it's great, but you know, we'll make some small changes. But just put it here, right? This is what you're talking about.
F
B
E
Mostly just a plus-one. Also, the platform spec has a bunch of tables in it, and I always find those really useful to read from.
F
D
A
D
D
Okay, it looks like he was asking about the run-analyze-before-detect change, so maybe he was just checking in on that, but we sort of went over that earlier. So maybe we addressed his concerns about getting it unblocked with the spec stuff. All right, I'm going to have to bounce, so I'm going to leave, and I'll see you all next time.
B
E
E
You'd have to put in a proposal or something that would get accepted.
B
E
Mostly I'm just super interested in how to do it for Windows, but I might be the only one.
B
All right, I guess we can move on to retry fetching images. Actually, Micah, since you're here, maybe you have some thoughts on this; then I can share my screen. Dan isn't here, but he very bravely took on this issue. I opened it to look into why, when you're running a bunch of buildpack builds in parallel, you sometimes get EOF errors, and he dug into it, put his findings here, and is kind of unsure about how to proceed.
B
So I think we're sort of in the help-wanted phase, wanting people with experience with those issues to opine.
A
E
Yeah, Dan had reached out about testing strategies for it, and I saw that there was some low-level stuff you could do to reproduce this kind of situation, like making every fourth connection fail, or something like that. But I couldn't weigh in too much on the proposed solution; I'll have to think about it, though, and comment.
G
I do think his first comment there, about the easiest solution being a little bit gross, is, I don't know, a subjective mischaracterization. I think that is the easiest solution, and I think that is the solution people would opt for a lot of the time, just because it covers a lot of things, right? Not just this EOF error, but other networking errors that could possibly happen. Those are just my thoughts there.
E
G
Maybe I'll put that in a comment; maybe that's a better place for this kind of thing. And I'll point to other examples, in other code bases, where that sort of method is employed. Again, it's not just because of the EOF errors; with networking you can have plenty of other random types of errors, and you sort of want a catch-all. That's the reason. Sorry, I'm explaining it out loud because I was trying to get my thoughts on it right now.
B
A
B
A
Yeah, and I can share my screen for that, I suppose. I'll just open up the PR.
A
A
E
Sorry for the black and white; I'm trying to fix that right now. The gist of the changes is to make imgutil support non-localhost scenarios for the registry helper. I don't know if y'all have used the registry helper too much. I got into the guts of it, and for some reason I feel like I've touched it a lot over the past year or so. It's this one.
E
This is my branch version of it, but this one in particular is used both in the imgutil tests and in the lifecycle tests. You mainly call NewDockerRegistry (this is the new signature for it): you make a new registry instance, then you start it and then you stop it, and in the meantime, while it's running, you can query the registry instance it returns for the port, and then write images to that port. By default there's no authentication on it.
E
E
So
some
changes
were
made
to
it
before
to
make
a
different
constructor
that
automatically
puts
authentication
on
it
and
it
sets
these
environment
variables
inside
of
the
registry
container
image
to
enable
the
authentication
tells
it
to
use
that
written
and
password
and
stuff
so.
But
the
changes
that
I'm
trying
to
do
here
is
to
make
it
work
more
than
just
on
localhost
to
open
it
up.
So
that
say,
for
instance,
our
analyzer
tests,
which
are.
E
...create other containers, run analyzer in its own container, and then have analyzer reach out and talk to the registry container. Right now it does that through localhost; both those connections use network host, so they think they're on the same localhost, and so when analyzer is talking to the registry, it thinks it's on a localhost port. Like most things on Windows, that cool functionality doesn't quite work, so for Windows...
E
...the analyzer container needs to reach over through a real network and then talk to that registry container. Technically, analyzer running in the container can talk to its own host's listening port, and so a lot of the changes are to make that scenario work, where a container can talk to another container, but through a port on the host.
E
So the bit of change that lets that happen is this DockerHostname function. When you do registry Start, the container that it creates won't assume it's localhost if any of these conditions are met. If you have a DOCKER_HOST environment variable set, it assumes the registry is going to be at the IP address of that Docker host.
E
If the host.docker.internal name resolves, then it uses that one. This next one is a new rule that deviates a little bit from the original implementation: it will query the daemon (this is assuming it has access to the daemon) and look at its insecure-registries entries, looking for any with a /32, meaning it's an exact IP address.
E
It will use that one, and the logic there is that in order to have an insecure registry instance, your daemon needs to be okay with writing to it. And so, if you went through the hassle of putting that entry in your insecure-registries for your daemon, then that's very likely to be the daemon's IP.
E
So it's a little bit of a trick where you can let the daemon tell you what its IP is, without a DOCKER_HOST and without a host.docker.internal; there's not really another easy way to look it up through the daemon's API that I could find. This is maybe a little tricky, but it kills two birds with one stone: if you set up your daemon right, then it knows where to look for that registry and where it's going to be listening.
E
If all those fail, fall back to localhost, and this is what would happen on Linux if you're using --network host. As long as none of the rest of these are set, and I'd say for the Linux default, the CI default, probably all of your workstations' defaults, none of these will be set, then it'll work for that network-host scenario. Some of those might be assumptions; if you all feel like any of those rules are not accurate, feel free to say so.
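Putting the four rules together, the resolution order might look roughly like this. The function and parameter names here are guesses at the shape, not imgutil's actual code, and the daemon query is represented by passing the insecure-registries list in rather than calling the Docker API:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
	"os"
)

// exactIP returns the first insecure-registries entry whose CIDR mask is
// a /32, i.e. an exact IP address, which per the discussion is very
// likely to be the daemon's own IP.
func exactIP(entries []string) (string, bool) {
	for _, entry := range entries {
		ip, ipnet, err := net.ParseCIDR(entry)
		if err != nil {
			continue
		}
		if ones, bits := ipnet.Mask.Size(); ones == bits {
			return ip.String(), true
		}
	}
	return "", false
}

// dockerHostname sketches the resolution order described above:
//  1. DOCKER_HOST env var set      -> use that host's address
//  2. host.docker.internal resolves -> use it
//  3. a /32 insecure-registries entry -> that exact IP
//  4. otherwise                      -> localhost (the --network host case)
func dockerHostname(insecureRegistries []string) string {
	if h := os.Getenv("DOCKER_HOST"); h != "" {
		if u, err := url.Parse(h); err == nil && u.Hostname() != "" {
			return u.Hostname()
		}
	}
	if addrs, err := net.LookupHost("host.docker.internal"); err == nil && len(addrs) > 0 {
		return "host.docker.internal"
	}
	if ip, ok := exactIP(insecureRegistries); ok {
		return ip
	}
	return "localhost"
}

func main() {
	// Only the /32 scan is exercised here, since the first two rules
	// depend on the environment the code runs in.
	ip, ok := exactIP([]string{"0.0.0.0/0", "192.168.1.5/32"})
	fmt.Println(ip, ok) // 192.168.1.5 true
	_ = dockerHostname
}
```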
G
Yeah, I do have some thoughts around the third one for sure, the exact IP match. I guess the thing is: you're making these rules based on conventions, and the other three make sense. An environment variable, DOCKER_HOST, this host.docker.internal alias, those make sense, right? But with this exact-IP one we're sort of assuming user behavior, and it's not clear that it's a convention that people put a /32 in their insecure-registries. You could put all kinds of entries; you could put 0.0.0.0/0.
E
No, I think that's right on the money; it definitely feels sort of surprising. For me, it was also not the original thing that I went with. I just happened to notice, as I was trying to set things up over here in lifecycle, on the corresponding draft PR that will use this, that I was adding some duplicate, or what felt like duplicate, code for the GitHub Actions workflow.
E
E
So
the
way
that
we're
doing
that
over
here
is
looking
at
the
ip
address
of
the
of
the
host
we're
trying
to
do
this
most
secure
version
of
that
which
is
look
up
the
ip
address
of
the
host.
So
not
everything
can
write
or
not
everything.
Can
you
use
the
demon
to
write
to
an
insecure
registry
somewhere?
It's
like
just
pick
this
very,
very
specific
ip,
which
is
write
to
your
own
right
to
registries.
E
On this exact machine. So we look it up, we put in a /32, and then what I was about to do was add the code to trigger this host.docker.internal rule.
A
E
It was back here. I was trying to write the code to trigger this host.docker.internal rule, which would be: define host.docker.internal on this Windows machine. But then, as I was doing that, I realized that it's writing the exact same value, with the exact same intent, in two different places.
E
So you're totally right: it should be documented, and it's also not intuitive; it could definitely be overly clever. But it's nice in that it is a single source of truth, if you follow this pattern, and it's as secure as we can get away with. Technically, if you did want to have a wide-open insecure-registry entry, which technically is the case on my workstation right now, you could do 0.0.0.0/0 and have a host.docker.internal entry.
E
G
G
I don't know about other people who are going to be consuming this and running these tests. I wouldn't want code that fits one person very well; that's what I feel like is happening right here. You know, you have workarounds to get around this, and maybe it sucks for you, right, but the opposite...
E
E
Yeah, I think that makes sense. Now, the catch is: if we don't do this, we can't run tests on Windows, and that feels worse than the tricky bit of the code. It's definitely a cost/value trade-off, and my feeling is that the most secure way to allow us to run the Windows tests is to have a very tight and restrictive insecure-host allowance. That's the one intractable bit, I think, so making it as specific as you possibly can...
E
...feels like the easiest one to justify, in terms of: jump through this extra weird hoop for Windows, but don't make your machine insecure while you do it. I guess what we could potentially do, if we wanted to narrow it down and remove some complexity out of here, is remove some of these other cases, maybe remove host.docker.internal. Technically we could have it not even try to read from the daemon host, and we could just say, to your point: don't configure it for Micah's machine; configure it one way that we support, and document the heck out of it. For CI, this feels like the easiest rule to write and the least insecure way that we can leave our CI runners up there. So I would also be fine with removing all the other cases except for this one.
G
Like I said, you have good points, and I really think there's a cost/value thing being discussed here, and I know you have a hard blocker on how you run your Windows tests. I've said my piece; I would like other people to weigh in on it, especially people who deal with this Go code a little bit more, I feel like.
E
E
Right, but there's a point at which I'll probably just remove these PRs and not have the tests run on lifecycle, so I don't know what the right trade-off is for time investment: come up with a way to do it, versus have the acceptance tests not run for Windows.
A
E
I mean, well, since we're all VMware employees here: our team is dissolving, and I don't know what team I'm going to go into next. I'm sorry, I realize we're recording this too; I'll stop talking. Apart from team dynamics, there are pressing forces outside of this, and I would like to get something up there, so yeah, I'll stop talking. There's one other aspect to this that might be interesting.
B
For what it's worth, I feel good about these PRs. There was some part I didn't understand, which I hope to, but in general I think this is a good approach.
E
Yeah, I wonder if this would be a good demo topic or something like that. I do really want the experience of running the tests against Windows to be as close as possible to Linux, so I would like to show how close they are.