From YouTube: kubeadm office hours 2019 06 12
A: Okay, hello everybody. This is the SIG Cluster Lifecycle kubeadm office hours; today is the 12th of June 2019. It's pretty close to the release of 1.15, so we basically have some topics to discuss related to the release. Other than that, if you have any agenda topics, please add them to the agenda; I'm going to post the link in chat.
A: The docs are in a pretty good state. Clearly, the docs are in a pretty good state; all the PRs are merged. I'm keeping the issue in the kubernetes/kubeadm repo open until we are ready with everything there. Also, I wanted to say that promoting HA to beta is in a good state, with the minor exception of concurrent join of control planes.
A
There
is
this
blackout
and
basically,
if
another
member
tries
to
join
at
the
same
time,
we
basically
have
a
problem
and
errors
are
thrown
and
to
my
understanding,
the
or
the
current
war
crowd
is
fully
tried,
and
we
got
some
confirmation
from
Jason
that
that
this
might
be
one
of
the
good
options,
but
we
are
still
waiting
from
there
for
a
response
from
the
Edgehill
imitators
to
see
what
they.
What
comments
they
have
and
later
in
the
agenda
I
have
a
basically
agenda
item
for
this,
so
we
can
discuss
it
again.
A: The test signal is in a good state. Basically, most of our tests are solid green, with the exception of the upgrade tests, which are kind of flaky; a PR for that was sent today, and hopefully it's going to solve the flakes. With regards to the release: later we have to evaluate what is going on, why we are having these flakes. Rafa sent the PR for external etcd; this, by the way, is currently failing.
A: Only on the master job, though. I haven't looked exactly at why this is happening. It is probably something related to setup and not to kubernetes itself; it's probably something related to setup in kinder, but it's not release blocking. We just have to look at it when we can. Something else...
A: So the schedule for the release is: if we have PRs that have to merge into master today, and we want them to also be fast-forwarded into the 1.15 branch, we have to do it today. Otherwise we're going to have to cherry-pick, and the cherry-pick deadline is the 13th of June, which is this Thursday. Another PSA here is that the release of 1.15 is going to be on Monday the 17th.
A: So the next agenda item is about the etcd blackout, and we could discuss this, preferably with Fabrizio. Last night I spent a lot of time basically firing up HA kubernetes clusters, one at a time, and looking at potential errors, and I found out that we can flake with the blackout scenario I explained earlier. I also implemented some patches in kubernetes, for kubeadm in k/k I mean.
A: Basically, one of the patches pretty much adds an exponential backoff for the MemberAdd function. The MemberAdd function is a function that we have in the etcd client in kubeadm, where we add a new member. If you think about this, when you add a member the blackout happens: the new member is joining, you try to add a new member, but there is this error being thrown. I think the most consistent error I saw was the "etcd cluster is unhealthy" one, and my retry patches were able to solve this; I was able to get concurrently joining control planes to work. I'm also seeing some flakes with regard to joining worker nodes, and I think this is still related to the same etcd problem, because I don't have another explanation for it, but the error was different there: it was that we fail to basically patch a ConfigMap when we join a worker node, and I'm...
B: I think on this cycle we have to take all this, say, the etcd blackout. I see different things here, because one is when I add the member to the cluster, and the other is when we actually write the static manifests, so etcd really starts and it starts to sync. What happens if we are joining two control planes? They both see the cluster as healthy.
B: Each one adds its member and then both start. I guess that's fine, but I would like to double-check that. And as for the worker, I have problems seeing why the worker join problems have anything to do with etcd. Does it happen if etcd has been up for some time, or does it happen always when the worker joins right afterwards? Yeah.
A: The flakes were very rare in this regard. With the workers, basically the API operation that fails is in the idempotency code in kubeadm, and it's PatchNode; it's, you know, a basic API operation, and I was able to catch it only a couple of times. What I created is a bash script that creates clusters, tears them down, creates clusters again, and then I look at the logs, and this only happened a couple of times, and I'm not convinced it's...
A: ...even etcd related. Maybe it's again related to, I don't know, maybe again the load balancer / ConfigMap problem manifesting in my tests now. But what I'm going to do is probably write a small guide on how to reproduce this in kinder. The only problem is that I hope people have more powerful machines, because three control plane nodes require a lot of RAM; I have a VM that is 16 gigs of RAM here. So yeah, with kinder some of these problems are reproducible.
B: Just one small comment: one thing that we have to agree upon is to what extent we want to have this retry logic basically everywhere we are approaching the API server. To what extent does kubeadm have to retry everything? I get that other components that have control loops are just going to retry, because that's how they work, but kubeadm is a one-shot application. I mean, to what extent does it make sense for us to retry?
B: How long should we retry? Because failures also should be pretty fast, right? So for me this also boils down to: should we assume that the load balancer is well configured, so we are not going to reach an API server that is not healthy yet, for example? This all, I mean, defines how complex our code base will be in this case.
D: The reason why I commented on your patch is that there is already a bunch of logic that exists inside of the client library itself for doing automatic exponential decay in retries and backoffs. So if you actually do a test with just the client in a very speedy environment, you will see it. I helped write this code like four years ago, so it's actually deep inside the client; it will automatically decay and try again on certain conditions.
D: That's the key, though: certain conditions, right? It will retry if you get certain error codes returning from the HTTP server, like "too busy". But if you get other types of errors, which is what I was trying to key in on, those other types of errors are not retried; they're percolated back up to the client. So if we're getting other errors, such as some weird etcd artifact percolating back up the chain, then the behavior would have to be to retry higher in the stack.
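The distinction being described, a client that retries a few well-known HTTP statuses internally and surfaces everything else to the caller, can be sketched like this; the exact status set below is illustrative, not client-go's actual list:

```go
package main

import (
	"fmt"
	"net/http"
)

// retriable reports whether an HTTP status is one the client would retry
// internally (the "certain conditions" above); everything else percolates
// back to the caller, which must decide whether to retry higher in the stack.
func retriable(status int) bool {
	switch status {
	case http.StatusTooManyRequests, // 429: server explicitly asked us to back off
		http.StatusServiceUnavailable, // 503
		http.StatusGatewayTimeout: // 504
		return true
	}
	return false
}

func main() {
	fmt.Println(retriable(http.StatusTooManyRequests)) // true
	fmt.Println(retriable(http.StatusUnauthorized))    // false: treated as final, fail fast
}
```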
B: Yeah, in principle, yeah, but there is some context here. So we will retry on the unauthorized responses from the API server. Jordan said: why don't we do this for literally any kind of error? But in any case, even if we only did this for unauthorized errors, it seems really weird to me to retry on an unauthorized error, right? Maybe you can retry on some other kind of error, but unauthorized is like a final state, right?
B: It's not like if I retry one more time it's going to succeed. And this all boils down again to the load balancer not being properly configured, so it's not performing health checks on the new API servers that are coming up; the API server, when it's initializing, can answer you with an unauthorized response. So here's...
D
Where
Jordan
and
I
fundamentally
disagree
and
we've
talked
about
this
at
length,
and
this
is
a
principle
and
property
of
distributed
system
right
like
fail-fast
right,
if
you
get
an
error,
that
percolates
up
is
not
the
API
server,
returning
a
condition
that
this
one
called
returning
condition
that
is
well-defined
and
do
that
you
should
just
bail.
Failing,
is
better
than
trying
to
retry,
because
you
might
be
masking
problem
higher
to
stack
the
the
specific
condition
of
your
trying
in
a
joint
I'm
cool
with
that
that
that
seems
like
a
very
specific
use
case
scenario.
B: Absolutely agree on that. So the thing is that we wanted to fix this for this release, for 1.15, and the commit has the specific comment about "please revert this". So we should find the root cause of this and, you know, either have a statement that you have to properly configure the load balancer, or get to the root cause of the issue and remove this workaround completely.
A: I think some of these errors that come from the API server are super confusing, and it might be very difficult for us to find the exact causes without people from API machinery. Also, I'm very confused about the RBAC error specifically; it doesn't make any sense that we are getting unauthorized errors when we join.
D: It depends on how fast you create things internally inside the API server. There is a separate section of code that actually initializes the default RBAC rules, and if you do things really, really fast, like super fast, as soon as you create... even before the API server is done spinning up, the workers are trying to join. The API server goes through, it's not a finite state machine, but it is a set of internal implicit states where it's not fully loaded yet.
A: Depending on the state, it just throws the current error of that state when we try to access it. It's very, very confusing, and I don't know how we should implement this logic of concurrently joining many workers and many control planes while handling all the possibilities of the different errors that we might get. It's super difficult to do well.
D: You're doing it in a test environment where it's very fast and it's on the same machine, right? So that is inherently different than most deployments. The multi-master join is something that we want to enable; the multi-worker all hitting at the same time is something that's been pretty well vetted. But when you're doing them together, that's not a scenario that's really been tested heavily. It's always usually been like, yeah...
D: It's always been a slow roll, right? You set up your masters and then you blast out your workers. So the "set up the control planes, then blast out the workers" path has been fully tested at scale; we've done that a number of times. Adding everything together at once, that's not really been tested, and adding multiple control planes really fast, that's not been something we've really tried to strive for in the past. But I do think it's a good user story, because we want to have a more declarative model.
A: Yeah. So currently, I was very happy to get three control planes joining concurrently, so I think we should hold on to this PR and basically experiment and try to find the actual problem, the actual problems, but I don't think we have other solutions for now. So I think we should merge it in, have this workaround for a while, potentially enable concurrent join of control planes, and go back to this eventually and try to find the real problems. What do you think?
B: So, yes, no blockers coming from my side. And this is my two cents: I do not know to what extent we can fix this at the kubeadm level; we have to see what's the root cause, or the root causes, for the different issues. But I honestly don't know: if we have components like the load balancer that are sort of out of our control, I don't see to what extent we can really fix this other than retrying basically everywhere, and that is something I wouldn't like to see.
A: Yeah, definitely. I guess we also have to set up a multitude of load balancers, the popular ones, and see what behaviors they have. Specifically in kind, the transition from HAProxy to nginx definitely broke something, and nobody understands how and why this happened. We don't have to have end-to-end signal for different load balancers, but we just have to sit down and test some of them and figure out what's going on.
A: All right, so this is something that I wanted to discuss with Rosti, I guess, and Tim, if he wants to participate. Basically, the SIG Architecture group, which is responsible for code organization, really wants to move kubeadm out of k/k and move it to staging, which is kind of the same thing, because kubeadm would still live in staging. But anyway, I'm seeing people request that we move the public kubeadm API types to a separate vendorable repository.
A: Moving only the types is something that we can do as a compromise to unblock some people. But a problem I saw already is that we import constants from the cmd/kubeadm packages into, basically, the API package, and this is something that we have to stop doing, because this is going to block the vendoring of the public APIs. So, Tim, do you think that we can have this as a compromise, if we don't comply with the full demands?
D: That's a reasonable thing. I think that's an actual importable thing; most everything else inside kubeadm is not importable, but the types should be. I'm okay with the types moving over and then setting import-boss restrictions to make sure we do the surgical dependency removal that's needed.
A: I have a collection of questions related to staging and the publishing bot, but with regards to moving only the types: I am assuming that we have to move them to k/kubeadm somehow, but there is already content in there that we don't want to move out. So maybe the publishing bot is going to nuke our content; I have no idea what's going to happen, so I have questions for them. Yeah.
D: So even then, I think the import-boss restrictions alone would help to prevent the current problem, because import from location A versus location B is irrelevant. I think the dependency graph problems are real, and those are important: if you import the types, you shouldn't drag in portions of k/k that are not related to the well-defined types.
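For reference, import-boss reads `.import-restrictions` files placed next to a package; a hedged sketch of a rule that would keep `cmd/kubeadm` code out of the API package might look roughly like this (the schema and paths are from memory and should be checked against the tool's documentation):

```json
{
  "Rules": [
    {
      "SelectorRegexp": "k8s[.]io/kubernetes",
      "ForbiddenPrefixes": [
        "k8s.io/kubernetes/cmd/kubeadm/app"
      ]
    }
  ]
}
```

With a file like this in place, any import of the forbidden prefix from the restricted package fails verification, which is the "surgical dep removal" guardrail being discussed.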
A: So that's, I guess... Rosti, I would like your comment in terms of the import stuff we are doing currently. We are importing stuff from basically cmd/kubeadm inside the API package, which I think is wrong. The API package itself should be almost final with respect to kubeadm itself. What do you think?
A: Yeah, but I think that change is good. We just have to wait; I mean, after tomorrow we're going to open the master branch again. Ideally, I think more people should not do this, because I don't think we ever had this use case before, but the overall consensus here is that the PR is good, I think. Okay, sure, good.
F: That's fine with me. The other issue, the next one that I listed, is one that I'm currently working on. So there's been good progress made on the dual-stack front for kubernetes, and, you know, I would love to contribute the changes to kubeadm. I have a local branch that's sort of getting to the point of at least parsing multiple IP addresses in the various configuration flags; there are some minor changes that need to be made.
A: What we can do, I think, is create a ticket in the kubeadm repo, and we can track the progress there. I haven't looked at the KEP and the latest details there, but I know a developer that is currently working on IPv6 and dual-stack support in the kind project, and if you don't know, kind is already using kubeadm, and this particular developer is integrating basically the IPv6 test signal.
A: Okay, so basically you can send the PR for the parsing capabilities and we can look at it there. We haven't looked at the details of what has to change for IPv6, so you might see a lot of comments there, just so you know. No worries. Okay, and when you create the PR, ping us in the SIG Cluster Lifecycle channel, and I can also bring in this developer who is working on kind, and we can...
H: Well, not just mine; for those who don't know yet, the work was started some time ago, and there is a KEP about it. It's about bringing structured output, like in YAML, JSON and even Go template format, to kubeadm, and I would like to share... So I came up with this proof-of-concept PR where I defined a new API group. It was one of the proposed ways to solve this, and I would like to share the pluses and minuses.
H: The not-so-good thing is the duplication, because I played with this "token list" mode and I ended up duplicating the BootstrapToken structure almost 100%, just because, first of all, I hesitated to change the existing API, and second, the BootstrapToken type in the kubeadm API is not a runtime object... yeah, exactly, so just because of these two lines.
H: The structure of this type is kind of separated, so they will probably end up converting it into the kubeadm type, to maybe pass it through kubeadm or just call some kubeadm functions, whatever, because most of them operate with either the internal kubeadm API or the public one, and that's not good from my point of view. So I would like to hear your opinions, guys.
H: If they want, for example, to call something like "join the cluster", they will end up... that functionality, I don't know for sure, but most probably it would require a BootstrapToken from the kubeadm API group, and what they would have is the token in the format from the output API group. So they will probably end up converting one to the other before calling something like "join the cluster", or whatever they want to do.
A
So
I
need
to
go
back
to
the
motivation
to
copy
the
API
object
into
that's.
Basically,
the
problem
here
is
that
we
have
to
have
it
under
output.
That's
the
problem,
yes
and
yeah,
but
what
what
happens
if
we
graduate
the
bootstrap
talking
API
from
qadian?
That
is
the
official
currently
in
two
separate
repo
as
a
runtime
object
and
what
happens
if
we
output
to
that
directly.
H: That would be better, but I must say that for some outputs, like outputting, for example, bootstrap tokens, we operate with this structure, but, for example, to output the list of images, we don't. So we anyway end up creating something like a versioned structure. And should it be in this output group, or should we just forget about this output group and concentrate on the kubeadm group of types moved to staging? Do you understand what I mean? Yes.
A: So I think we should basically still try to keep the separate API "output" group. In there we can have the list of tokens as a runtime object, and kubeadm can still import this separate group to be able to output the structure, but also use it, potentially, one day when we have, you know, an actual kubeadm API with functions; I think we can still use it. Do you think there is a problem with that?
H
Well,
if
we
keep
this
API
group
and
we
will
have
everything
which
is
supposed
to
be
like
you
know,
subject
of
output
here
then
no
problem
if
we
end
up
off
like
using
Cuban
API
group
for
some
kind
of
outputs
and
this
a
group
for
another
kind
of
outputs,
then
Dedan-
that's
not
not
very
good
design.
From
my
point
of
view,
yeah.
H: How to do it? I mean, conversion... as this is an API group, it has an internal type and a public type, and there is this conversion boilerplate generated automatically, but I don't know how to generate this kind of conversions between two different API groups. I don't even know if it's possible, so that machinery probably works just with one group, but...
A: So, in terms of the output group, I think we should deal as little as possible with the API machinery stuff. We appreciate the versioning that it provides for us, but if we keep the conversions and stuff like that to a minimum, I think it's going to be better for us. It's just... I don't understand why we should convert to the internal type, because...
B: Well, for avoiding code duplication. The idea, at least for all internal APIs, or even if you import those APIs, is to use the internal types, I think. So you leave to the API machinery all the magic stuff about converting external to internal, and you always work with internal.
B: That's my understanding, and that's why this is the problem, right? Because if we are parsing the output type, and that's versioned, then we have the internal output type, but that is not the same type used in the functions, right? So we will have to convert somehow between these internal types.
D: We still need to get the externalized runtime object for bootstrap tokens taken out of the tree before we can really even do this; we're still blocked there. I don't know how we get unblocked; that's been sitting for a month. So we need to, like, put a LoJack on the API machinery folks and pin them down on where it should go, because I think that's probably a prerequisite for making some of the rest of this stuff go. Mm-hmm.
A
Also
low
repository
where
we
can
put
the
basically
the
poster
token
API
object,
because
apparently
the
coaster
boosts
up.
The
repository
does
not
accept
API
types.
For
some
reason,
I
haven't
had
a
chance
to
ask
why.
So
he
correctly
don't
even
have
a
location
for
the
post,
apocalyptic
I'm,
still
not
convinced
that
we
are
going
to
have
a
problem
with
the
commercials
because
we
can
convert
between
different
types,
but
the
separate
groups.
H: No, it's just because there are two almost identical structures. So if you get the JSON output, for example, and you unmarshal it, you have a structure of this type from the output API group. And then, if you want to use some kubeadm functions, kubeadm APIs, maybe in the future, even those most probably will work with kubeadm types from the other API group, and you will end up with some silly code that just creates an object of this...
A: We shouldn't duplicate the types at all; that's the main point. So if the BootstrapToken object is an exception, that's okay: we should leave it somewhere outside of kubeadm, and we can use it as a directly writable type. But the rest of the types, like the token list, I mean, like the image list, these should be only in output; I don't see why they should be in the kubeadm API group.
H
List
is
is
not
a
problem,
so
so
far,
I
just
I,
don't
know
how
how
many
like
this
kind
of
types?
If
it's
only
one
bootstrap
token,
then
no
problem
here,
especially
if,
if
there
is
a
plan
to
make
its
runtime
object,
then
it
can
be
just
reused
in
in
output
machinery.
But
if,
if
it's
more
than
one,
if
it's
like,
then
then,
then
we
some
problem
yeah.
A: Basically, this is supposedly the repository where we should put the actual BootstrapToken object, and I need to ask API machinery; the original creator of this readme did not respond to me. But I really suggest that you start with the other types like that, apart from the images, so that you can work it up. I...
H: And the API server address, yes. Like, two things I'm missing, and that's why I would propose not to, you know, mix the token output with the kubeadm join command output; there should be some other place for it. But the idea is that we can get it without running init; that's not a problem. We can get there, for example, the token with the longest TTL, and then those two parameters, the CA cert hash and the API server address, we can get at any time.
A: It's doable for the joining worker ones, but for control plane nodes you're still missing the certificate key, so currently only init has it. So basically, even if we have to create a separate output object specifically for init, I think it's fine, because it's, you know, a composite of a couple of potential commands. One of them already exists; that's "kubeadm token create --print-join-command". The...
A: You can do that, but this is probably going to be buffered, I think, because, you know, I don't think there's a way to do it; maybe there's a way, there were ideas about that. Maybe there's a way to hack around it, to print the output and write through a JSON at the same time. I think it's possible already if you overwrite the standard streams, but this is not recommended. But, you know, if we are able to see the output...