From YouTube: Kubernetes WG IoT Edge 20220907
Description
September 7, 2022 meeting of the CNCF IoT Edge Working Group. Discussed the edge native white paper draft, and the potential for joint activity with the group associated with Eclipse edge-related open source projects.
A
Okay, so welcome to the September 7 meeting of the Kubernetes, or rather the CNCF, IoT Edge Working Group. We pressed record a little late, so we missed a discussion on the potential of having joint meetings with the Eclipse edge group, but we're moving on now to discussing the edge native principles white paper. I think last time I kind of drove this discussion, but in the interest of getting some maybe diverse approaches to this, if there's a volunteer, I'll hand off driving this discussion to somebody else this meeting.
B
I didn't know if we wanted to spread it even further, but it's totally fine with me. Okay, great, I will share my screen then. Do you mind giving me host access as well?
B
I know, a lot of tabs there. Okay, I think we're there.
B
So in the past, what we've done when doing this is kind of look through comments and resolve them, and then continue where we left off as a group. I think we can continue with that flow and see if there have been any new ones.
B
I don't think we have any new threads. So with that being said, to recap where we left off last time: we were going through the principles themselves, starting off by narrowing down our definition and description of each principle, and we kind of bottlenecked at "management at scale," with a discussion around whether management at scale is already captured between "centrally observable" and "declarative orchestration." I believe we tabled the decision of whether to keep management at scale until we've fleshed out the other principles.
B
Please correct me if I'm wrong, anyone. And then we looked at centrally observable. So, with centrally observable, let's see if we finished that one off: at scale, vendor-agnostic monitoring techniques.
A
The part you highlighted in blue: I think there are a couple of components of observability that I normally see in the category. One is metrics, which would be the Prometheus we've got there, but the other is logs, and since it doesn't call out logs, I think maybe this could be made a little stronger here, in terms of, you know, if you want to go there and try to itemize the category of observability.
A
Fluentd can be used to transport logs, but I'd say it's a component. The first version of this was the ELK stack, where Logstash was the thing that captured the logs. With Kubernetes, Fluentd and Fluent Bit are lighter weight and a lot more popular, by my observation, but they fill a niche of transporting the logs; they don't do, say, search or analytics of the logs, or provide a UI for actually utilizing and taking maximum advantage of a search capability.
A
So it's a building block. I'd be hesitant to start calling out actual components; that could be a pretty long list. I have nothing against it, but I don't think we can possibly list everything out there in the category. So if we were to start, I'd say CNCF projects only that fall into the observability category. Yeah.
B
And I don't know if we need to be calling out projects in the description. I feel like the description is that one sentence that someone reads out of context and that lives forever, and these projects may not live forever; they may not be the established default for metrics or logging forever. So, like the example in this one, I think we were going to move this out into a kind of paragraph section about it later, if we have that.
A
Yeah, so getting back to observability, I think it's fair to call out that this involves metrics, logs, and there might even be some other categories. Kate, you're an authority on this more than I am, but, for example, making edge devices observable could arguably be part of this category. So the whole idea of device twins might actually, if you wanted to put it in a category we already have, maybe this is the one it belongs in, right? Or do you think it's maybe a different one?
B
I agree, but this once again gets to our question of the intersection between observation and management, which I think is an interesting part of whether we need that management section, because digital twins are both: they give you state and let you declare a state. So I think it's a little tricky, but I think it is fair to put that in observable. Like, I think there is a management component to observability, potentially. Okay, so how about if we...
A
You're right, let's call that out. I like that statement: observability is, you know, a building block or a precursor to management. And then let's call out something like: categories include metrics, logging, and the monitoring aspect of device twins.
B
And I think once again we should circle back to what is edge native about this, like, what do we have to add with this principle? Because I think that's really what to focus on in general, anytime we get to a long list: we're trying to help edge native developers.
A
I think what's different from traditional cloud native is that the pipeline of these things you're observing is tolerant of being interrupted. In other words, you've got queuing aspects here that you wouldn't typically find in a traditional cloud native, public cloud hosted data center situation. It's tolerant of low and intermittent bandwidth as the transport, so the observability is going to tend to use different transport mechanisms.
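The interruption-tolerant pipeline described above can be sketched as a store-and-forward buffer: observations queue locally while the uplink is down and are flushed when connectivity returns. This is a minimal illustration of the idea, not any specific project's API; the `send` callback and the queue bound are hypothetical.

```python
from collections import deque

class StoreAndForwardPipeline:
    """Buffers observations locally and flushes them when the
    intermittent uplink to the central collector is available."""

    def __init__(self, send, max_buffered=1000):
        self.send = send                          # ships one record upstream
        self.buffer = deque(maxlen=max_buffered)  # oldest records dropped first

    def observe(self, record):
        # Always enqueue; never block the edge workload on the network.
        self.buffer.append(record)

    def flush(self, link_up):
        """Call periodically; drains the queue only while the link is up."""
        sent = 0
        while link_up and self.buffer:
            try:
                self.send(self.buffer[0])
            except OSError:
                break            # uplink dropped mid-flush; retry next cycle
            self.buffer.popleft()
            sent += 1
        return sent
```

At a real edge site the `send` call would be something like an MQTT publish or a remote-write request; the key property is that observation and transmission are decoupled, so intermittent bandwidth only delays delivery rather than losing data outright.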
B
So it sounds like one thing we're trying to get to here is what is unique: the scale and variety of what you're observing is greater, and that therefore makes central observability more important, because there's so much more to capture.
A
Yeah, the other thing is, I think that because typically no person is there to take observations of certain forms of state, observability may be of more value, not less, when compared to the public cloud.
A
Or even have the option to, you know, like, hey, I haven't even heard from this thing. And, sadly enough, in an edge situation it could literally be that the device is gone, like somebody stole it, and that isn't going to be a common scenario, hopefully, in your public cloud data center. The other thing is that a lot of these observability things, logging in particular, have security issues, and it isn't uncommon for people to leave those relatively open in terms of transport (whether they should is debatable), feeling that if it's in the data center, it's a secure, trusted environment, so there aren't so many worries about it. But if you've ever had the experience of going to, say, Black Hat conferences, one of the first things somebody would do if they managed to land in a protected enclave is go try to find the log server, because you can go to one central point and get a lot of information. I mean, the key word here is "centrally," so this becomes a supremely valuable target, and I think in an edge scenario maybe you've got to up your game when you operate observability, to make sure that you're running it with all the maximum security switches flipped.
D
Maybe we can split the description into two. One is what's different: scale, lack of reliable connectivity, lack of manpower. And then, what it means: more need for centralized management, and a more reliable, or a different type of, transport. It could also have implications on the architecture; you might actually have distributed observability. If there are certain aspects that are real-time and critical, you might actually observe them at the edge and handle them at the edge, while...
D
This is a sort of concept that's kicked around in LFN: you have the concept of closed loops and open loops. Open loop means you observe them in a dashboard or a screen; closed loop means the observed metrics or logs or events trigger a management or orchestration action automatically. Of course, the open loop is only centralized, but the closed loop could be distributed, so you could have real-time observability and response to that.
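A closed loop of the kind described here can be sketched as a local watcher that turns an observation directly into a remediation action, with no round trip to a central dashboard. This is an illustrative sketch; the metric reader, the action, and the threshold are hypothetical stand-ins.

```python
def closed_loop_step(read_metric, act, threshold):
    """One iteration of a distributed closed loop: observe locally,
    and if the observation crosses the threshold, remediate locally.
    Returns a record of the action taken (or None), so a central
    open loop can still be told what happened after the fact."""
    value = read_metric()
    if value > threshold:
        act(value)                  # e.g. restart a workload, shed load
        return ("remediated", value)
    return None
```

In an open loop the same `value` would only be shipped to a central dashboard for a human to look at; the closed loop adds the automatic `act` step, which can run at the edge even while the uplink is down.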
D
Those observations can be handled at the edge or centrally; an open loop, of course, is central. So I think maybe that would be a useful way to deal with it: first describe what's different, or what the attributes are, and then what it implies from a technical point of view, and then we leave out specific projects, at least for now. Maybe we cover them in a later section or something.
B
Yeah, so I like that idea of focus: in this definition, try to name the attributes without it becoming an insanely long list, and then also try to summarize some of the actions that results in.
E
Hey Kate, the closed loop / open loop, would that be part of orchestration? It should be the next row down. So it's what was just said, the separation: at first we were talking about the observability, and now we get into, okay, you observe; what do you do? And that is the orchestration.
A
I don't think it is. You know, orchestration as typically treated in Kubernetes is the placement of your app workloads, and Kubernetes also has a concept called controllers, or operators, which are the control loops that operate on your declarations of desired intent, and those are considered to be separate from orchestration. So I think that the open and closed loop is more in the operator or controller category of Kubernetes, which was made to drive these loops. But Amar, I see you nodding your head, and you're the one who brought up the concept, so maybe you've got remarks.
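The controller pattern mentioned here, a control loop reconciling observed state toward declared intent, can be sketched in a few lines. This is a generic illustration of the Kubernetes-style reconcile idea, not the actual controller APIs; the state dictionaries and keys are hypothetical.

```python
def reconcile(desired, observed, apply_change):
    """One pass of a declarative control loop: compare the declared
    desired state against the observed state and apply the diff.
    Returns the list of changes made this pass."""
    changes = []
    for key, want in desired.items():
        if observed.get(key) != want:
            apply_change(key, want)   # drive the world toward intent
            changes.append((key, want))
    return changes
```

Run repeatedly, this converges the system toward the declaration regardless of how it drifted, which is what distinguishes the controller/operator loop from one-shot orchestration (placement).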
D
However, you can't put it in all three. If I had to say which it has the strongest affinity to, I think it's observability; I would probably put it there. An open loop is only observability; it doesn't really relate to management. I mean, human beings use it to then manage and orchestrate, but again, I'd probably put it in observability.
F
So, Prakash here. I would like to add that liveness of a particular CNF is important, whether it is alive. So observability contributing to local awareness of a CNF being live is important, because we generally check heartbeats, etc. So I would put that absolutely as ammunition for all components, either of control or monitoring; but, more importantly, for the survival of that CNF, the local context is very important.
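The heartbeat-based liveness check described here can be sketched as a local watchdog: each component reports a timestamp when it checks in, and anything not heard from within a timeout is no longer considered live. This is an illustrative sketch; the component names and timeout value are hypothetical.

```python
class HeartbeatMonitor:
    """Tracks last-seen heartbeat times and reports which components
    are still considered live, for local (edge-side) awareness."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_seen = {}

    def beat(self, component, now):
        # Called whenever a heartbeat arrives from a component.
        self.last_seen[component] = now

    def live(self, now):
        """Components heard from within the timeout window."""
        return {c for c, t in self.last_seen.items()
                if now - t <= self.timeout_s}
```

Because the monitor runs locally, the edge site keeps its awareness of CNF liveness even when the link to the central observability stack is down, which is the point being made above.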
A
Okay, I think we made good progress on this row. Kate, if you wouldn't mind, since you're editing, maybe just delete the Prometheus and Fluentd links right now, since we discussed that maybe we don't want to go into the details of specific projects here.
A
I think it's okay if we have the comment there, that anyone who wants to can come back in within the next week and make an attempt at cleaning this up along the lines of what was discussed here. I know I, for one, if I'm writing, need a little more think time and stuff, to go...
A
...put the things down, maybe even in a separate notepad, before I'm happy that it reads nicely, and I'm not prepared to come up with my best work on the fly during a Zoom meeting. Frederick, you're the author on this; what's your suggested technique for coming out with production-ready text here? Oh gosh, yeah, you brag about your book; we're gonna hold you to it, we're gonna expect a demonstration of your wizardry.
C
Gosh, yeah. The technique, I think, is essentially what you are doing, right? You put it in a verbose way, and then you try to make an actual sentence out of it, or, you know, something you would put on a slide, if it was the big point that you're trying to make about something. At least for me it worked pretty well as a technique, but of course this has a tendency to lead to loss of nuance, so to speak.
C
So you need to be careful about that, but I would really recommend: maybe keep your column with the full verbose version, and maybe create a third one in the table that would be the synthesized version, and try to actually synthesize, in one sentence or two, what's there.
E
And to that point, right, it's like capturing the net. It's almost like we have enough to write the paragraph, because writing a description for what's going to be a paragraph is going to be harder; you've got to summarize something that's not written. And we're inevitably kind of getting into details beyond the scope of the description, but we want to capture those, because that becomes part of the paper. Hopefully that made sense.
C
Yeah, or, in documents that we worked on in our group, what we did is we had a section of raw materials at the end, where all of those brainstorming sessions or notes or things like that that weren't ready for prime time were just accumulated, in case we wanted to refer to them. That's another way to do it, but then maybe you lose the link to the actual context in the document that you get with the comment.
B
I think the one benefit of trying to nail down a definition is that it's hard, and it might point out the fact that the category is too broad. I'm wondering, with centrally observable, whether there are two things going on.
B
We have observability, and then we also have monitoring, and I'm wondering if those should be separated, or if we need to get even broader and say, like, a central hub: a location for observation, monitoring, and state reconciliation. Because it's almost like we need to keep breaking it down smaller, or get bigger, is what I'm curious about. Yeah.
A
I think that some of the things thrown out here would lead me to believe that, instead of splitting this, we do need to keep it together, and a couple of aspects of that: one, we want to point out what's different here when it's destined for edge versus a centralized public cloud, traditional cloud native, and this aspect related to observability, the precarious nature of the transport of all of this stuff, is, I think, the key thing leading to the difference from traditional cloud native, and it spans all of these aspects of observability.
A
You know, logs, metrics, etc. So if we want to call out the differences between traditional and edge native, I think keeping it together is useful. The other aspect: you brought up this idea of, is it all about hoisting this up to a central location? But the idea that was brought up, of the open and closed loop, implies that in some scenarios it isn't all about central; you would want to have the closed-loop reaction to some of these. And maybe, if it isn't at the edge itself, it might be at a mezzanine tier that is below the top tier where the management takes place. In other words, maybe a physical site at an edge location has four independent devices, and either a regional or a localized place to host control loops, and you'd have a tiered architecture for these control loops.
B
I think that's a really good point: centrally observable is the most important part for edge native applications, because the orchestration and management and state reconciliation can happen in various parts and locations, but there needs to be some way to observe what's happening centrally. So I think it makes sense to call that out as the principle, and then put in that what we're observing is all of this that's happening.
A
The column itself, as opposed to the call: I think we need to call out the differences from traditional cloud native, and I'm trying, you know, just by the seat of my pants, to come up with what the crux of that difference is, and I think it's support for async data collection. Maybe somebody can come up with a better term for that, but the idea is that you can't expect that you've got synchronous capture of all this observability data.
D
I mean, I think the differences in my mind boil down to three, right: the scale; the transport, which is, you know, less reliable, or whatever we want to call it; and constrained manpower, or maybe in some cases lack of manpower. "Constrained" is good enough, I guess. I think those three; I mean, I don't know if there are any others.
A
Though don't use the term "manpower," because the project overall has a stated policy against it.
D
And that leads to security; async, which is what we were just talking about; and it leads to distributed and central architectures.
C
There's one thing that hasn't been highlighted. I'm not sure if it fits there, but typically, especially if you have a very important edge infrastructure, you will have some sort of out-of-band connectivity, to be able to troubleshoot outside the regular channel if something happens. And that's something that you don't typically see in cloud, right? So that's something that should be surfaced somewhere, actually.
A
With vSphere you would typically have things like, you know, on Dell hardware it's iDRAC, on HP it's iLO, where you can get to hardware out of band for management; IPMI, things like that. Now, the cloud providers have those; they just don't give them to you as a tenant. They do have out-of-band management of the servers and things that are going on in these public clouds; they just don't expose it, because of the security issues.
C
Yeah, for sure, but from an application developer perspective, well, it doesn't matter that they are there, since you do not have access to them, right? Yeah, I mean, even in...
C
And you want to take that into account, because even if that's not necessarily your device, you may have a need to, let's say, restart all of those nodes that would be on those locomotives, for whatever reason, right? Yeah.
B
Yeah, and I think that one fits under management, if we keep that principle.
C
Yeah, sorry, I brought you on a little detour there. Sorry about that.
B
Is this about the infrastructure and how we manage infrastructure, or is this about how we manage the application? Are those the same thing? Yeah, but when we talk about, like, node management, that gets to possibly a different category than talking about the application. I don't know, but the scope's hard to nail down there.
A
Yeah, I'm actually leaning now towards... we almost killed management, but I think maybe we keep it. But since, in the row below, we called out that observability is a precursor to management, we should probably move management one row below, if you can do that, you know, just so people read them in the preferred order.
A
You know, I've heard that there are some attempts by people like Amazon to maybe offer a paid service where you can hire a vendor to take care of that, but still, somebody is doing it; managing that infrastructure didn't magically disappear from the planet. You know, even with true virtualization, there are physical things that can break and need maintenance, and somebody's got to do it, and I think it's easier to ignore that in traditional cloud native than it is in edge native.
D
Good point, and I mean, we're seeing that in the Nephio community, where Nephio's goal is infrastructure and application; they use the term "deployment" instead of "orchestration": deployment and management of heterogeneous cloud/edge infrastructure and network services, and there's nothing preventing applications. So I think that's an important point, and even if your node itself is managed, let's say you take AWS Outposts or something else, where...
D
Somebody has to do that. So even if you say, "I don't want to deal with the BMC and the IPMI firmware and, you know, the BIOS and the operating system," there's still a lot of infra management that you, as the user, or a managed service provider on behalf of the user, has to do. So I think addressing infrastructure and applications is important, and, you know, the CaaS layer, the Kubernetes layer, and the various plugins we could lump with the infra.
A
Yeah, the other aspect of management, and I'm debating where this belongs, but a big problem at edge, is onboarding. Day two is where we've focused most of this discussion this morning so far, but the onboarding thing is perhaps a tough challenge too: FedEx delivered a box, you plug it in, you don't have any skilled staff there, yet you still need to bring this on the air so that it can be used as a trusted resource, even though you lack trusted people to trigger the process. And I think it falls under this management category, perhaps closer to infra than apps, because the onboarding is to get a platform ready that can host apps. But I think that device onboarding is very different in edge versus traditional cloud native, and it needs to fall into either some row of this table, or we're missing a row in the table, because it's a key aspect and it needs to have a home.
B
So we're getting some good ideation going here. I think Frederick's point of having a notes column might be useful. Thoughts on me adding a notes column, or using the same column and adding a notes section within it, like a notes line?
D
Maybe it's worth splitting, because then I think it allows separate treatment of infra and apps.
B
I think we went with applications to deliberately narrow scope, potentially, when we were first discussing this, to avoid discussing infrastructure concerns. So, with an eye on application developers deploying their apps on edge infrastructure: what do they need to keep in mind? But maybe that was too optimistic a view. Believing that the infrastructure and the application were decoupled might have been... the learning we just arrived at is that there is no edge native application principle there, because you need to keep in mind...
A
My soapbox attitude is that, just like Kubernetes itself, it advertises itself as this abstraction layer that allows you, the app developer, to have it easy and ignore all of these differences and difficulties in the layers below, and that holds true as long as things are working a hundred percent. But as soon as something malfunctions, all of a sudden... you know, maybe every app developer doesn't need this skill set, but somebody in your organization has to at least be familiar with...
A
...what's going on under the covers. Sort of the idea that you, as the driver, might be able to take an abstraction of steering wheel, gas pedal, and brake, and that's all you need; but it's inevitable that at some point something is going to go wrong and some mechanic will have to get involved and open the hood, and you can't ignore the fact that there are things under the hood. So I think we've diverged already, starting to talk about what's under the hood here.
D
I think in edge native, the applications are more sensitive in nature. The reason they're on the edge is they need lower latency or higher throughput, or something of that sort. So I think, for that reason, a developer at the edge does need to worry about the lower layers more than in a cloud context.
D
So you do need to worry about GPUs and DPUs and SR-IOV and FPGAs and, you know, other infra considerations, storage performance. So I think, while in a cloud context you might get away with it, I mean, it's a web application, it doesn't matter to that degree, but at the edge, even an app developer... I think I'm sort of agreeing with your point, Stephen, and sort of upping it one: even an average app developer needs to know what's under the hood.
A
If you're willing to spend a bunch more money to paper over a problem, you know, maybe you didn't do it efficiently, but if you did it half as efficiently as you could, you could still live with it, if you have horizontal scalability and are willing to throw money at it. At edge, things just break, because you can't get away with those inefficiencies. Same thing with covering up availability issues: if you've got horizontal scalability and don't really care where something runs, you can cover up design issues, if you will. At edge, if your app is tied to I/O that's only at one location, you have no options.
A
So I think, for now, the resolution: we talked about combining management with orchestration, but I think we're leaning more towards there being a justification for keeping them as two separate rows, so maybe just leave it at that for now. Yep, they're certainly related, and there are common aspects to both of these rows, but I think it maybe is justified to keep them as separate rows.
D
I think the point there is that each application has its own management tooling.
D
So the question is: is that the best way? If you have hundreds of applications, you're using hundreds of vendor-provided tools, some managing one instance at a time, some managing a fleet of instances. Or do we want to go to a vendor-neutral way to manage applications and infrastructure, using techniques like Kubernetes and Nephio? I mean, we wouldn't mention those names, but... I think I had originally introduced that thread, and that was my intent: what are we recommending? Are we saying vendor-provided tooling, or vendor-neutral methods?
A
Well, I think another issue here at edge that's different from cloud is that some of these aspects of the infrastructure become vendor-specific, just because we have hardware involved. There are some initial attempts at so-called open hardware, things like the Raspberry Pi, where theoretically anybody can make these and there are published reference architectures, but that hasn't become as widespread as it has in software. So some of these are going to cross over to being vendor-specific.
B
I think this is a good point, though. I think basically we're saying there's a lot going on: for example, device onboarding; you don't have as much horizontal scaling; you're more tied to certain infrastructure; and maybe we're putting a little bit of orchestration in here too. We're listing out the things that are of concern, but ideally, when you're making an edge native application, you find some vendor-neutral paths that encapsulate all these issues for you. So maybe have something like saying that it's really important to continue the cloud paradigm of deferring this management to an entity, so that you can focus and abstract away those details.
B
I don't think that's a bad recommendation to make. I don't know if it should fall in the principle, but I think that's interesting.
F
When we talk about twelve-factor cloud native versus edge native, how is it different, in the sense of what the constraints are and therefore what we need to bring, whether vendor-neutral or technology-specific? We have to spell it out for the deployment model to be effective. Otherwise, at the architectural level there is no problem; we can just talk about the whole thing, what exists. But when it comes to deployment models, you will be forced to look at what it is. So my take would be...
B
I think one thing that I got from that is, kind of along with centrally observable, with observability being a precursor to management: we have here that good management is a precursor to deployment on the edge, because we are so tightly tied, because we do need to manage the infrastructure.
D
So one thing that's still on my mind, based on what Prakash mentioned: if we broaden the scope of the paper, can we still keep the persona as the developer, so it's developer-centric? It may not be just edge native application principles, but edge native application principles from a developer's point of view. Because I'm a little concerned that if we go completely broad, then there are operational considerations, there are... I mean, there are just so many personas that get involved, so many that it could become a very long paper.
B
I think... sorry, go ahead.
F
Yeah, I've heard that we have the developer's perspective as well as DevOps; those are the first two perspectives we will have to take if we have to be able to recommend something being deployed with best practices. As for the white paper size, we can define it beforehand, saying, okay, we are going to do only 20 pages, or 40 pages.
F
Yeah, maybe six or eight. So what we do is, we can have all the discussions and all, but narrow it down with a summary of each, so that we can point somewhere if somebody is interested in looking at more details. That way we can... yeah, to boil it down to six or eight is very hard, I would say. So then the question is: do we include the DevOps perspective at all? Which means we have to be able to express both, in brief. Yeah.
A
I think maybe starting with a broad overview that covers the whole landscape, and then, even if, to maintain five or six pages, really all we have are these tables, it's still useful to put in the overview, and then we can have additional white papers down the road that cover the developer perspective, the ops perspective, say. I had a thought that I left as a chat comment, just kind of brainstorming here, on this concept: we've got two rows, management at scale is the one that we've been discussing, and then we have declarative orchestration.
A
But the thought occurred to me that maybe we can factor this, divide it into two different categories. One I would call management of infrastructure and platform, and by infrastructure I'm thinking more like physical devices, hardware, etc., and platform would be baseline fixtures that support applications but aren't applications themselves: in other words, laying down an operating system, laying down a container runtime.
A
Maybe, if you choose to run Kubernetes, it's laying down the Kubernetes worker node components, and management of that at scale is kind of the precursor that gets you in the position to manage apps. The reason the second title would be "app management at scale" is, really, to me, orchestration is that traditional cloud native idea of: I have a centralized pool of servers where I could run things, I have a centralized queue of workloads, and I just randomly assign them to maximize efficiency.
A
That is the traditional idea of orchestration, I believe, and that isn't really what goes on at edge; there, they're more like one-to-one mappings. And I think, just like we discussed when we touched on observability...
A
...there are aspects, potentially, where there are N tiers going on, where these control loops might have a control loop that goes all the way up to the top public cloud layer, but other, localized control loops. I think when it comes to apps, it's a little different than cloud native with regard to these things.
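The contrast described above, pooled cloud scheduling versus the near one-to-one placement typical at the edge, can be sketched as follows. This is a toy illustration only; the node names and the site-affinity field are hypothetical, not any real scheduler's API.

```python
def place(workloads, nodes):
    """Toy placement: workloads with a site affinity are pinned to a
    node at that site (the edge case, effectively one-to-one); the
    rest go to the least-loaded node in the pool (the cloud case)."""
    load = {n["name"]: 0 for n in nodes}
    assignment = {}
    for w in workloads:
        if w.get("site"):  # edge workload tied to local I/O at one site
            candidates = [n for n in nodes if n["site"] == w["site"]]
        else:              # cloud workload: any node in the central pool
            candidates = nodes
        chosen = min(candidates, key=lambda n: load[n["name"]])
        assignment[w["name"]] = chosen["name"]
        load[chosen["name"]] += 1
    return assignment
```

The cloud branch is free to balance across the whole pool, which is what makes horizontal scaling cheap there; the edge branch has exactly the constrained candidate set that the discussion above is pointing at.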
B
I personally like that division. I think it's a reasonably large enough change for us to table it, since we're at time, and maybe carry it on. I'll put them under there, under each of them; I think that's where you were looking at them, Steve. And then maybe we can pick up next time discussing...
B
...do we put all the infrastructure and platform concerns in one principle, which would help us because we can keep that small and focus on applications elsewhere, on application developers, and then put this specifically, like you're saying, orchestration meaning application management in the cloud native context, in a separate section? But I think, since we're over, it might be good to table that to next time.