From YouTube: Advancing Jupyter, Kubernetes, and Globus at ALCF
Description
July 11, 2019 Jupyter Community Workshop talk by Rick Wagner (Argonne National Laboratory, University of Chicago, Globus)
I'm Rick Wagner. I work for Globus, which means I work for a lot of different places, and I've met many of you in different roles at various times. What I'll cover today is some work we're doing for the ALCF to enable data services for its various projects and user communities, not just at the ALCF but with a bit of an Argonne focus, because we have the Advanced Photon Source, which is a large data generator, and there are other groups that leverage the system.
So it's quite a diverse mix. What I'm focusing on is something that came out of, I think, the leadership team at the ALCF and many other parts of Argonne (maybe Rick Stevens, Ian Foster, or Mike): looking at what the ALCF needs to provide to accommodate a lot of these large data sets. You have the concept of the large-scale systems, the leadership-class systems; maybe there's a new one coming to Argonne.
Maybe there's a new building that's been extended onto our Building 240 that will be dropped in place. And there are large-scale simulations. I work with E3SM, the climate modeling group that is part of the DOE; we're producing data at all the leadership centers, and it's massive. That's just one example out of many. Then the data gets subset and sent to systems like Cooley and others for analysis, and now we've got new capabilities like machine learning
coming out, so we want to be able to look at that data in new ways. You've also got the photon sources and others that are providing data that maybe goes in the other direction: the data lands, then it goes to Cooley and maybe elsewhere. How do you integrate all of this and then provide the scientists with something besides the CLI? Coming from an HPC and supercomputing background,
I totally understand that when we talk about the new systems, you are going to have those cutting-edge people who are on the command line dealing with the systems directly. But for day-to-day science, even at the highest level, you want to provide familiar, easy-to-use tools across the systems, and that's very challenging at those scales. So as we go sort of from right to left, maybe it's a web interface. Is it Jupyter? Is it Galaxy? Is it visualization?
We see things like that. Anyway, we're going to talk about what we've done, in this case building on one of the services that has been at the ALCF for a while, which is Petrel; I'll talk about that in a second.
So this data is all over the place, and the science is collaborative, right? The leadership computing facilities especially don't host the scientists per se; by default, the scientists live somewhere else. They might occasionally be Argonne staff, in the ALCF's case, but usually they're at some other university.
They do the operations on the data and then share it out as part of a publication, especially given the restrictions on where it has to stay: usually it's currently locked up behind an SSH wall with two-factor and things like that. What we believe, and what we've been driving towards, is Jupyter living at the center of this as an engine and facilitation point. The way we see it, this fundamentally has a lot to do with auth, identity and access management, and multiple services, all these APIs that we're calling, things like that.
This is my Globus hat, so I'm going to run through it. How many of you have not heard of Globus? All right. Everyone else, just wait a sec; I'll be as fast as I can.
So what is Globus? For those of you who have heard of Globus, I want to make sure you hear about it in this context in particular: what Globus Auth provides for securing REST APIs and for identity and access management, not just "I go to my browser" or "I submit a transfer" and things like that.
So fundamentally, Globus tries to provide access across tiers. We provide endpoints that you install on site, and then we move data around. We try to talk to a variety of storage systems (Box support is planned, by the way, and it's in the works) so that when it comes to data movement you're not dealing with each interface; we do. Part of Globus does live in the cloud: it's the part that manages the transfer, not the part that sees the data.
The data servers get deployed on your laptop or on a server, and the data moves between them, which means the transfers are faster and you can do a Science DMZ model. When it comes to security: yes, we see some things like file names; no, we don't see the data. We can also leverage that split to put access controls in the cloud, and users have the ability to create what we call a shared endpoint.
A shared endpoint just shares data. (All right, that earlier point just drove me crazy. I understand that it works in the current systems, but that's only because of how file systems are built and what's expected. You should not be enabling additional accounts on the system unnecessarily, and I consider that fundamentally unnecessary, which might also be why I switched jobs.)
All right, one of the things you can do with Globus, and this is really what this talk is about, is develop apps, services, and workflows. Globus Auth is one of the things we had to develop to support all of this in terms of identity and access management. You can write your own REST APIs and secure them with Globus Auth. You can then build things that are clients, whether they're on the command line
(we have a nice toolkit for that now) or a portal (we've had those for a very long time): get tokens, talk to those REST APIs, do stuff. A lot of the simple cases are just talking to the existing Globus REST APIs, but really you want to be able to write APIs yourself that do the things you want to do.
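As a sketch of the pattern the speaker describes, a service secured by Globus Auth receives a bearer access token on each request and validates it before doing anything. The helper below shows only the local header-parsing step; the function name and header dict are illustrative, not part of any Globus SDK.

```python
# Hypothetical sketch: a REST service protected by Globus Auth receives
# an "Authorization: Bearer <token>" header on each request.

def extract_bearer_token(headers):
    """Pull the access token out of an 'Authorization: Bearer <token>' header,
    or return None if the header is missing or not a bearer credential."""
    auth = headers.get("Authorization", "")
    scheme, _, token = auth.partition(" ")
    if scheme.lower() == "bearer" and token:
        return token
    return None

# In a real service, the extracted token would then be sent to Globus Auth's
# token introspection endpoint (with the service's own client credentials)
# to learn the caller's identity and scopes before authorizing the request.
```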
The challenge of building a platform like this, even on your own, is: how do you have unified logins? How do you protect all these REST API communications?
Is it the classic "I'm going to generate a token for you, download the token" model, where now I've got yet another thing that's generating and managing tokens? That's what Galaxy does, and a lot of other services do; it's simple to do, but it means you're managing yet another service that's issuing credentials for authentication. How do we instead arrange it so that you have a single place managing identities, for example the universities, without it being tied to just the framework you're using? Otherwise it's like: oh great, now I get tokens from Galaxy.
A
How
do
I
get
em
over
here?
What,
if
I,
want
to
write
a
tool
that
talks
to
both
now
I've
got
to
talk
to
different
things,
so,
let's
make
it
web
friendly
and
easier
for
users
and
developers
which
in
my
mind,
means
let's
not
make
another.
Idiosyncratic
research
itd
only
solution,
we're
really
good
at
that.
Fortunately, I think the tide has shifted, and we are now moving more towards what is out there and available in the commercial space. So: Globus Auth, not so much transfer, but auth.
Instead of using Shibboleth, you can easily enable an OIDC plugin in JupyterHub and talk to Globus Auth, which means you can immediately enable logins with your institutional provider, more than likely. And once you start using Globus Auth, you can start calling out to these different services. Part of the story is about how we do that with JupyterHub.
So here's our starting point and building block, which is Petrel. Petrel is a programmatically accessible storage system. It's maintained by the ALCF within its evaluation section, the JLSE, and it's intended to let projects work with their collaborators around data, to solve this problem of how we avoid adding more POSIX accounts. It's a large-scale system, now 3.2 petabytes. It was built on GPFS;
it's now on Ceph. It's very fast, with multiple 40-gig connections to several nodes. What we do is take one of these shared endpoints, one of these abstractions, and put its management under the control of a PI, and we say: all right, this is your storage. You are responsible for who has access to it. You are responsible for who you delegate the granting of access to.
You manage the data; you're in control of it. From there, the PIs decide who is allowed to manage a given subfolder and who can read and write the data, and they can open it up to their collaborators. Because of where the data is hosted and the policies around it, we can have an area that is open and fast and accessible.
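A sketch of what that PI-managed sharing looks like in practice: with the Globus Transfer API, granting a collaborator access to a folder on a shared endpoint means submitting an access-rule document. The builder below produces that document's shape; the UUIDs and path are placeholders.

```python
# Sketch of PI-managed sharing on a shared endpoint: build the access-rule
# document the Globus Transfer API expects. Identity ID, endpoint, and path
# below are illustrative placeholders.

def make_access_rule(identity_id, path, permissions="r"):
    """Build a Transfer API access-rule document granting one Globus
    identity read ('r') or read/write ('rw') access to a folder."""
    return {
        "DATA_TYPE": "access",
        "principal_type": "identity",
        "principal": identity_id,
        "path": path,
        "permissions": permissions,
    }

rule = make_access_rule("c2ea0f66-0000-0000-0000-000000000000",
                        "/projects/climate/", "rw")
# With an authenticated globus_sdk.TransferClient `tc`, the PI (or a
# delegate) would then apply it with:
#   tc.add_endpoint_acl_rule(shared_endpoint_id, rule)
```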
So we've got a bunch of these projects, and we've built data portals for them, and we're trying to see how far we can get in providing a semi-homogeneous interface to them. By the way, one of the nice things about Petrel is that it leverages the fact that we can do HTTPS access to data. You don't just have to go through the Globus endpoint; the Globus Connect Server endpoints on there will also stream data over HTTPS.
That means that if you don't have an endpoint on a system, you can at least do puts and gets back and forth. That simplifies a lot of the smaller use cases, or the case where somebody is on a random system, on a login node, and they don't know the endpoint. All they need to do is pull down a small file over HTTPS and authenticate. Now we can do that.
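A sketch of that HTTPS get: one file pulled straight off an endpoint's HTTPS interface with a bearer token, no local Globus endpoint needed. The hostname and path are hypothetical, and a real token would come from a Globus Auth login.

```python
# Sketch: fetch one file over HTTPS from a Globus-served endpoint using a
# bearer access token. Host, path, and token are placeholders.
import urllib.request

def build_https_get(host, path, access_token):
    """Construct an authenticated GET request for a file served over HTTPS."""
    url = f"https://{host}{path}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"}
    )

req = build_https_get("petrel.example.org", "/projects/demo/results.csv",
                      "ACCESS_TOKEN")
# data = urllib.request.urlopen(req).read()  # would perform the fetch
```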
We grew Petrel recently, over the last year, and we started extending what we've got available for JupyterHub. The ALCF operates a production JupyterHub instance, which is served on a single virtual machine, and it suffers from a lot of the typical problems of the simple way to provide Jupyter to a group of users: you spin up a big VM.
You spin up a large node, whatever, and, if anyone remembers the seagulls from Finding Nemo, you give any set of users a shared resource and it becomes "mine!" It doesn't matter how many of them there are. "How long will my job take to spin up? It must be instantaneous, because that cluster is mine." This has been going on for decades, and it will continue to go on, and once that happens the resource gets oversubscribed.
On the other side, we're trying something different on PetrelKube: we're spinning up a scalable Zero to JupyterHub deployment, we've got it running now, and it includes Globus Auth. The other instance, the nice single shared one running on a node, does have the ability to submit batch jobs, so you can use things like Parsl or Dask to spin up more resources, but primarily the notebook kernel itself runs on a shared resource. We're trying something different over on Petrel.
We've been flowing data from the APS, like the neurocartography work from Bob's group, directly off of the beamline to Petrel, with a workflow along the way that sends it back to do machine learning on it, for segmentation, feature finding, and things like that. So Petrel itself has given us a point, a lynchpin or a core, that was very useful to start building around; taking a data-centric approach, I can say, has been fundamentally valuable. All right, so Kubernetes: it's still very new for us, but we're making steady headway. It's new for a lot of places, and I think there are questions about how Kubernetes fits into an environment where people are mostly used to HPC systems.
They're used to "I log in under my POSIX account, I run ls, and I do it on this shared resource that's all mine." So we have repurposed some nodes within the JLSE.
And now with this we have two nice paths. One: we're going to set up a test area where we'll clone a lot of the repositories of sample Jupyter notebooks that we have, from Parsl, DLHub, the Materials Data Facility, Globus itself, and others we can find, and we'll start letting people evaluate what it means to have these tokens in the environment. The other thing we can do is take our work and spin up other versions. So, for example, I know at the ALCF
there are issues of data access and management in the Kubernetes context. Yes, we are working towards being able to mount some of Petrel on it, but then we've got this POSIX mount from a service account: how do we mount it into the pods, and who is actually executing there? Do you open it up to the same groups that have access to Petrel in the project areas, and mount it read-only for them, so that they can say, all right:
yes, Globus is good for moving the data back and forth, but sometimes making the data more accessible via a mount is more appropriate? Then there's identity management. In the case of JupyterHub, we restricted the IdP down to the ALCF, and we're mapping and logging users, but they're pretty much tied to a service account. At some point we have to ask: is this appropriate for all use cases, or only for training and certain science-project cases?
This is the nice thing about having a testbed to evaluate that, where we can put real science projects on there with some bumpers in place in terms of the types of projects they are. All right, so let's talk about JupyterHub itself in the context of tokens and what we'd like to see more of.
As I said, we've got the production instance, tied to the ALCF IdP, just the straight login most of you are familiar with, and now we've got this version on Globus Auth.
It's a way to give you credentials to call REST APIs, and with that you've now got this ecosystem of the rest of the world, services that you can call out to, at least services secured with Globus Auth. This model could easily be replicated if you're logging in with some other OAuth or OIDC system; you'd have different scopes.
What we use them for, of course, is the ability to grab data from Petrel, or to watch something like Parsl, which does automatic parallelization: it creates a graph of your code and quickly runs it, and by the way, in some cases it's very much faster than Dask, so I'll throw that out there. It's also developed by Kyle Chard's team, connected to Globus, and I think they do good work. (Dask is awesome too, OK.) All right, so as I said, we contributed the Globus Auth plugin for OAuthenticator.
OIDC allows you to specify scopes, so you can say: no, you can't talk to Transfer and get data; yes, you can talk to Transfer; yes, you can talk to Globus Search; no, you can't; and the corresponding tokens flow in. One of the things we contributed as part of the ALCF work was the ability to restrict the IdP: not just requiring that the user has an identity with that IdP, but that the user has logged in with that identity in their current Globus session.
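As a sketch of that IdP restriction, the Globus support in `oauthenticator` exposes an `identity_provider` setting; the domain below stands in for the facility's actual IdP and is a placeholder.

```python
# jupyterhub_config.py (sketch): require that users log in with an identity
# from one specific identity provider, not just any linked Globus identity.
# The domain is a placeholder for the facility's IdP.
from oauthenticator.globus import GlobusOAuthenticator

c.JupyterHub.authenticator_class = GlobusOAuthenticator
c.GlobusOAuthenticator.identity_provider = "alcf.anl.gov"
```

The effect is what the talk describes: a user with many linked identities must prove they hold, and are currently logged in with, an identity from that provider before the hub lets them in.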
This was critical for the leadership computing cases. Globus tries to be "log in with any of your identities," and it will have mapped them so that people don't have to know who you are everywhere. In this case it's: no, you really have to be an ALCF user, and you've got to prove it to us to log in here, because you're going to execute code on our systems. So we've added that.
You do not need a subscription for that; Globus Auth is one of our free services. Most of Globus is free; most Globus use is free. That's why we're appreciative of people like Mike for paying us to do work, because we don't make so much money that we can just give away time to the ALCF. But no, Globus Auth and the sessions API are totally open, and you can just use them. As for that plugin, I don't know if that code has landed back upstream, but we'd be glad to contribute it.
So what happens? You log in, and tokens flow into the database. Nick, I think, helped work on the secure attributes in the user database, which weren't totally there, and then the tokens get put into the notebook server, and you can call out to things with them. (For those of you not familiar with OAuth and OIDC, talk to me over coffee.)
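As a sketch of the last step, tokens injected into the notebook server can be indexed by which API they unlock. This assumes the hub passed them to the single-user server as a JSON list of OAuth token records; that delivery mechanism, and the sample values, are assumptions for illustration.

```python
# Sketch: inside the notebook, index OAuth token records (as delivered by
# the hub, e.g. via an environment variable) by the API each one unlocks.
import json

def tokens_by_resource_server(raw):
    """Map each token record's resource_server to its access_token."""
    records = json.loads(raw)
    return {rec["resource_server"]: rec["access_token"] for rec in records}

# Hypothetical sample of what the hub might have injected:
sample = json.dumps([
    {"resource_server": "transfer.api.globus.org", "access_token": "TKN-1"},
    {"resource_server": "auth.globus.org", "access_token": "TKN-2"},
])
toks = tokens_by_resource_server(sample)
# toks["transfer.api.globus.org"] would then serve as the bearer token
# for Transfer API calls from the notebook.
```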
So the idea is: you pull your data set in, you analyze it, you spin up a little plot, and then you put it back somewhere and share it out to your collaborators. My joke is: it's 2:00 p.m. and your adviser wants a plot by 4:00 p.m. for group meeting. How do you get it somewhere they can see it quickly?
If you want to try it, go to jupyter.demo.globus.org, the instance Globus uses for its training, and there's a JupyterHub example that does exactly that. It downloads the data set, plots a plot, puts it up on a public shared endpoint that we have so you can access it, and then it gives you the URL.
All right, I've got four minutes to tell you what we've learned. Number one: since we're sitting at NERSC, since we all have a badge and had to get through the gate: when you do something like this, it is new, it is novel. Yeah, it's cool, we can do it on our laptops, but at the DOE facilities especially, cybersecurity is something where you can't just beg forgiveness. You can't just tell them "I have to do this." You really need to make them your partner.
They have legitimate concerns and a job to do, and they are not your enemy. I think most of us here in the room are mature enough to appreciate that. So make them your partner: tell them what your goals are, find out what their concerns are for their facility and your facility and its policies, work with them, and then you can find solutions. In our case, it was: all right,
if we do the IdP restriction, we know who's logging in. Yes, they're going into a shared account, but we're isolating them in the container and so on, and that was adequate for the JLSE environment. We'll evaluate that, we'll look at the code some more, and we'll decide whether we can start to scale this up to other data.
A
Also,
I
am
a
big
fan
of
open
source
and
I
really
enjoyed
working
with
openness
FS,
but
I
will
say
that
gpfs
and
luster
make
our
life
far.
It
is
a
lot
of
it
does
come
down
to
the
data
and
it
does
come
down
to
the
fact
that
you
know
you
want
to
hit
that
with
POSIX
access
to
go
fast
and
that's
still
why
we
provision
POSIX
accounts
on
the
systems.
It's hard to manage access control to data without that. But I do think that, at least here within the leadership computing facilities, we can look more at what the NSF space has done: the science gateways model, the service accounts, the community accounts. One of the things about Globus, and why we have this sharing model, is that you just grant people access to a separate system and they pull the data in, a kind of "they will come get it" model.
We've also recently Globus Auth-enabled SSH. This is SSH where you are on the command line, and when you issue your login request to a remote system, you are sending a token, an OAuth/OIDC token, to a PAM module that's part of the same sshd. There's no custom sshd; it's a PAM module that talks to the SSH client, then talks to Globus Auth and says: all right, this is rwagner@uchicago.edu; are they allowed onto Midway at the UChicago RCC?
A
It
validates
me
and
it
lets
me
in
and
now
then
I
see
what
that
is.
You
can
build
a
portal
that
allows
you
to
log
in
to
multiple
systems.
You
get
the
right
scopes
for
you,
Chicago
for
Comet,
at
SDSC,
for
bridges
at
PSC,
potentially
other
systems
and
from
jupiter
hubs
case.
That
means
that
well,
if
we're
already
getting
tokens
through
Globus
off,
can
we
start
to
on
on
a
variety
of
resources
that
have
globus
off
the
navel?
That's pretty awesome. And we've been working on a JupyterLab extension for a while. It was developed by a student about a year ago, and I've got an intern who's revisiting and updating it, because TypeScript and NPM have evolved and so on; she's gotten it working, and now we're trying to make it more user-friendly. So it is out there. And my last thought is: yes, I know BinderHub does a lot of this, but within this space of security concerns, I do think it matters that
you still have to deal with environment encapsulation and the "I like this version, I like that version, I don't like your widget, I like my widget" issues. So we give the ability to pull users into different tools, and one of the things we did develop early on for the ALCF, which is kind of lingering and due to come back, is the Singularity spawner.
[Audience comment.]

Thank you, that's right: we wrote this a while ago. The platform mentioned got presented at GlobusWorld a while ago, and at that point they weren't using Globus Auth in it, so thank you for deploying it. The fact that they're using Globus in their environment is one of the first cases I've heard of in the wild of Globus Auth being deployed in a JupyterHub and leveraging other Globus services within it.
[Audience question.]

Let me give you the case of the Kubernetes cluster. In that case, when I log in with my ALCF identity and launch a notebook, the underlying processes it's running on are not associated with me; it's an untrusted service account. But my login and the execution of the notebook server in that little container are logged in the same way as if I were logging in through SSH to another system.
So there's the impersonation problem and things like that. And in this case, all of the users and groups live within Globus, and all the trust has to happen within Globus groups and other mechanisms. So there's also the question: do you trust external services? And we should.