

B

Hey, hello.

C

How's life? Yeah, not too bad, thanks. And you?

B

Moving on: I was just at an LF AI event. So there is this Open Source Summit going on, right, the Linux Foundation one. I was presenting at the LF AI mini summit, and there was quite a bit of interest around the MLOps topic there, right. So I wonder sometimes, you know, whether we should make this maybe cross-foundation. I don't know, I mean, sometimes, you know, it's like: whether this makes sense in the CD Foundation, or whether it should be in something like the LF AI Foundation, right, because...

C

Well, I think, you know, there's a strong reason to have this conversation going on with...

C

...the CI/CD side of things. It's clear from the conversations that I've been having that we actually need to get that group understanding the needs of MLOps and starting to think about how they can extend the capabilities of solutions in those spaces. So there's definitely a benefit to having that conversation running.

B

Yeah, yeah. I mean, I think my thing was more like, you know, if you look at the general attendance, I'm asking whether it makes sense in the CD Foundation or should it be... I mean, this particular SIG, technically, to me maybe belongs more... and maybe it's a cross-foundation thing which we can pursue.

C

A better idea is to just reach out to a broader audience.

B

You came in.

A

Hi, how are you? Oh, you're...

B

Doing good. So, what's the interest which brings you here, Kim? I mean, do tell if you have a certain topic, right. I don't have anything specific, you know, at this point.

A

I also don't have anything specific. I'm working in a machine learning team at Proofpoint, which is an enterprise cybersecurity company, and we're working on a new environment for doing machine learning inference. So we've been looking around at different resources, and I saw this meeting as an available place to come and hear about MLOps. Sorry.

B

Are you creating your own machine learning inference platform, or are you taking something from open source? How is that working?

A

We're on AWS, but yes, we're just kind of in the early design stages, figuring out what exactly we want to do. Most of our products are kind of internal-facing, so our goal is mostly just to expose a simple REST API to the rest of the company for, you know, basically doing product enhancements for features that our company offers in other places. So yeah, that's kind of what we're working on.

B

Is there an inference service, I mean, is that a platform? Or are you saying it already exists and you are essentially focused on building a layer on top of it to expose, like...

A

Yeah, we are looking at using SageMaker for serving models and for doing a couple of other things, but we're just trying to figure out... especially since we have kind of unusual security requirements in terms of customer-confidential data and that kind of thing, we are just kind of evaluating different options for things we might do.
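Purely as an illustration of the SageMaker serving path mentioned here (nothing in the meeting specifies this code; the endpoint name and feature payload are hypothetical), a minimal sketch of invoking a model deployed behind a SageMaker endpoint. Building the request arguments separately keeps the example runnable without AWS access:

```python
import json

def build_invoke_args(endpoint_name, features):
    """Build the keyword arguments for sagemaker-runtime's
    invoke_endpoint call; the actual call needs AWS credentials."""
    return {
        "EndpointName": endpoint_name,
        "ContentType": "application/json",
        "Body": json.dumps({"instances": [features]}),
    }

# Hypothetical internal endpoint name and feature vector:
args = build_invoke_args("internal-scoring-endpoint", [0.1, 0.7, 0.2])

# With AWS credentials configured, the actual call would be:
#   import boto3
#   runtime = boto3.client("sagemaker-runtime")
#   resp = runtime.invoke_endpoint(**args)
#   predictions = json.loads(resp["Body"].read())
```

Wrapping this behind the company's own simple REST API, as described above, then only requires exposing `build_invoke_args` plus the call behind an internal HTTP route.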

B

Okay, I mean, so I co-lead one of the working groups in the Kubeflow community, called KFServing. I don't know whether you have heard of the project as such, and...

A

I'm familiar with Kubeflow; I haven't looked into KFServing very much, yeah.

B

So, that's, you know, one of our users: he was essentially in a similar situation to yours, but migrated off AWS and moved the entire deployment in-house, leveraging KFServing, right. And part of the reason was the same; I think cost was also a factor, because they host this AI Dungeon game, right, which is hugely popular, and at that level they need, you know, a lot of GPU capability, right, so the cost was becoming a barrier. Plus, in general, they needed...

B

...you know, more, I would say, serverless and auto-scaling characteristics, right. But yeah.

A

Yeah, sure, that sounds super helpful, I mean.

B

Oh, I can just give you... so this is the stack, right, for KFServing. If you are interested in it, you know, you can reach out, right. It's built on top of, you know, Kubernetes, as is everything in Kubeflow, right, but we leverage Knative and Istio under the covers, right, for a variety of reasons. I think the two key things which are important, right, which initially were the drivers: Knative...

B

...you know, bringing these serverless capabilities of scale-to-zero and giving you, you know, auto-scaling based on request-based queuing; then Istio, which gives you, you know, much more control over doing canary rollouts, A/B testing, pinned rollouts, and so on. So those were the reasons, right, for basing it on that stack. And then, you know, having a common stack for TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX Runtime, and NVIDIA's...

B

...Triton inference server, and providing, you know, pluggable interfaces for not only prediction (I mean, prediction is one of the key things, right, but a lot of the time you are doing pre-processing and post-processing, you know, after getting the user input and before giving the user output), explainability, and then, you know, advanced capabilities which are being integrated, like drift detection and anomaly detection. So yeah, I think it's pretty promising, you know.

B

So if you are interested, you know, please reach out to us, and we can walk you through a lot more details around, you know, what it does. But yeah.
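As an aside (not from the meeting itself): the stack described above is typically driven by a single small Kubernetes resource. A minimal sketch of a KFServing InferenceService, built as a Python dict so it stays runnable here; the resource name and storage URI are illustrative, and the v1alpha2 schema shown is the one current around the time of this meeting (mid-2020):

```python
import json

# Minimal KFServing InferenceService manifest, expressed as a dict.
# Names and the storage URI are illustrative placeholders.
inference_service = {
    "apiVersion": "serving.kubeflow.org/v1alpha2",
    "kind": "InferenceService",
    "metadata": {"name": "flowers-sample"},
    "spec": {
        "default": {
            "predictor": {
                # One framework block (tensorflow, pytorch, sklearn,
                # xgboost, onnx, triton, ...) selects the model server.
                "tensorflow": {
                    "storageUri": "gs://kfserving-samples/models/tensorflow/flowers"
                }
            }
        }
        # A sibling "canary" section with a traffic percentage is what
        # enables the Istio-based canary rollouts mentioned above.
    },
}

# Serialize it; JSON is also accepted by `kubectl apply -f`.
manifest_text = json.dumps(inference_service, indent=2)
```

Swapping the `tensorflow` block for one of the other framework blocks selects a different model server, which is the "common stack" idea described in the conversation above.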

A

Thank you appreciate that yeah.

B

Okay.

B

Anything else on your docket, Terry?

C

No, we're making steady progress on the roadmap.

C

Ideally, we could do with some extra input. I was hoping that we were going to get some contributions on the security stuff from last week, but I haven't seen anything come in yet, so I'll need to chase that up. And it would also be good if we could get a few people involved, too,

C

working on larger projects with very large data sets, because I'd like to get some input on the data side from people who are really experiencing the kind of challenges you get when you're trying to manage very large data sets in these sorts of situations.


B

Yeah, I think the call froze; did you hear what I was saying already?

C

Sorry, everything locked up then for a while, yeah.

B

Okay, sorry, I was saying that, you know, that's where I think the sort of domain expertise you're looking for, all the users for MLOps... probably the LF AI Foundation is, you know, maybe the right one to actually try and reach out to, right. So I think it will be wise to figure out, you know, if there can be some cross-pollination between those two groups, because I believe in the CD Foundation, you know, the users are here for a particular reason.

B

You know, it's mostly CI/CD pipelines, and that's where the expertise is. But if we need to reach out to a wider audience, which is essentially looking at ML, AI, and data, right, I don't think that audience is in the CD Foundation, yeah, for the users.

C

I mean, at the moment most of the interest is in the "a" version of the SIG; we're seeing much, much larger attendances in that group.

C

Maybe.

B

What we can do is, you know, just make it that one going forward, right, and we can switch from this time. Let's just have that one and that's all; I mean, if people are really keen on watching, they can attend even if the time is, you know, a bit inconvenient. To me that probably makes more sense, right, rather than having two calls.

C

I think this is one way, you know. Clearly you've got a group working on things in this space, so we don't want to disrupt that, but it would be nice if we could grow this group in the US. And then there seems to be more activity going on in the opposite timezone, but again, a lot of it is just...

B

Promotion.

C

And once we've got a few more of these conferences going, we can, you know, hopefully get some more people involved.

B

Cool, makes sense, right. And I think the time you have might work, because most of the people I work with are in the Pacific timezone. So, maybe... what time does your meeting fall in the Pacific timezone? It's pretty early in the day for Pacific, yeah.

C

And I can go and look at it again.

C

Oh.

B

Yeah.

C

So.

B

That becomes, like, I mean, typically, you know, a lot of the evening meetings we have in the Pacific timezone, like whenever we need to interact with, for example, you know, the China side of the house... it's mostly, we have a lot of 5:00 or 5:30 Pacific calls, right, which get very well attended from the Chinese or Asia-Pacific side. So I think if we can find a sweet spot there, right, which is a bit inconvenient for both sides, right, but it can be one call.

B

It will also eliminate confusion, right. And because, to me, you know, there was, I mean, at least from my perspective, Terry, there was this whole specific project, right, which I was driving, and to the point, what I wanted to get out of it: my first phase is complete, the project is ready. And then, you know, if there are not enough technical discussions currently, right, or people, you know, interested in technical discussions here, right, I would rather join...

B

...you know, the other group which you have, you know, and proceed from there. Yeah.

C

We're just starting to really promote this group, so the first of the roadmap announcements went out about a week ago now. The expectation is that we'll be promoting this on the conference circuit and trying to build up more of a collaborative working environment. So I think we're still in the early days on this at the moment.

B

Yeah, yeah, makes sense, makes sense, right. We can, and obviously there will be a lot of things, and I tend to use this call, right, I mean, whenever I need to get some discussion going with the counterparts on the Tekton side or the Google side within, you know, the Kubeflow or Tekton community, right. So that's how this serves me, right. I...

B

...make sure that if I need those folks together, right, I ping them beforehand: hey, let's sync up, this is a time which is blocked. So yeah, I think that's precisely, I mean, what I would be looking for, right. And if there is, you know, more interest coming from a Kubeflow perspective, I can definitely, you know, keep on doing more deep dives there.

C

Yeah, I think we're at a stage now where we've done the sort of boring but relatively straightforward work on the roadmap, and now we're getting into the more challenging issues where, actually, we might benefit from setting a subject area in advance, circulating an agenda, and just doing a call for contributions on that area. Yeah.

B

So I'm pretty bad at that, Terry, so I'm hoping, you know... I mean, you can probably take the lead and drive it. As I said, I've got too

C

many balls up

B

in the air, like, you know, this whole Kubeflow Pipelines on Tekton project, plus, you know, the whole Trusted AI umbrella, and then, you know, I'm leading the IBM and Red Hat data and AI open-source alignment, so, so many balls in the air. These would be very wide and broad general-purpose topics, right. So I think that's where you can, you know, possibly set the agendas, etc. All right, if I do have certain requests, you know, like, okay, I need maybe a 15-minute block for discussion of, let's say, you know, the TensorFlow Extended and KFP-on-Tekton integration,

B

then I can, you know, just slide it into that agenda, right. Yeah.

C

All right, well, let's not waste any more of anyone's time today. Oh, just as I say that, we've just had a new joiner. Welcome!

B

Hi.

B

David, can you hear us?

C

Okay, unless you have anything else, I think we'll probably wrap it up at this point, then.

A

Thank you, I appreciate it. Thanks.

From YouTube: CDF SIG MLOps Meeting 2020-07-02b
