
A

Hello.

B

Hello, how's it going?

A

Yeah, good thanks.

B

I can't complain — all good. I got, just this week, today actually, access to OpenAI's GPT-3. They've been handing out accounts, so I've been playing with it all afternoon, and wow, it certainly is a thing.

B

I'm still trying to get my head around what it means — what it can do and what it can't do — but the few-shot idea is quite interesting. It's trained on the Common Crawl set, which I don't know exactly what that is, but I gather it's a good chunk of the open web. So you give it enough hints about what the topic is, plus a few examples, and then it sort of digs into it.

B

There are some examples of writing SQL, and I thought I'd try one, so I swapped out the SQL and told it I was doing GraphQL, and it produced good, semantic GraphQL. As far as I know, no one had done that before — yet it knew about GraphQL and things like that. And the way it can classify things and generate things... it's a bit scary in some ways.

B

It feels like it could be weaponized — for generating content, say. And how easy would it be to give it a few shots to train it to classify something, for example a résumé, and then have it pick things out of that based on some context? So yeah, it's interesting. Do you have access to it, or have you applied for access?

A

I haven't played with it yet, no.

B

No — so the current version of it is only available as SaaS. It's API-only: a very simple, deceptively simple API that's just text in and text out. But you can even have it work with tabular data.

B

If you tell it that it's tabular data and put little table markers in. And it can translate between, say, Python and JavaScript — it seems pretty good at the top programming languages, because the information is often crawled off GitHub. And I've heard — I haven't tried it yet; I started looking at log output and didn't have much success there — but I've heard that when people hit bugs and exceptions... there was a company that was playing around with it.
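
To make the text-in, text-out idea concrete, here is a minimal sketch of a few-shot Python-to-JavaScript prompt against the completions API as it worked circa 2021; the engine name, parameters, and example pairs are illustrative assumptions rather than details from the conversation.

```python
# Hedged sketch: the 2021-era OpenAI completions API was literally text in,
# text out. Engine name, parameters, and the example pairs are assumptions
# for illustration, not details from this conversation.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# Few-shot prompt: two Python -> JavaScript pairs, then the snippet we
# actually want translated; the model continues the pattern.
prompt = """Translate Python to JavaScript.

Python: print("hello")
JavaScript: console.log("hello");

Python: squares = [x * x for x in range(10)]
JavaScript: const squares = [...Array(10).keys()].map(x => x * x);

Python: total = sum(values)
JavaScript:"""

response = openai.Completion.create(
    engine="davinci",          # general-purpose engine of the time
    prompt=prompt,
    max_tokens=64,
    temperature=0.0,           # keep code output as deterministic as possible
    stop=["\n\n", "Python:"],  # stop before it invents the next example
)
print(response.choices[0].text.strip())
```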

B

If it's hitting some bug that's in some public bug report, it can often summarize that for you. That's another thing it does very well: summarizing large chunks of text, depending on how much summary you want. I was trying that with my son — it was fascinating. We were giving it information about the French Revolution, or Franz Ferdinand — he's into history — and then having it summarize it in one line, and it's like: oh, that's pretty good. It plucked out the key points from Wikipedia.
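
Those one-line summaries can be produced with nothing more than a suffix cue on the prompt. This sketch uses the widely documented "Tl;dr" pattern; the input file, engine, and parameters are assumptions.

```python
# Hedged sketch of the zero-shot summarization pattern: append a cue like
# "Tl;dr" after the text. The input file, engine, and parameters are
# illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

with open("french_revolution_excerpt.txt") as f:  # hypothetical input text
    article = f.read()

response = openai.Completion.create(
    engine="davinci",
    prompt=article + "\n\nTl;dr in one line:",
    max_tokens=60,
    temperature=0.3,
)
print(response.choices[0].text.strip())
```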

B

So it's a force to...

A

I wonder what the quality will be like on things like that, because if you think about it, the amount of bad code that's posted on the internet vastly...

B

...outweighs...

A

...the amount of good code, yeah.

B

So.

A

So will it learn to code really badly, in terms of what it's actually picking out?

B

Yeah — certainly for code generation I haven't seen anything like it. For more structured data, I think it wouldn't be too hard to beat it if you know the domain. But I was giving it globs of Python that I was working on, and it would suggest reasonable function names and that sort of thing. Then it wouldn't really go much deeper, and hilariously...

B

At some point I put a bit more code in, and all it would suggest was more indentation, which I thought was pretty funny — like: oh, I see you like spaces and tabs, have some more spaces and tabs. I could tell it wasn't going super deep, but it was invoking functions it had written before, with sensible names, using appropriate parameter conventions — like using df for a data frame.

B

It was some ML code, so it could kind of match and recognize the sort of area you're writing in. For smaller chunks I think it's interesting. Apparently it does a good job with CSS, and maybe some Markdown, HTML, writing SQL statements — those little expression things, where you don't need to understand the higher level; stuff that's a bit more context-free. It will be interesting.

A

It's also a good way of discovering niches where machine learning solutions could actually add value in specific areas. So, for example, if you had a machine learning model that genuinely understood the correct spacing for a YAML document, that would far exceed any human capability, yeah.

B

Yes, that's right — being able to both generate and comprehend; it seems to be pretty good at both. It's a funny example, but not unreasonable. It's like being able to describe, in your own terminology, the context — I want to change these things to this — and then it says...

B

Oh, this is actually how you do it — because it knows. And then it could do the reverse: take a gob of YAML and describe it as a one-line thing — this is what it does, a Kubernetes config or something. That's not an unreasonable thing, and it seems to be able to do things like that: digest it down and summarize it. It was pretty good. I even gave it...

B

I gave it some output from some code being built — it was ultimately Maven — and it gave this summary. I was telling it to do a summary; that's one of the things you can do. It was describing, in plain English: oh, it's using Maven, and it's checking if there's a JDK version eight and installing it. It was doing all these things, and it was like — well, when I look at the logs...

B

...that's what I would have said, partly from what I see in the logs and also from what I know of how Maven works. So it's got that Common Crawl context in its model, and then it's had the few-shot examples that I've given it — just as if someone who's read the material before is prompted: oh, we're talking about Maven and Java. I thought that was fascinating.
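
A hedged sketch of what such a few-shot prompt for log summarization might look like — the example logs, summaries, and file name below are invented for illustration, not taken from the session being described.

```python
# Hedged sketch of a few-shot prompt for build-log summarization: a couple
# of (log, summary) pairs prime the model before the real Maven output.
# The example logs, summaries, and file name are invented for illustration.
EXAMPLES = [
    ("npm ERR! code ERESOLVE\nnpm ERR! unable to resolve dependency tree",
     "The npm install failed because of conflicting package versions."),
    ("FAILED: test_login (tests.AuthTests)\nAssertionError: 401 != 200",
     "A unit test failed: the login endpoint returned 401 instead of 200."),
]

def build_prompt(new_log: str) -> str:
    """Assemble the example pairs plus the new log into one few-shot prompt."""
    parts = [f"Log:\n{log}\nSummary: {summary}\n" for log, summary in EXAMPLES]
    parts.append(f"Log:\n{new_log}\nSummary:")
    return "\n".join(parts)

with open("maven_build.log") as f:  # hypothetical captured build output
    maven_log = f.read()

# The assembled prompt would then be sent as the text-in of the completions
# API, exactly as in the earlier sketch.
prompt = build_prompt(maven_log)
```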

B

I've got no idea of the machinery underneath — what sort of models — but it's obviously a deep model, and they just scaled it out with an incredible number of parameters. It almost reads like a research project: what happens if we just let it have billions and billions of parameters, without fundamentally changing it from a deep network? And that's what they ended up with.

B

So it's fascinating what people are doing with it, but I can see why they're being super careful with how it's used, and why it's an API. It doesn't take much to hit a point where it flags the content and says: we think this might be problematic. You can then whitelist it — tell it it's a false alarm. I think they're taking a lot of the things we've talked about to heart.

B

They're on the bleeding edge of it, so it doesn't surprise me, but whoever's behind this has put a lot of thought into preventing misuse.

A

Yeah, I think it's going to be an interesting situation, because at some point it will reach a density of information that means it can be quite congruent about a broad enough range of topics that it starts to look a lot like a human, in terms of its ability to say something plausible in a very wide range of areas. And then there's that question of...

A

We keep getting stuck on this idea that we're building superintelligences that will be intrinsically better than humans at everything. But in reality, practical intelligence is always full of mistakes.

B

Yeah — it could just be doing that at a faster, more impressive scale, but making the same mistakes.

A

But there comes a point where we stop being able to instantly recognize the mistakes. In humans, once they get past a certain level of experience, they just become believable, even when they're wrong. And there will be a threshold with these models where the model will appear to have...

A

...human-like capabilities, and will start to become intrinsically trustworthy as a result, because it seems to be congruent in what it's saying about everything you throw at it, yeah.

B

Yeah, so you start trusting it, yeah.

A

And it doesn't have to be perfect to do that.

B

No — a human doesn't have to do that either. You have CEOs and leaders, and everyone knows they're flawed; they know they're flawed. But at some point you go: look, this is the best option, let's go ahead with it — we don't have perfect information. So I could totally see people giving over to a machine. It's closer to that than I thought. Obviously nonsense comes out, but it's fascinating, the way...

B

...they've done it. And they've made it easy to integrate into apps and things, so it'll be interesting. The whole process of training that model is interesting from the MLOps point of view — what do you even say about something like GPT-3? What was the rumored cost, four million dollars for their training run? I don't know if that was the total accumulated cost of the AWS bill for the project.

B

Maybe there were some false starts; maybe they tuned hyperparameters and started again, or mistakes were made — or maybe it was just the final training run that succeeded and became the model they published and let you use. If the cost really is four million dollars per run, that's an astounding amount of energy consumed.

A

To do that, yeah. It wouldn't surprise me if that was the cost per training run.

B

Right, it could get that big, yeah.

B

Whereas when I think about the models I'm training, it might be ten minutes per training run, and maybe under a million parameters — so pretty modest. It's very structured data: it's not images, it's not raw text. It's relatively simple, and in that case, to me...

B

...the analogy is very much compiling code, so I can use normal-ish test tools and apply normal practices. The challenge is more that it's probabilistic in how you test things, rather than deterministic. But with something like GPT-3, all of those other concerns we talked about come to the fore.
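
A minimal sketch of what "probabilistic rather than deterministic" testing can look like in practice — asserting that a quality metric on held-out data clears a threshold instead of asserting exact outputs. The dataset, model, and threshold are assumptions for illustration.

```python
# Minimal sketch of probabilistic testing: instead of asserting exact
# outputs, assert that a metric on held-out data clears a threshold.
# Dataset, model, and threshold are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def test_model_clears_accuracy_floor():
    # Synthetic stand-in for the structured, tabular data described above.
    X, y = make_classification(n_samples=2000, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)
    # A floor, not an exact value: retraining may shift the number
    # slightly, but a real regression should trip this assertion.
    assert accuracy > 0.85
```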

A

This brings us on to something that's very relevant at the moment: the impact this type of technology is going to have on DevOps practice in general. There's been some discussion this week between some of the CDF projects, where there's quite a strong difference of opinion...

A

...about whether MLOps is actually part of the DevOps process, or whether it can be safely ignored as somebody else's problem. And I think it's very clear from technologies like this that there's an immediate role for this sort of technology in the DevOps pipeline itself.

A

If you're taking your pipeline logs and feeding them into a model, then having the model monitor all of your pipelines, tell you in plain English what's going wrong, and suggest fixes — that is going to be a massive productivity boost, yeah.

B

That application of models — is that what some people call AIOps? The term gets used when AI technologies, be they trained models or other things classified as AI, are applied to operational data: logs, metrics, monitoring. My understanding is that people would tend to file this under AIOps, even though it's not really ops in this case — it's AI applied to DevOps practices.

A

Yeah, that's my...

B

Yeah, sorry.

A

The point I was trying to make is that this sort of technology brings the opportunity for big productivity advances in many of the things it touches, and DevOps itself will not be immune from that. So the work we're doing to try and align the DevOps and MLOps worlds is actually quite important, and we need to make sure we're doing the appropriate level of communication and evangelization to get everybody on board.

A

With this — because the risk is that projects that don't factor MLOps and AIOps into their thinking become irrelevant quite quickly.

B

Yes.

A

As their users start to demand...

B

Yeah, functionality that you can only...

A

...deliver this way.

B

Yeah. Some of the models I've been building are at a much more modest scale than your neck of the woods — because the data's more tabular, it's more modest. It definitely is something any developer could do...

B

...who has an interest. Any reasonable system now has a lot of data, and we're building more and more on existing databases, existing APIs and existing services. The era of a Heroku app with a fresh Postgres database that you populate with a to-do list — that's 2005.

B

We're now building systems with huge amounts of existing data. Everything's always connected online; there's no offline anymore. So there's lots of other data you can bring in very cheaply and easily.

B

So it's now about what else you can do with the data. To me, training and deploying models feels like web development at the turn of the millennium: it was a thing some people did — not all developers did it — and then it took off. It feels like that.

B

There was a blog post someone wrote called Software 2.0 — instead of writing things by hand, a modest model might have between 100,000 and 10 million parameters that are tuned...

B

...in a typical training run. That's far more than a team of developers could ever write, and it can change day by day. So it seems like something all developers should do, rather than having a team of data scientists over here and the developers over there. There'll still be that as well — people who really specialize in the data, or domains that are very sensitive — but I think there's a whole lot of low-hanging fruit. The APIs are a bit strange, but not out of the realm of familiarity.

B

Think back to doing client-server stuff in the 90s and then coming across the web — that was pretty weird. We're both old enough to remember that transition, and probably earlier ones. This is not really that different. Understanding what a data frame is, what a feature is, and how that's subtly different from other things — that's not out of the reach of a developer, I don't think.

B

I totally agree, and I think there could be teams doing it that don't even have data scientists — well, I know there are people doing that. They're just working it out; there's plenty of material out there, and the documentation's great.

B

My only complaint is that I'm forever coming across notebooks and going: oh gosh. Half the time they don't work, and I'm forever taking code out of notebooks and putting it into regular Python with unit tests and so on.
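
For what that workflow can look like: a mechanical export with nbconvert, then lifting the logic into a plain function that a unit test can exercise. The function and test below are an invented illustration, not code from the discussion.

```python
# Hedged sketch of the notebook-to-module workflow: first a mechanical
# export (run in a shell):
#
#     jupyter nbconvert --to script analysis.ipynb
#
# then lift the useful logic into a plain function that a unit test can
# exercise. The function and test below are invented for illustration.
import pandas as pd

def normalize_amounts(df: pd.DataFrame) -> pd.DataFrame:
    """Scale the 'amount' column into the 0-1 range."""
    out = df.copy()
    lo, hi = out["amount"].min(), out["amount"].max()
    out["amount"] = (out["amount"] - lo) / (hi - lo)
    return out

def test_normalize_amounts():
    df = pd.DataFrame({"amount": [0.0, 5.0, 10.0]})
    result = normalize_amounts(df)
    assert result["amount"].tolist() == [0.0, 0.5, 1.0]
```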

A

So perhaps what we need right now is a model that gets really good at taking stuff out of notebooks and producing proper modules.

B

Yeah — I think Netflix, or someone, published a tool that did that, but they wanted to let their data scientists stay in notebook land, and that is kind of the reality for a lot of companies. I'm sure you know about Databricks — they're a huge open source success story, and I'm sure a lot of that is powered by their own way of running notebooks and things like that.

B

Yeah, I think notebooks are a reality for a while yet; I can certainly see the benefit. I have started using this other library — I'll post it here in the notes.

B

It might have been someone here that suggested it, I don't know, but it's Streamlit. Can I paste into that document?

B

It's a web framework written in Python that lets you very quickly...

B

I can't seem to paste into the document. Are you able to edit the document? Okay.

A

Yeah, yeah.

B

That's...

A

All right, never mind.

B

I can type, but I just can't copy and paste: streamlit.io.

B

It's this tool that natively works with the constructs of machine learning libraries — data frames, data sets, the various plotting tools — taking simple inputs, and it renders them as a responsive web app. I think it actually uses React, but you never see that. You just say: I want a number input, I want a slider, I want a dropdown; here's the data frame to populate the values; show me the data frame, or pass it to this.

B

So you can very quickly put data-heavy stuff on the web. It reminds me of what Rails was to databases back in the 2000s — you could go: look, here's the table, just show it on the web and customize this and that. It's like that, but for machine learning concepts.
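
A small sketch of the kind of app being described, using the widgets named above — a number input, a slider, a dropdown, and a data frame rendered straight to the page. The dataset and column names are invented.

```python
# Small sketch of a Streamlit app using the widgets named above. Save as
# app.py and run: streamlit run app.py
# The CSV file and column names are invented for illustration.
import pandas as pd
import streamlit as st

st.title("Model explorer")

n_rows = st.number_input("Rows to show", min_value=1, max_value=100, value=10)
threshold = st.slider("Score threshold", 0.0, 1.0, 0.5)
region = st.selectbox("Region", ["all", "north", "south"])

df = pd.read_csv("predictions.csv")  # hypothetical predictions file
if region != "all":
    df = df[df["region"] == region]

# Render the filtered frame and a quick chart straight to the page.
st.dataframe(df[df["score"] >= threshold].head(int(n_rows)))
st.line_chart(df["score"])
```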

B

So I think things like that could excite developers and get people interested who aren't notebook-centric data scientists. Data scientists like having the notebook where they're explaining and justifying the data; developers mostly want to see the results. Being able to graph something in a notebook is all very good, and explaining why you normalized this thing this way is all very well, but a developer wants to put that in a unit test and have it pass or fail.

B

So that was interesting — something I came across and started using. It's very easy to host; you can put it on some sort of serverless platform. I've been using Google Cloud Run, so you have a single Docker container that fires up when it needs to. It's a neat tool. The other thing is, I found a few conferences — I'll see if I can paste them in the notes here. I can't.

A

Why isn't it working? If you're not logged into the document, that might be it.

B

Is there a trick to logging in?

A

You just need to be logged in to your Google account.

B

I'll try it with my work one — that's what I'm currently signed in as. Hmm. It shows me as an anonymous Australian turtle, but I'm logged in as me; I've got my icon...

B

...at the top right.

B

Maybe if I send it to you in the chat here, you could paste it in — save some time.

B

There are three conferences that came up that I thought would be interesting to the non-data-science audience: SpringOne, obviously cdCon, and All Day DevOps. There are probably some others that have come up...

B

...since then.

A

Yeah. I'm still waiting to hear whether our slot for cdCon has been accepted.

A

Hopefully that will go ahead. I've started a conversation with a group in Canada that's doing a DevOps-focused conference, so it's possible we might be able to get a slot there. And I've just made contact on AI Camp.

A

So I'm going to ask around there and see if there's any interest in us doing a slot for...

B

...them. And do you have content that you talk about already — a slide deck or some starting material that others could use? Because I'm struggling to think where to start. I was thinking the first thing would be either a personal or a corporate blog post talking about this, but I'm struggling to find the entry point.

B

So, any...

A

...past...

B

...presentations?

A

There's the cdCon talk that I did last year, which was an overview of what we're doing and what our goals are.

A

But I haven't got a specific deck, because what I tend to do is just talk through the...

B

...points. Yeah, I'm just trying to think what's a good, catchy entry point for the audiences I have in mind. For example, SpringOne: they're developers that are curious to try the next thing, so the angle for them would be that there's bound to be something in this that is interesting — or controversial. It's always good to have something controversial.

B

You know, how it can be abused, or something like that.

A

Yeah. I think what we should probably start to do is watch for examples of the things we're predicting, and then make sure we're producing blog content that links each example back to the roadmap. That's a good way of getting people to pick up on real-world examples of things they haven't considered...

A

...before.

A

Other activities are going on at the moment too. You've probably seen I've created a draft 2021 document in the repo, so that's now available for...

B

Yeah, you mentioned that.

B

And that is in the roadmap 2021 document? Yep.

A

That's all good to go now. I'm also working with another CDF SIG that is putting together a best-practices document for DevOps; I'm producing some other content for that, and I'm also going to include MLOps...

A

...as part of that document.

B

There was something I came across today that I was wondering whether we covered last year: the IP that's contained in a model. The example was...

B

I'm looking at training models on our own data — eating your own dog food — and applying that. Then some of that model is relevant to another customer: another customer signs up and creates their own account with their own set of data, but the model trained on customer A is partially relevant for customer B, so you can transfer it over and do another fit.

B

You fit that to customer B's data and basically transfer-learn — add a bit more to the model — and as you get more customers, it gets smarter and smarter. Not all of it's relevant, because there might be specific things learned in the model that only apply to participants in customer A...

B

There are different users in customer A that behave in certain ways, and they won't exist in customer B. That's fine.

B

The model can handle that, because it's a big space of numbers; it's not going to think that this person is that person. But other things are in common — there may be certain dimensions that are the same. So who owns that IP? You've trained a model on one customer's data, another customer's data, a third, a fourth, a fifth, and it gets smarter. You as the vendor benefit, and your customers benefit, but you're effectively using their data to train up your model.

B

Oh yeah — and does the customer have to know? What are the issues around that? Maybe there's no issue, because this is how things like search engines and recommendation engines work: when you go to Amazon or Netflix, or you search on Google, you're always participating in an unknown way in a model that's retrained. So maybe it's not a big deal, but it seems like in an enterprise setting...

B

...it was more explicit. You could have discrete models — separate models trained per customer, with no connection at all; you're running a SaaS or whatever — but it seems like the benefit would be to transfer-learn and grow the knowledge.
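
A hedged sketch of the customer-to-customer transfer learning being described: start from the model fitted on customer A's data, freeze the shared layers, and continue fitting on customer B's data. The framework choice (Keras), shapes, file names, and stand-in data are all assumptions.

```python
# Hedged sketch of cross-customer transfer learning with Keras: load the
# model fitted on customer A, freeze the shared layers, and continue
# fitting on customer B's data. Shapes, file names, and the stand-in data
# are all assumptions.
import numpy as np
import tensorflow as tf

base = tf.keras.models.load_model("model_customer_a.h5")  # trained on A

# Keep the shared representation; let only the last layers adapt to B.
for layer in base.layers[:-2]:
    layer.trainable = False

base.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
             loss="binary_crossentropy",
             metrics=["accuracy"])

# Stand-ins for customer B's feature matrix and labels.
X_b = np.random.rand(500, 20).astype("float32")
y_b = np.random.randint(0, 2, size=(500,))

base.fit(X_b, y_b, epochs=5, batch_size=32)
base.save("model_customer_b.h5")
```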

B

But then, do you own the IP? And I think we did cover this in the MLOps roadmap: if customer B then leaves and, under whatever regulation applies, has the right to have their data scrubbed, then the model has to be retrained without their data, right?
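
A sketch of that scrub-and-retrain scenario — drop the departed customer's rows and retrain from scratch, since deleting the raw rows alone leaves their influence baked into the old weights. Column names, the file, and the model are illustrative assumptions.

```python
# Sketch of the scrub-and-retrain scenario: drop the departed tenant's
# rows and retrain from scratch. Column names, the file, and the model
# are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("training_data.csv")        # hypothetical pooled data
df = df[df["customer_id"] != "customer_b"]   # remove the departed customer

X = df.drop(columns=["customer_id", "label"])
y = df["label"]

# The retrained artifact replaces the previous model entirely.
model = RandomForestClassifier(random_state=0).fit(X, y)
```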

A

Right, that's...

B

That was one of the scenarios we covered, I think — but I didn't know if we talked about the ownership of the IP.

A

Yeah — this is actually one of the next big challenges in information technology in general, and I know I've been diving into it quite deeply with the IEEE on the IRDS roadmap.

A

I can't remember if we've also included some of this in our roadmap, but it's certainly worth a check to make sure. The scenario becomes very obvious when you look at industrial applications. For example, in the semiconductor industry there's a lot of work going on in...

A

...providing AI models for fabs, to help manage the process activities and to evaluate the quality of the wafers coming out of the process.

A

But what you have in that situation is quite a complex mix of IP, because the fab is owned by one company, and what happens within the fab is effectively their IP.

A

Their recipe for running the fab is their IP, but the fab itself is made up of a large number of pieces of equipment, bought from different vendors and interconnected into one or more lines within the fab.

A

Each piece of equipment is covered in sensors and capable of generating a large amount of data. That data is intrinsically used to control the piece of equipment, but it can also provide valuable information about the state of a product running through the line. So there's work going on at the vendor level to build models that help run the equipment better, but there's also a desire to have end-to-end models that optimize for the product that's...

B

That is almost exactly the same thing I'm looking at — obviously not fabs and hardware, but the same idea: you've got a customer-specific model and then an overall one, yeah.

A

But the challenge there is that each piece of equipment is effectively doing an operation based on an operating model that is the vendor's IP.

B

Yeah.

A

And they're doing it on behalf of the fab owner, who has ownership of the arrangement and the settings for the equipment.

A

So that means that if you're running a meta-model on top of this, there's a risk that you're effectively modeling the IP of the vendor of each individual piece of equipment, because you're capturing inputs and outputs and learning about what that thing actually does, as a result of having access to a lot of the sensors it uses. Yeah.

A

So you've got this multiple challenge: you have some intrinsic IP over the line that you've constructed, but there's also inherent IP in each of the pieces of equipment on that line. And then there's a third problem, which is that you want to be able to pass this data up and down your supply chain, because you're only a piece of a bigger puzzle, where your...

B

...end customer...

A

...actually also needs a model to understand...

B

Yeah — so the IP then leaks outside, yeah. I mean, if it was all contained inside the fab owner, then you could understand it: it might technically be a problem, but no one's really going to fuss about it, because it never leaves. But what if that fab owner wants to sell that knowledge as a model, or, like you said, it's part of a wider supply chain? If you imagine the models were hand-coded algorithms...

B

...written by a developer who wrote the millions of parameters, and unit tests and so on — then I guess you could...

B

Even that would be complex, but you could probably solve it — we have copyright for that, and libraries, and there's fair use and there are API surfaces. But in this world there's no developer tuning these models; the system is kind of building itself.

A

It's a commercial problem, in that actually nobody's going to care that much as long as they're getting paid as a result of the data being used.

A

So what's needed is the right commercial framework to allow people to cross-license...

A

...this data, have some level of monitoring and trust-building to ensure that abuses aren't happening, and then a payment model so that people are paid fairly for utilization of things that are intrinsically their IP.

A

So what needs to happen — and this is what the recommendation has been through the IEEE — is that we start to work on a standard model for cross-licensing in these scenarios, so that what you're effectively doing, by building the superset of these things, is creating a new product which is jointly owned by...

B

It's a derivative work, the same as with software licenses: you consumed another work, and depending on its license you are now producing a derivative work. If it's some kind of Apache or MIT license, that's no big deal — sure, it's a derivative work, but it's still your own. If it's GPL, it's viral, and so on. It's well defined in software, and somewhat tested, but yeah.

A

What I'm interested to see is the first example in the social space where a company decides to treat their aggregate data as a cooperative — effectively paying its users for the contribution of their data to the overall product quality.

A

So yeah, you're using a product, you're paying to use it, but you're also getting some kickback or dividend from your contribution. And I think that's going to be very interesting, because it will actually encourage more lock-in to a solution.

A

Because if you get involved in a product early on and become a member of that cooperative, then your data is actually earning you some value, and as that scales you're potentially earning significant enough amounts that it makes you want to stay with the product — because you feel a sense of co-ownership.

A

The challenge is in building manageable solutions to actually track and maintain those arrangements, I mean.

B

Yes — I've heard of companies, or new companies, that are looking at publishing models where the model is the IP they sell: natural language understanding in a specific vertical domain, say. They've put a lot of work into accumulating the data and training it, in medical or accounting domains, and presumably they ship you a model or you use it as a service. But say they ship you the model as some binary — you can then add to that.

B

Correct me if I'm wrong, but you could then load up that model — assuming it allows it — add more data to it, adjust the parameters again, and so you've created a derivative work: not just by using it, but by retraining it. In the software analogy, that's like taking their source code and modifying it — you're not just linking against it; you're actually modifying the source.

A

Yeah, I always have to...

B

...go back to your...

A

Yeah, back to your original scenario, where you have a service provider that's training models, and then a series of commercial users who want to benefit from that model but are also providing data through the system which can be used to improve it.

A

Under this approach, what you could do is commercially license your relationship with each of those users, such that you have a written data-usage agreement that allows you to consume...

A

...pseudonymized content to improve the model, but at the same time provides a dividend to that company — you're paying them for the use of that data. And then that company has the option to pass the dividend on to individual users...

A

...if what they're doing is aggregating data from end customers. So you get this chain of data-sharing agreements with a commercial element to them, and then everybody's getting paid for the value they're creating — which makes them more likely to want to share.

B

Right.

B

All right, I'm going to sign off. Next time it'll be earlier in my time zone, so we can talk longer — I'd love to get back into this; it's fascinating. I think we should dive a bit deeper into the IP question and maybe start fleshing it out. Yeah.

A

But...

B

Yeah, it'll be just...

A

...worth writing a section on this?

B

I think it is, yeah — because it comes up a few times when I talk to people, and it'll certainly come up once it's explained to people and they go: holy crap, what are we doing here? All right, well, I'm going to sign off and talk to you next time at a better time zone — that'll actually be really good. Otherwise...

A

...see you...

B

...soon. Yep, see you on Slack.

B

Bye.

B

From YouTube: CDF - SIG MLOps Meeting 2021-03-25

Description

For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/