RIPE 85

MAT Working Group

27th October 2022

At 9 a.m.:

MASSIMO CANDELA: Hello. So, we were planning to wait a minute but since we are already above that minute, we can just start. Well, welcome, good morning, welcome to the MAT Working Group. Together with Nina and Stephen ‑‑ Nina is here physically, and Stephen is connected remotely ‑‑ we are the Chairs of the MAT Working Group. MAT stands for Measurement, Analysis and Tools, and this is what we will talk about. We have a really packed agenda of what I believe are very interesting topics, and I think we can start immediately.

So, this time we did a slightly new thing, let's see how it goes: we introduced a subtrack, and that is about vantage point selection. The subtrack will take the first 30 minutes. Vantage point selection is an extremely interesting and important topic both for active and passive measurements. Whenever you do measurements, especially when you want to make them scale, you have this problem of vantage point selection. Imagine you want to find the location of an IP and you want to use latency measurements, let's say with Atlas: what are you going to do, use 11,000 probes for each IP? Which vantage points can you pick so that they are the best for your measurement? This is just an example, but the presenters will talk about this in more detail. After this we will resume with presentations about tools and analysis. Another thing: we know that the audience of MAT is a healthy mix of operators and researchers, researchers both academic and not, so we strongly prefer presentations that are appealing to both, and we ask the presenters to make an extra effort in expressing the guidelines and the takeaways that will have an impact on the Internet, the current Internet and the future of the Internet. Of course we also look for your feedback, and I have written here our mailing list.

I also would like to thank Aoife for the stenography and Guy for the scribing; they are both here, so don't speak too fast, make their life a bit easier, and thanks for the hard work.

And we can start with the agenda. I will introduce each speaker as we go. The first two presentations are about vantage point selection, and we can immediately start with the first presenter. Thomas Holterbach obtained his PhD in 2021 and is now at the University of Strasbourg, where he works with professor Cristel Pelsser ‑‑ she is well known in this community, I say hi and hope to see her back soon ‑‑ on how to improve Internet routing performance, security and operability. Thomas will present 'Measuring Internet routing from the most valuable points'. Thomas, the stage is yours.

THOMAS HOLTERBACH: Thank you for the introduction and good morning, everyone. I would like to start this presentation with this: this is the median number of BGP routes that are collected every hour by the RIPE RIS infrastructure, and you can see that last year this number reached 45 million. You can see that it actually increased exponentially, which means that during the coming years we can expect even more routes to be collected. We see this increase for two reasons: first, because there are more and more vantage points, but also more prefixes advertised in the Internet, and for every new prefix that is advertised there will be new BGP routes collected.

So of course, one common solution, which probably we all use, is to just select a few vantage points and take the routes that they collect. If you take this simple example where there are a few ASs connected to each other, and a few vantage points as well, what we can do is take the data from a couple of vantage points; we will have less data to process, but in the meantime we might lose some information, right?

So, selecting the right set of vantage points matters, but it's challenging, because there are two conflicting phenomena. First, we observe redundancy among the routes that are collected, and this effect is amplified by the fact that the vantage points are very often positioned in the very core of the Internet. This is a figure where on the X axis you have the 70,000‑something ASs and on the Y axis you have the CDF of the vantage points, and you can see that around 5,000 ASs, so a rather small fraction of them, host 62% of the vantage points. This is the reason why we have so much redundancy between the routes that we collect.

But with those routes we also see sparse BGP event visibility. We took this table from MANRS, published in 2021, which shows that 65% of the BGP hijacks that were detected in 2021 were detected by only 20 or fewer vantage points. So this is only a very tiny fraction of the vantage points.

So here, basically, there is a trade‑off. On the one hand, we want to remove as many vantage points as we can in order to avoid redundancy, but on the other hand we want to keep as many vantage points as possible to make sure we will observe the events we might be interested in.

Today, I would like to introduce you to MVP. This is a system that selects a set of vantage points which strikes a good balance between avoiding collecting too many redundant routes and still being able to collect the routes that will allow you to detect the events you might be interested in.

In other words, the goal of MVP is to select a set of vantage points that maximises the utility of the data and minimises its volume. First, MVP quantifies the observations of the vantage points for past events; then it measures the similarity between the vantage points for every event; and finally it selects a set of dissimilar vantage points.

Let's first see how it quantifies the observations of the VPs for past BGP events. MVP computes the change induced by the appearance of new AS links in the topology, using topological features. Let's look at the same topology: AS 3 and 4 start to peer, so there will be a new AS link between those two. What MVP does here is compute topological feature values before and after the change, to quantify the change induced by this new link. And then you do this for all the vantage points, because there are many and each vantage point sees a different partial view. So we do this computation for all the vantage points, again using topological features.
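
To make that step concrete, here is a toy illustration, not MVP's actual code, of quantifying the change a new AS link induces in a single topological feature, using Python and networkx; the graph and the chosen feature are invented for the example.

    import networkx as nx

    # A vantage point's partial view of the AS topology (toy graph).
    g = nx.Graph([(1, 2), (2, 3), (2, 4), (3, 5)])

    before = nx.transitivity(g)   # a triangle-based topological feature
    g.add_edge(3, 4)              # AS 3 and AS 4 start to peer
    after = nx.transitivity(g)

    print(after - before)         # the change induced by the new link

In MVP this computation is repeated per vantage point and per feature, 20 features in total as mentioned next.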

We use a total of 20 different topological features, which can be divided into three categories; here I am not going into the details. We use so many to make sure that MVP does not overfit for one particular use case but remains very generic, regardless of what your objective is.

Then the second step is about measuring the similarity between the vantage points, and this is done for every event. How does MVP do it? Here I show a little example on the right: instead of taking all the topological features I am just taking two of them, so we have only two dimensions, the number of triangles and the average neighbour degree, and the dots here are the VPs. We would have many more in practice; here there are a few for simplicity. We just run K‑means, a clustering algorithm, and we can cluster the vantage points; two vantage points in the same cluster basically observe the event the same way. Okay?
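
As an illustration of this clustering step only, here is a minimal sketch assuming scikit‑learn, with invented feature values, not code from the talk:

    import numpy as np
    from sklearn.cluster import KMeans

    # One row per vantage point: the change observed in two topological
    # features (number of triangles, average neighbour degree) for one
    # BGP event. The values are made up for the example.
    features = np.array([
        [0.90, 0.80],  # VP 1
        [0.10, 0.20],  # VP 2
        [0.85, 0.75],  # VP 3
        [0.15, 0.10],  # VP 4
    ])

    # VPs in the same cluster observed the event in a similar way.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
    print(labels)  # e.g. [0 1 0 1]: VP1/VP3 alike, VP2/VP4 alike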

Finally, MVP selects a set of dissimilar vantage points. How do we do that? Well, we take many past events, 750 in total, and we make sure that every event we take is visible by at least 10 vantage points, and we make sure to take events that are scattered everywhere on the Internet. Then MVP runs the clustering that I showed you before on all those events, independently from each other.

MVP uses a pair‑wise similarity score that estimates the similarity of a pair of vantage points across all the events. Basically this score is a value between 0 and 1: 0 means the two vantage points are dissimilar, whereas 1 means they are very similar. Here I am just showing an example again with three events only; in practice we have 750, but for simplicity I show three events, with the clustering for each of them. Take two vantage points, VP 1 and 2: you can see that they are never in the same cluster, so they are very dissimilar. We do the same for VP 3 and 4, but this time you can see that for events 1 and 3 they are in the same cluster. So their similarity score is 0.67, two events out of three, so they are rather similar in this case.
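
A small sketch of how such a score could be computed, using the three‑event example from the talk; the cluster labels are invented to reproduce the numbers mentioned:

    # labels_per_event[e][vp] = cluster of vantage point vp in event e;
    # one independent clustering per past event (750 in MVP, 3 here).
    labels_per_event = [
        {"vp1": 0, "vp2": 1, "vp3": 0, "vp4": 0},  # event 1
        {"vp1": 1, "vp2": 0, "vp3": 1, "vp4": 0},  # event 2
        {"vp1": 0, "vp2": 1, "vp3": 1, "vp4": 1},  # event 3
    ]

    def similarity(vp_a, vp_b):
        # Fraction of events in which the two VPs fall in the same
        # cluster: 0 = always dissimilar, 1 = always similar.
        same = sum(1 for ev in labels_per_event if ev[vp_a] == ev[vp_b])
        return same / len(labels_per_event)

    print(similarity("vp1", "vp2"))  # 0.0   -> very dissimilar
    print(similarity("vp3", "vp4"))  # ~0.67 -> rather similar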

MVP uses this similarity score to greedily build a set of dissimilar vantage points. How does it work? We first select the most dissimilar vantage points, and then we select the vantage point that is the most dissimilar to the ones that have been selected already. We repeat this process in a greedy manner to obtain a set of vantage points.
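
A sketch of that greedy loop, reusing similarity() from the previous snippet; seeding with the most dissimilar pair follows the description above, and the real tool's tie‑breaking may differ:

    def greedy_select(vps, similarity, k):
        # Seed with the most dissimilar pair of vantage points.
        selected = list(min(
            ((a, b) for a in vps for b in vps if a < b),
            key=lambda pair: similarity(*pair),
        ))
        # Then repeatedly add the VP least similar to the selected set.
        while len(selected) < k:
            remaining = [v for v in vps if v not in selected]
            best = min(remaining,
                       key=lambda v: max(similarity(v, s) for s in selected))
            selected.append(best)
        return selected

    print(greedy_select(["vp1", "vp2", "vp3", "vp4"], similarity, 3))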

Let's now evaluate MVP. Again, we focus on the trade‑off between volume and utility, and we evaluate MVP on three use cases: first, the number of discovered AS links; then the proportion of detected hijacks and new AS links; and finally the proportion of detected transient paths.

And we compare MVP against three baselines. First, a random strategy; this is probably what most of us are doing now, selecting vantage points randomly. We compare it to a distance‑based selection strategy, where we try to take the vantage points that are the most distant based on the topology. And finally we compare it to max AS links, another baseline where we try to select the vantage points that observed the highest number of AS links, okay?

Well, it turns out that MVP always exhibits the best trade‑off between volume and utility of the data. Let's focus for instance on this use case. Again, as I said, we are interested in the trade‑off between the volume of the data, here on the X axis, and the number of discovered links on the Y axis. As I mentioned before, MVP outperforms all the other baselines. It even outperforms max AS links, because that one only tries to observe the highest number of links but doesn't consider the volume of the data at all, whereas our solution tries to optimise the trade‑off between utility and volume of the data. Here, for instance, if you want to see 300,000 AS links, you could get those AS links with 58% less data if you use MVP compared to a random selection strategy.

For the two other use cases I invite you to look at the poster that we presented this week at ACM IMC. I would like to conclude this presentation with this QR code: we have an alpha version of MVP running on one of our servers, so if you want to try it out and get a list of vantage points you could use for your experiment, you can scan this and try to use those vantage points. Thank you very much and I am happy to take your questions.

MASSIMO CANDELA: Thank you very much for your presentation.

(Applause)


Do we have questions? Well, do we have questions online? I have a question for you. In particular, I wonder for realtime queries, how long does it take for your solution to actually return a set of vantage points that are useful for the query?

THOMAS HOLTERBACH: So the idea is that all this clustering and the technique that I described can be computed every day on one of our servers, and you as a user request the results, which will be very fast because you don't have to do any computation in realtime; you just request the set of vantage points that have been selected for this particular day. You could use this tool for most experiments, including tools such as BGPalerter, which use some of the BGP data, and that could also maybe improve the performance of these kinds of systems.

MASSIMO CANDELA: Thank you very much. We have a question.

AUDIENCE SPEAKER: University of Twente. When you analyse the neighbours of each one of these vantage points, do you just consider the AS path, or are you considering the feed of information from each one of them?

THOMAS HOLTERBACH: What do you mean by the neighbour of the vantage point?

AUDIENCE SPEAKER: For example, I saw you have RRC 15 ‑‑

THOMAS HOLTERBACH: You mean, ah, yes, okay ‑‑

AUDIENCE SPEAKER: And you have all the other ASs that are sending a feed to it; are you considering the feed as the information or ‑‑

THOMAS HOLTERBACH: Here we consider the vantage points feeding into the collector. If you look at the set of vantage points that you get, you should have an RRC something and an IP address, or maybe an AS number, which identifies the router in the AS which is exporting the routes to the collector. So this is not per collector, it's per vantage point, because otherwise you get all the data from the collector, but behind a collector there are a lot of vantage points. So no, we really work at the level of vantage points, the BGP routers that export their routes ‑‑

AUDIENCE SPEAKER: Thank you

THOMAS HOLTERBACH: But that's a good point.

MASSIMO CANDELA: Thank you again. Thank you very much.

(Applause)


So we can go now with the next presentation, still in the vantage points section, from Malte Tashiro. His topics of interest include Internet measurement with a focus on understanding Internet topology and dependencies. He works with large sets of RIPE Atlas probes, and he will present 'Better Atlas vantage point selection for everyone'.

MALTE TASHIRO: I will continue the topic of vantage point selection, but now for our nice RIPE Atlas. The introduction I can skip because we already had it. I do a lot of work with RIPE Atlas, and the quick TLDR and disclaimer: I know my ‑ office presentations are not supported, so you will get the whole slides without ‑.

What I will present is a tool that provides an alternative to the existing Atlas vantage point selection. When you create a new measurement you get this nice interface that asks you how many probes you want, and you can select either by AS or by area; this is basically a selection method that you can use as an alternative to the existing ones. Because the distribution of RIPE Atlas probes currently ‑‑ who knows how long in the future ‑‑ makes the default, or the worldwide selection, not that worldwide; or at least it has some bias, or focus, let's say.

This is the probe distribution map that you maybe have seen before: we have our large blocks in Europe and the US. If we put that into a distribution plot ‑‑ sorry, this data is from February so the numbers are a bit outdated now; I think we have 12,000 probes already, so more growth ‑‑ and plot this by country, this plot has one point per country and shows the percentage of the probes within each country. The higher up you go, the more countries are in your probe set, and the further to the right you go, the more probes are contained in a single country. So, for example, you have 150‑ish countries which each have less than 1% of the probes, and if you had a perfectly vertical line, your probes would be perfectly evenly distributed between the countries. If we look at the tail of the distribution, the country with the most probes is actually Germany, followed by the US; so yeah, if you look at the geographical size of those countries, very good representation. And if you then go to the AS level, RIPE Atlas at that time covered 3,600 ASs, but we have 360 to 370 probes in Deutsche Telekom alone, one German AS, and that makes up 3% of all probes and 23% of the German probes, for example.

And as was mentioned earlier, we as users cannot simply run measurements with all 12,000 probes, so we have to do a selection. This is the aforementioned interface: you select, for example, 1,000 probes and say, please give me a worldwide selection, but the result is what you see on the right. What does it give us? In this example, the number of countries represented by the probe set is reduced, it is now 93, but we still have the same top three, only in a different order, and now relatively, so percentage‑wise, even more probes are in the US. So, yeah, we wanted a worldwide selection and we still have the same thing, only with fewer countries. If I am a researcher and want to do measurements, the selections are randomised, so every time you do this you might get a slightly different set, but we tried it several times and the trend always looks the same. And if I'm a network operator, for example, who wants to know how my AS is reached from the world, it would be more like: how is my network reached from the US or Germany or France?

And yes, so again, the whole slide at once. What we now try to do is a very simple thing; we make no claims of optimality, we just want to have something better. This is what our system, or Atlas, looks like at an abstract level: we have our probes, and each probe is in one AS, but you might have ASs with multiple probes, as we saw. We now put these ASs into some relation to each other with a distance metric between each pair of ASs, and the idea is simple: we want to avoid probes that are close together ‑‑ whatever close means, depending on the distance metric ‑‑ because we expect the measurement results you get from them to be very similar, so you don't gain any new information. Why should we choose two probes that are directly next to each other?

We do that in two steps. The first step is a very broad simplification: we choose one probe per AS by design. That might seem harsh, but for us it was a good way of cutting down these large clusters of probes that are in the same AS. The second step is a bit more intricate: we want to reduce probe clusters, probes that are close together but in separate ASs. In the long version of this talk I would give a five‑minute explanation of how we do that, but we don't have too much time, so I give you the short version. We use the RIPE Atlas topology measurements to build distance metrics between all ASs that contain probes, and then, based on these metrics ‑‑ we support multiple distance metrics ‑‑ we select a subset of probes that are the furthest away from each other. Or, put differently, we iteratively remove one AS at a time, the AS that is the closest to all other probes, until our probe set is small enough for whatever size we choose.
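
A compact sketch of that pruning loop with toy distances, not the tool's actual code; the distance values are invented and the real implementation may aggregate distances differently:

    # Pairwise distances between probe ASs under some metric
    # (AS path length or RTT in the talk); toy values.
    dist = {
        ("AS1", "AS2"): 1, ("AS1", "AS3"): 4, ("AS1", "AS4"): 5,
        ("AS2", "AS3"): 3, ("AS2", "AS4"): 4, ("AS3", "AS4"): 6,
    }

    def d(a, b):
        return dist.get((a, b)) or dist.get((b, a))

    def prune(ases, target_size):
        # Iteratively drop the AS closest to all the others until the
        # set is small enough (one probe per remaining AS).
        ases = list(ases)
        while len(ases) > target_size:
            closest = min(ases,
                          key=lambda a: sum(d(a, b) for b in ases if b != a))
            ases.remove(closest)
        return ases

    print(prune(["AS1", "AS2", "AS3", "AS4"], 2))  # ['AS3', 'AS4']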

And if we do that ‑‑ here we have it for two distance metrics: we use the AS path length between the probes' ASs and also the roundtrip time. This is the same plot as before: the whole Atlas distribution, and WW is the worldwide selection; the higher up and to the left your line is, the more even your probe distribution. If we look at the green line, that is for roundtrip time, we get more countries, now 148 countries, but still a lot of the probes are in this case located in Russia, about 11% of the selected probes, which I guess makes sense, since roundtrip time most of the time has some relation to physical distance; so this method selects around 100 probes from Russia, I guess because the roundtrip times between them are relatively large. If you then use, for example, AS path length, which does not really look at physical, geographical distance but more at the Internet topology, you get a bit fewer countries, but at least the probes are spread out a bit more, because within a single country you can have so many ASs that are still topologically far away from each other.

I plot this here with countries and percentages per country, but of course the selection algorithm has no idea of countries; it only knows ASs and distances, and whether the probes then spread out over multiple countries or not is just a side effect of how the Internet topology maps to countries. We are also not arguing that having one probe per country would be the best; it's basically just that you can pick the distance metric that suits your use case, and if you want one probe per country you can simply select it like that, you don't need this.

The more important point: you can also use this, please do, and let me know if you have any ideas for other distance metrics. At the moment we have roundtrip time and IP hops, the number of hops, but we found the latter not too useful. We update this data weekly, and you can get the probe set via a web form. We also provide an API specification you can put into the RIPE Atlas interface, because sadly at the moment you cannot provide the interface with a list of ASs, so you have to use a very big JSON ‑ to get one probe from each AS. If you are a researcher or just interested, you can get the historic data, starting from March this year I think, and see how the rankings changed over time. We also use this approach to make some recommendations of where to maybe place new Atlas probes, but I don't have much time to explain that, and I would recommend you look at the talk in tomorrow's plenary session because I think it takes a bit more sophisticated approach to that. With that I conclude my talk; I am happy to take questions, and I am around afterwards if you want to know more details.
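
For reference, a minimal sketch of feeding such a pre‑selected probe list into a RIPE Atlas measurement request; the probe IDs and target are invented, only the request structure follows the Atlas v2 API:

    # Replace these with the probe IDs returned by the selection service.
    probe_ids = [1001, 2002, 3003]

    request_body = {
        "definitions": [{
            "type": "ping",
            "af": 4,
            "target": "example.org",
            "description": "ping with a de-biased probe set",
        }],
        "probes": [{
            "type": "probes",  # an explicit, comma-separated probe list
            "value": ",".join(str(p) for p in probe_ids),
            "requested": len(probe_ids),
        }],
        "is_oneoff": True,
    }
    # POST request_body, with your API key, to
    # https://atlas.ripe.net/api/v2/measurements/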

MASSIMO CANDELA: Thank you very much.

(Applause)
Questions? No questions. Well, I have one. They told me I should go here. It's again me. So, in various parts you repeated, also in the presentation, one probe per AS, and my question is: how difficult would it be, if it would be possible with your approach, to have a set of probes per AS? So more than one? What would be the ‑‑

MALTE TASHIRO: We thought about that, but that opens another can of worms. What we tried to do, because we wanted to do that, is to have this mesh‑type measurement between probes of the same AS, but sadly ‑‑ we thought it would be the case but it's not ‑‑ Atlas probes are not required to respond to trace routes themselves, which would be a very nice feature, maybe, to know the distance between them. If that were possible you could do a similar thing and just see, okay, if they are at least this far from each other, include multiple probes, because Deutsche Telekom and others have these very large ASs where it might be very reasonable to have more than one probe. It would be nice if the Atlas probes were required to respond to these things.

MASSIMO CANDELA: Okay.

AUDIENCE SPEAKER: Not exactly a question, but thank you; I already tried this one because you presented it at TMA, so I tried some examples that you provided. One thing that would be nice in this clustering you are doing: if instead of just getting a set of autonomous systems we could get, at the end, how representative that cluster is when we are talking about population. It would be very nice, because then we know that that cluster represents, I don't know, a million persons or something like that.

MALTE TASHIRO: We thought about combining it with the APNIC population estimates, which provide an interface.

AUDIENCE SPEAKER: Chris Amin from the RIPE Atlas team. The probes aren't required to respond to trace routes, or in general, because they exist in a very heterogeneous set of environments, so that's by design. Have you considered, or would it be helpful, to make use of the anchoring measurements? All of the probes are targeting RIPE Atlas anchors, a large set of anchors, and the anchors are always responding to the probes, so maybe you can get some idea of similarity that way. I mean, you don't have the mesh between the probes, but you kind of have an external reference source; it could be an avenue to look at.

MALTE TASHIRO: That's actually a good idea; basically, if you then have probes with similar ‑‑ yes, if you have clear differences between them. I had not thought about that, thanks.

Michele: A question maybe to you and also related to the previous questions. Would you consider integrating something like this into the RIPE Atlas user interface?

MALTE TASHIRO: I would always consider that, but I know the RIPE Atlas team have a lot on their plate with RIPE Atlas; of course I would be up for it. Emile Aben from the RIPE NCC also has some part in this.

AUDIENCE SPEAKER: It would probably be of little value if you are only doing that via the API, but for a regular user that wants an entire probe set, that would be interesting.

Chris: It's something we could consider, but it's always a question of exactly which algorithm to use; it's something we can discuss.

MASSIMO CANDELA: The audio question ‑‑ the person dropped out, so we are back on track. Thank you very much for your presentation.

(Applause)


We can go on. Now the subtrack is finished; the next presenter is Marcel Flores. He is a senior research scientist and head of research at Edgecast; he works on various topics, including caching, and focuses on understanding how operational systems interact with the Internet and how CDNs can be made faster and more reliable. Marcel will present 'ShakeAlert: detecting waves in the Internet control plane'. Do we have it? There we go, perfect. The stage is virtually yours.

MARCEL FLORES: Thank you so much. So, as you mentioned, I am from Edgio, which is the combination of Edgecast and Limelight, two hypergiants on the Internet, both with really big content delivery networks delivering content all over the world. One of the challenges of delivering content like that is we have really big networks that have a lot of routers, and many routers means many router failures: we have hardware failures, software failures. We have lots of providers ‑‑ both CDN networks peer very deeply ‑‑ and that means we have lots of provider failures; they have their own router failures and software failures and physical link failures, every sort of bad thing you can imagine. Even though we do a lot of monitoring on the network ‑‑ we do heartbeats and active measurements, making sure that the CDN is up and running ‑‑ any sort of external view you can get of how things are reaching the CDN will always be helpful.

So for that, what we are going to look at is BGP update messages coming from RIS Live, and in particular we are going to look at updates that contain paths with our network as the origin, the idea being that these are going to reflect changes on the paths towards us. You can imagine this diagram I have here: we have the CDN peered with some networks, we announce prefixes out, they propagate through these networks and eventually reach some RIS collector. Our hypothesis here is that if something bad happens on the network, for example the link to one of our providers there is damaged, that will generate a whole bunch of update messages as each of these downstream networks suddenly switches to an alternative path via another provider. This is going to create a large volume of update messages.

Now, in order to test this, we first needed to understand what the regular, steady state pattern of updates looked like. What we did is, we looked at the update volumes we saw for six different networks: we grouped the updates into one‑minute buckets and counted the number of updates across all collectors. Here we are not paying attention to what the prefixes are or anything, simply the origin network and the number of updates we see. We looked at two different CDN networks, a content network, two ISPs and a root letter, so a vast variety of different network sizes, deployments and architectures, and they have pretty different steady state behaviours, with the CDNs generating significantly more messages than the other networks. What we did also see is that when notable events happen on the Internet, this count often increases dramatically; these networks see pretty sudden increases in the messages during an event.
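
As a rough sketch of this bucketing, assuming the websocket‑client package and an invented origin AS; the subscription message follows the RIS Live documentation, and the origin is re‑checked client side:

    import json
    from collections import Counter
    import websocket  # pip install websocket-client

    ORIGIN = 64500    # hypothetical origin AS to watch
    buckets = Counter()

    ws = websocket.create_connection(
        "wss://ris-live.ripe.net/v1/ws/?client=shake-sketch")
    ws.send(json.dumps({
        "type": "ris_subscribe",
        "data": {"type": "UPDATE", "path": str(ORIGIN)},
    }))

    while True:
        msg = json.loads(ws.recv())
        data = msg.get("data", {})
        path = data.get("path", [])
        if path and path[-1] == ORIGIN:            # our network as origin
            minute = int(data["timestamp"]) // 60  # one-minute buckets
            buckets[minute] += 1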

So, what we want to do is capture exactly those cases. For that, we turn to outlier detection, and in particular we are going to use a density‑based detection algorithm, something pretty simple. For every new bucket we fill as time progresses, we look for K neighbouring buckets within a radius R over a time window W; if we find them, we consider this a regular update. If, however, that criterion is violated ‑‑ in particular, we are more than some radius away from the other points ‑‑ we declare this to be an outlier, a shake, and this will be a sign that there is some disturbance in the control plane.
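
A minimal sketch of that test, with invented parameter values, not the ones used in production:

    def is_shake(history, new_count, k=3, radius=50, window=60):
        # The new one-minute count is regular if at least k of the last
        # `window` buckets lie within `radius` of it; otherwise it is
        # an outlier, a shake.
        recent = history[-window:]
        neighbours = [c for c in recent if abs(c - new_count) <= radius]
        return len(neighbours) < k

    history = [120, 115, 130, 125, 118, 122]  # steady state counts
    print(is_shake(history, 124))  # False: a regular bucket
    print(is_shake(history, 900))  # True: an outlying spike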

So what we can do then is take these live BGP updates from RIS ‑‑ and actually you can stuff in whatever collectors you like ‑‑ and we feed them into a system we call the alert aggregator, which combines them with a set of CDN metadata. This includes which CDN sites are announcing which prefixes, which providers are available at each site, information like this. We output a stream of counts as a time series, which we pass through our outlier detection, and when the outlier detection finds an outlying point, we generate an alert.

I should mention the ShakeAlert system is named after the United States Geological Survey's ShakeAlert, which is their earthquake early warning system, and that is sort of the same idea as what we are trying to do here.

What we get when we run this whole system is what we call likely impact localisation. This is a little bit different from fault localisation: we are not necessarily trying to say where the problem happened, but we are trying to understand which CDN sites may be impacted when one of these events occurs. By using that CDN metadata we can say which sites will potentially be impacted by the changes in routing that are happening as a result.

Here you can just see a simple example where normally there is some steady state of updates, but during events ‑‑ I think this was an actual provider outage ‑‑ we see pretty significant increases that stand out dramatically.

So, the final question here is: what do these events actually mean? We all know BGP update traffic like this can be a little bit noisy; a lot of the time it doesn't seem to indicate much of anything, and the collectors and their peers are a relatively separate set from the eyeball networks that are generally fetching content from the CDNs, so maybe they don't really mean much. To answer that, we decided to try and measure events on the CDN side and correlate them with the shake events. We considered four different types of events: device resets, where we see our routers reboot or a line card or a significant component restart; BGP state changes, where the BGP sessions with our providers actually exit the established state; announcement changes, which might be something we would do manually on our side during maintenance, where we might withdraw certain announcements or announce new prefixes; and good old packet loss from site to site, where we required packet loss to five other sites in a nearby time frame. What we find is that the shakes do indeed correlate with these events, with 80% on average matching at least one of them. That said, this is a pretty broad set of events covering a lot of ground, so the shakes are not reliably telling you that one particular type of event is happening, but instead act as a general indicator that a large event has occurred on the network that you should be aware of.

So, in this way, it provides extra visibility to operators. Obviously we are not suggesting you rip out your active monitoring and trust only in BGP updates, but it acts as another tool in the tool belt, offering another view that is sometimes otherwise hard to get and is complementary to a lot of our existing tools. An interesting note: we found that in some cases, especially where you might otherwise be waiting for systems to time out before alerting you that something bad has happened, these alerts can arrive quite quickly.

The other thing you can do with this is monitor not only your own network but other networks as well, right? You simply look for different origin networks in the feed, which can be handy to see when there's turbulence in different parts of the Internet.

With that, I would be happy to open it up to questions.

MASSIMO CANDELA: Perfect, thank you very much. Thank you for your presentation.

(Applause)


It's question time.

Costas: Excellent work, first of all. I would just like to ask: I have seen in my small AS that, especially with withdrawals, RIS Live sometimes takes a lot of time to send the necessary updates. So did you face, in your own measurements, any such variance, I mean, big variance in delays, especially with withdrawals? In my case, what I noticed is that the looking glass on the RIPEstat side was showing a consistent view, but I got the update on RIS Live even hours later. So did you notice any such variance?

MARCEL FLORES: Because of how the announcements worked from the CDN, we are mostly looking at announcement updates here, and I haven't looked too closely at withdrawals. In general for updates, there are very weird temporal patterns, which I generally expect come from BGP convergence itself. I think with RIS Live there are some interesting temporal components; there is a cool blog post on RIPE Labs right now and some recent work improving the timing. But definitely there are weird temporal patterns: for some of these events in particular, we will see a big spike and then, some tens of minutes later, we will see subsequent spikes. I have generally attributed these to BGP itself rather than the collectors, but I don't know for sure.

CHRIS AMIN: I am the lead developer on RIS Live. I would indeed refer you to the blog post, which talks about some of the issues. There have been even more recent improvements and bug fixes, and there definitely were some issues specifically related to RIS Live rather than BGP or even the RIS route collectors, which should now be much improved. So if you were seeing such delays in the past ‑‑ you or anyone else ‑‑ I would expect them to be much reduced; we should always be around 3, 4, 5 seconds or so. Going forward, if you do see delays or weird temporal things, I would be interested to know whether they go away or continue, and, yeah, we can discuss. Thanks.

MARCEL FLORES: Yes, I have to say, RIS Live in general, even before the recent adjustments, was pretty impressively quick considering how much data it's pulling in and presenting, so we found it worked very well for these live alerting purposes even a year‑and‑a‑half ago.

MASSIMO CANDELA: I share the happiness about RIS Live, which I also use a lot and find in general really quick, in the order of seconds. However, we should go ahead. The next presentation is from Mariano Scazzariello, a PhD student in the computer networks research group at Roma Tre. His research focuses on data centre technologies, including network technologies, and he will present 'Kathará: a lightweight and scalable network emulation system'.

MARIANO SCAZZARIELLO: Good morning, everyone. Thanks for joining this presentation. I am Mariano and today I will present Kathará. So what is Kathará? It is a container‑based emulation system, a virtual environment in which you can run real networking software, and in this environment you can perform tests and experiments on the network. Currently it is used both on the academic and on the industry side. Kathará is a completely Open Source project, written in Python, and you can find all the source code on GitHub if you want to play with it. Here are some project numbers: currently we have more than 50,000 downloads, and many universities around the world use it. Kathará, differently from other network emulators like Containerlab, is compatible with all the main operating systems and Linux distributions, and we also ship a Python package if you want to do network automation through code.

Kathará uses a really simple configuration language to describe a network scenario. Basically, a network scenario is a directory on the file system that must contain a file called lab.conf, which describes the topology, and for each device you can have a folder which contains the real configuration of that device. As an example, here we have a small network topology on the left, and on the right we have the Kathará representation: you can see the lab.conf file, which describes the topology and the links between the devices, and you have the directory structure. Each device has a folder which contains the Quagga configuration, in particular the BGP configuration of the Quagga suite. Kathará uses containers, as I said, so one of its main strengths is scalability. In fact, you have two configuration modes: the first one is the single machine mode, which leverages the local Docker daemon, and with which you can deploy more than 1,000 devices on a laptop and more than 2,000 devices on a commodity server. Moreover, Kathará is one of the few tools available to have a distributed mode, which leverages a distributed physical cluster on which you can deploy really huge networks.
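
To make the format concrete, here is a tiny invented scenario along the lines described, two routers and a client sharing collision domains; the names are made up:

    lab.conf:

        r1[0]="A"
        r1[1]="B"
        r2[0]="B"
        pc1[0]="A"

    directory layout:

        lab.conf
        r1.startup          # commands run at boot, e.g. IP addressing
        r1/etc/quagga/      # the device's routing daemon configuration
        r2.startup
        r2/etc/quagga/

Each line of lab.conf attaches an interface of a device (r1[0] is interface 0 of r1) to a named collision domain, so r1 and pc1 share domain A, and r1 and r2 share domain B.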

Now I will show several use cases where Kathará is an effective tool; of course this is not an exhaustive list, they are just a few examples.

The first one is configuration testing. Since Kathará allows you to deploy complex, arbitrary network scenarios, you can test network configurations before deploying them in the production network. You can think of replicating a realistic copy of your network inside the tool and then testing on it, and you can also test interoperability between configurations, versions and different implementations from different vendors. Kathará emulates networks at Layer 2, so you can also run non‑IP‑based protocols such as IS‑IS.

This is a scenario which you can find on GitHub, and it implements a BGP hierarchy, as you can see, with different policies. Suppose you are the operator of AS 100: you can replicate your network inside the tool, and then you can test your IGP configurations, different BGP configurations or policies, or also include different network functions and test them. So what you can do with Kathará is basically everything; I mean, it gives a friendly environment in which the operator can test everything, from BGP policies such as multi‑homing, load balancing or a route reflector hierarchy, to completely different things such as SDN algorithms or configurations.

Another possible use case is testing what‑if scenarios for security purposes. What you can do is deploy your network and analyse how configuration changes affect your network both inside and outside, and you can also replay or test possible attacks and check the efficacy of the countermeasures implemented in your network. This is another simple network scenario with eight autonomous systems, and as you can see here, we have both some routers and RPKI validators with Krill deployed. This shows how easy it is to include novel technologies inside Kathará. I will also show a quick live example, if the video works. As you can see here, we have a terminal for each device; in particular, only the relevant terminals are shown. What is happening is that the client in AS 4 is pinging the web server in AS 1, and on the top right terminal we have the BGP control plane of the router in AS 4, which shows that it has two announcements for the AS 1 prefix and it selected the one towards AS 3. So I will play the video.

These are the two announcements, and now AS 7 tries to hijack the prefix of AS 1 to sniff the traffic, but as you can see it is not successful because the announcement is RPKI invalid. So this is just a simple example of what you can do in Kathará.

As a last use case, you can use the Python APIs to build complex frameworks on top of Kathará. These allow operators to build testing pipelines to automatically deploy and assess configuration changes before deploying them in a production network. You can think of creating a system where you emulate your topology, your entire infrastructure, put a change in the configuration, then perform some tests on this network with the changes, and if all the tests pass, you automatically deploy them in the network.
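
A rough sketch of what such a pipeline could look like with the Python package; the class and method names follow the examples in the Kathará repository, but they are assumptions here and should be verified against the current API documentation:

    from Kathara.model.Lab import Lab
    from Kathara.manager.Kathara import Kathara

    # Build an emulated copy of (part of) the network.
    lab = Lab("ci-test")
    r1 = lab.new_machine("r1", image="kathara/frr")  # image name assumed
    r2 = lab.new_machine("r2", image="kathara/frr")
    lab.connect_machine_to_link(r1.name, "A")
    lab.connect_machine_to_link(r2.name, "A")

    manager = Kathara.get_instance()
    manager.deploy_lab(lab)
    # ... run reachability / policy tests against the running devices,
    # and only roll the change into production if they all pass ...
    manager.undeploy_lab(lab_name=lab.name)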

It also allows vendors to build integration testing pipelines for both supported protocols and network function development. For this use case I show Sibyl, which is a system we developed. Sibyl assesses routing protocol implementations commonly used in data centres: it leverages Kathará, automatically deploys the topologies, and computes some standard and novel metrics for analysing the protocol behaviour. We tested both some Open Source implementations, including FRRouting BGP, and also a vendor implementation, which is RIFT by Juniper; this is a new protocol specifically devised for data centre topologies. During our experiments we tested topologies with up to 1,30 routers, and if you are more interested in the work you can check the paper, which is linked below.

So, in conclusion, Kathará allows you to deploy more than 1,000 devices on a common laptop. It also gives operators an environment in which they can deploy huge networks with no scalability constraints in the distributed mode. It is possible to emulate networks for both operational and research purposes, to test different implementations, configurations or what‑if scenarios, and moreover, it allows you to build frameworks for testing implementations and configurations on different network scenarios. These are some contacts. Kathará is completely developed by the networks and security research group at Roma Tre; we are the three main developers, and below you will find some projects and the QR code for the website. Thank you, and I am happy to take your questions.

AUDIENCE SPEAKER: What an excellent tool. It is exactly what I needed, because I have this need to create some software emulating my environment and all that stuff. Main question: do you support my vendor?
MARIANO SCAZZARIELLO: The same question.

MASSIMO CANDELA: Same question.

AUDIENCE SPEAKER: I had a hard time, a really hard time, and I just emulated a vMX, which took three minutes, because of the resources.
MARIANO SCAZZARIELLO: This is one of the main questions that we get, actually. So, basically, yes: if your vendor ships a container ‑‑ I don't know if it does ‑‑ you can just put it in Kathará as is. We did it with Juniper RIFT; they just gave us the container and we deployed it in Kathará.

AUDIENCE SPEAKER: The cRPD, but not the entire vMX ‑‑
MARIANO SCAZZARIELLO: No, but we are also working on an extension with which you can do hybrid scenarios with VMs and containers; you can use both.

AUDIENCE SPEAKER: From the University of Strasbourg. Thank you for the great presentation and the great platform as well. I was wondering if you use it for teaching, which I assume you do?
MARIANO SCAZZARIELLO: Yes.

AUDIENCE SPEAKER: And if you do, how do you use it? What is your experience with it? What sort of questions do you ask the students?
MARIANO SCAZZARIELLO: We use it at Roma Tre in one of our courses, and on GitHub you can find a whole repository of network scenarios that we use for teaching. They also have descriptions of the network scenarios, what happens in them, how you can change configurations and so on. And we don't have any questions from the students, I mean, I don't know ‑‑ we also use it for the examination of students.

AUDIENCE SPEAKER: So you ‑‑ you give it to each student, and they work on their network, give it back to you, and try to make something into ‑‑
MARIANO SCAZZARIELLO: Actually this is possible, but we don't do that in our course. We know some other universities do it: they split the network and each student works on a different part of it.

AUDIENCE SPEAKER (Massimo Candela): On this, I see on your website you have more than 10 other universities.
MARIANO SCAZZARIELLO: Yes, these are the known ones. Maybe there are more but we don't know.

MASSIMO CANDELA: I think it's good.

Colin: I have a question regarding the container networking environment. Can you simulate, on the links between these virtual devices, different levels of latency and packet loss, to simulate a global environment?
MARIANO SCAZZARIELLO: Yeah, so actually in the emulation the performance of the links is fake, let's say, but you can use tc, the traffic control tool, on the containers, and you can set latencies or packet losses and so on.
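
For example, two netem commands one might put in a device's .startup file to impair a link; the interface name is just for illustration:

    # add 50 ms of delay and 1% packet loss on the device's interface
    tc qdisc add dev eth0 root netem delay 50ms loss 1%
    # remove the impairment again
    tc qdisc del dev eth0 root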

Colin: Do you have to do that manually?
MARIANO SCAZZARIELLO: We are thinking of including this in the tool, but we are still working on it.

MASSIMO CANDELA: Thank you very much. I think there are no more questions. Thank you for the presentation.

(Applause)


Okay. So our next presenter is Valerio Luconi. He has been involved in several EU‑funded projects related to network performance, and his research covers Internet measurement and network neutrality. He will present 'Impact of the first months of war on routing and latency in Ukraine'. Valerio, the stage is yours.

VALERIO LUCONI: Okay. Thank you for the introduction. The work that I am presenting today is a collaboration, and our objective was to quantify the impact of the war activities in Ukraine on the Ukrainian Internet. This has been done from two perspectives: routing, to see how the network adapted to the events of the war both in the physical and the digital world, and latency, to see if there is a performance degradation, because latency is one of the indicators that can show how this degradation is perceived by end users.

We collected data from three data sets, from February 14, 2022 to May 7, 2022; that is, we collected data for ten days before the start of the war to have a baseline for comparison. From RIPEstat we collected the prefixes of Ukraine and Russia and their geolocation. We collected all the BGP updates in the considered time period, and from RIPE Atlas we collected all the anchoring measurements, the trace routes performed by Ukrainian probes.
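
For illustration, a minimal sketch, not the authors' code, of pulling one of these data sets from the RIPEstat Data API with requests; the AS is just an example:

    import requests

    url = "https://stat.ripe.net/data/announced-prefixes/data.json"
    params = {
        "resource": "AS15895",          # example Ukrainian AS
        "starttime": "2022-02-14T00:00",
        "endtime": "2022-05-07T00:00",
    }
    data = requests.get(url, params=params, timeout=30).json()
    prefixes = [p["prefix"] for p in data["data"]["prefixes"]]
    print(len(prefixes), "prefixes announced in the period")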

So let's start with the analysis. Here I show the number of routed ASs and prefixes per day, for Ukraine in blue and for Russia in orange; the two vertical lines correspond to the start of the war and to the start of the second phase of the war, when the Russian troops retreated from the north of Ukraine. For what concerns routed ASs, we can see a certain decrease after the start of the war down to a minimum, and then they start slowly increasing back when the second phase starts.

Overall, we counted 300 disconnected ASs, not all at the same time; they experienced intermittent disconnections.

We geolocated those ASs. Here in the map you can see in red the locations of the disconnected ASs and in blue the locations of the war activities, including attacks, battles and bombings. As you can see, there is a geographic correlation between the two, and this for us is an indicator that these disconnections could be mainly due to physical damage or occupation of territories.

Then we considered the routed prefixes of Ukraine. As you can see from the plot, there is an initial decrease and then a sudden increase, even above the original number before the start of the war. We dug a bit deeper into that and we found that some of the prefixes that were announced were completely new, while the others were just prefixes that were split into smaller sub‑prefixes: /24s or even smaller, even /32s. So we believe that this could be an indication of a possible defence against attacks on the BGP side.

Now let's focus on the BGP activity. We collected all the updates and divided them into announcements and withdrawals, and we show them for Ukraine in blue and for Russia in orange; these are the updates generated for the Ukrainian and Russian prefixes. As you can see, there is a peak in the number of updates corresponding to the start of the war, and just a few days later for Russian prefixes, and a general increase of the activity, especially on the Ukrainian side.

The increase of the activity is shared among all the prefixes, but the peaks are not really shared by all prefixes. And then there is a bump, which is shared among all prefixes, between March 28 and April 9. We do not know the causes of this bump, but we still think it is worth noticing.

Then we considered BGP hijack attacks. We considered four kinds of hijack attacks that we found in the literature; I will not enter into the details because I don't have time. We did not run a proper hijack detection algorithm; we simply collected the updates that were compatible with these kinds of attacks. So, these are just suspect updates.

In blue, we show the suspect updates concerning Ukrainian prefixes and involving Russian ASs, and in orange, vice versa. There is a general increase of activity when the war starts, especially in the first two rows, which are the suspect cases of prefix hijacks and sub‑prefix hijacks, the most simple ways of BGP hijacking. For the other cases, which involve hijacking also the AS, that is, announcing myself as a neighbour of the target AS, the increase is less clear, and on the Russian side we can see some spikes but not a proper increase.

Then, let's consider latency. We collected all the anchoring measurements made by Ukrainian probes and we divided them into three groups: those directed to Ukrainian anchors, those directed to Russian anchors, and those directed to the other European anchors. We can see that there is a general increase of latency and also an increase of the variability of the latency, but what is most interesting is that between Ukraine and Russia there is a step‑like increase which never goes down, and this corresponds to the BGP activity bump that we showed before. Finally, we investigated the direct links between Ukrainian and Russian ASs, and as we can see from the plot, after the start of the war there is a sudden decrease in the number of these links. We also collected the trace routes between Ukrainian probes and Russian anchors, specifically the pairs that were showing the step‑like increase, and we saw IXPs disappearing, for example Moscow IX, and tier 1 ASs appearing. This could be a sign that these direct connections were dropped and the paths were transiting through other providers.

To wrap up, we can see that war deeply influences the digital domain. We saw disconnections due to physical damage. We saw an increase in BGP activity and also in suspicious activity. We saw an increase of possible countermeasures to mitigate attacks. We saw an increase of latency, performance degradations, and disconnection between the two countries at war. However, as was pointed out also in other works presented at previous meetings and on RIPE Labs, the Internet has shown itself to be resilient: even with this intense war activity going on, most of the connectivity was still maintained, and 83% of the RIPE Atlas probes were still online at the end of the considered period. Just to mention, this work could not have been done if it wasn't for the data from RIPE, so we believe this data is very precious for the community. So I finish my presentation: our work has been submitted for review at Computer Networks, and you can find a pre‑print on arXiv at the link in the slide. That's it; I will be happy to answer any questions.

MASSIMO CANDELA: Thank you very much for your presentation.

(Applause)


It's time for questions. In the meanwhile, I have one for you, actually, about two things. The first question is: when you say latency, do you mean ping only, or did you also do other types of measurements?

VALERIO LUCONI: We did also other types of measurements: we collected the HTTP measurements ‑‑ I don't have the plot here but it is in the paper ‑‑ and we saw an increase also in that type of latency, both in the average and in the standard deviation. And especially for the Ukrainian probes towards Russian anchors, we saw that step‑like behaviour that we highlighted for the ICMP latency.

MASSIMO CANDELA: In one of the last slides you said some ASs dropped direct connections. Do you have any more detail about that? That was particularly interesting.

VALERIO LUCONI: So, yes, in particular we saw one Ukrainian AS, I don't remember the name or the AS number, that was ceasing all its direct connections with Russian ASs, which were like 500. Also, we found that the connections that remained were mainly established by bigger ASs ‑‑ bigger in terms of customer cone, which we gathered from CAIDA.

MASSIMO CANDELA: Thank you very much, thank you for your answer.

MICHAEL RICHARDSON: I guess I have two questions, but the more interesting one is: since you had to get historical data to make comparisons, I'm wondering if there was data you would have liked to have but we didn't think to collect ‑‑ we collectively, the community ‑‑ that would have made your analysis easier or more interesting?

VALERIO LUCONI: Okay. That's an interesting question. I hadn't thought about it because, actually, I was just focusing on collecting all that was available at that time. But, you know, some useful data could be the amount of traffic, if it could be public; there are very few sources, and I think it would not be possible to have it unless we privately contact operators. And of course our view is always limited by the number of vantage points and so on, but still, we believe that we had enough to obtain results that are somehow interesting and meaningful.

MASSIMO CANDELA: I think we should go to the next presentation. Thank you very much, Valerio; of course, other questions can follow offline. We go to the next presenter, an in‑house presenter from the RIPE NCC: Qasim Lone obtained his PhD in the Netherlands and recently joined the RIPE NCC R&D department, where he works on routing security. He has a really interesting presentation; I let him announce it. The stage is yours.

QASIM LONE: So, the RIPE NCC has run out of IPv4 addresses. Well, we have known this for quite a while, right? You can still get PI addresses if you go to local brokers; the price of one IP address, which I found at a booth outside yesterday, is around 50 US dollars. So today, I want to talk about 268 million IP addresses that are still unallocated.

My story starts pre‑1993; maybe some people remember classful IP addresses. We had Class A, B and C as Unicast, D as multicast, and E as reserved for the future. As we all live in the present, there was no future, and there still probably is no future.

So, Class E became 240/4 in CIDR notation but remained reserved. There have been several discussions about repurposing these addresses as Unicast addresses or private addresses, and two IETF drafts were proposed: Vince Fuller et al. suggested re‑designating the block from future use to limited use, and another draft suggested an extension of RFC 1918, making it private use.

Both of them did not progress to RFCs, and the opponents noted that the demand for IPv4 is so high that millions of IP addresses would be exhausted in no time; it's better that organisations now move to the newer version, that is, IPv6.

But there have been reports that the 240/4 block is being used unofficially, so we looked into our RIPE Atlas probe data to find this in the wild. We took a snapshot of a single day and we looked at trace route, ping and DNS data. There were no results for ping and DNS measurements; however, we did find 14.4 million trace routes that had one or more hops with a 240/4 IP address. Almost all the trace routes originated from two Amazon ASs, AS16509 and AS14618.
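
A minimal sketch of the kind of check involved, flagging RIPE Atlas trace route results with a hop inside 240.0.0.0/4; the result layout follows the standard Atlas traceroute JSON:

    import ipaddress

    CLASS_E = ipaddress.ip_network("240.0.0.0/4")

    def has_class_e_hop(traceroute_result):
        # traceroute_result: one parsed Atlas result; its "result"
        # field holds hops, each with replies carrying a "from" address.
        for hop in traceroute_result.get("result", []):
            for reply in hop.get("result", []):
                src = reply.get("from")
                if not src:
                    continue  # e.g. a timed-out reply
                try:
                    if ipaddress.ip_address(src) in CLASS_E:
                        return True
                except ValueError:
                    pass  # malformed address
        return False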

An example trace route looks like this. Very early on in the trace route we see 240/4, which gives us some validation that what we found is correct, that is, these ASs are using 240/4 address space internally. We also performed some active measurements: we had a website from one of the contributors that resolved to a 240/4 IP address. As expected, we found that 70% of trace routes towards this address had timeouts, but 34 probes were able to reach it, and all of those probes were hosted in AS701. It was very similar to the Amazon case, with the 240/4 hops one or two hops away from the probe. So that again gives us confidence that this is actually Verizon Business using 240/4 internally.

I don't have a lot more to say, but I want to conclude that this work has a lot of limitations: we looked at a single‑day snapshot, and we did active measurements for only one IP address. But nonetheless, we found that at least two big Cloud providers are using 240/4 internally. We expect to find more hints if we look further into the data, and also into future Internet measurement data, and if left unchecked, it will be challenging to assign this address space if there is a future for 240/4.

So I want to turn it around and ask these questions: why are these network providers using 240/4 internally? And, while I think the majority of the people in this room and the overall network community agree that IPv6 is the future, why is there still a market for v4, and why are hypergiants like Amazon and Alibaba investing more in buying IPv4 addresses? And do you think this is a problem, or does this problem need attention?

We had some discussions on this. I asked for a very limited slot, so I don't have time to go into the details, but you can read our Labs article, where I discuss these questions and try to answer some of them. I am very much looking forward to hearing what people here think about this. It was also posted on Hacker News, where there is an interesting v4/v6 debate; it reached the front page, and we got a lot of people interested in this topic. I want to open the floor for questions. Thank you very much.

(Applause)


MASSIMO CANDELA: We have the first question.

AUDIENCE SPEAKER: I think that the more we put into extending the IPv4 space, the less demand there is for IPv6 from the customers, from the providers, from the end users. So it's nice that we have such a block over here, but I personally tried, several years ago, to propagate some part of this block just for tests, and I found it's pretty often impossible: some platforms still check the type of an IP address and just block sending such announcements or such packets, or receiving them on such a box. So we still need to change a lot of software to be able to use such a block, and it's probably much better to invest in IPv6.

QASIM LONE: That's true, but there is the IPv4 Unicast Extensions Project; they are looking into patches and other things. As a proponent of v6 myself, I have nothing more than that, but I wanted to look because it's out there, and the Unicast Extensions Project, which I have also referenced in my work, is trying to propose an RFC around it; that made me interested to look into that block.

MALTE TASHIRO: More a comment than a question. For these probes, the 30‑something from AS701 that reached the target, we should follow up afterwards, because I think that's not specific to your target. We saw in some of our measurements as well that a number of probes reach the target, but in AS701 you get 250 timeouts and then a reply, so I think that is maybe a problem with the probes' network configurations, not specifically with reaching something.

QASIM LONE: We can follow up, but it was the first or second hop, and then there's the rest of the trace route.

MICHAEL RICHARDSON: I want to say that a large number of Cloud providers say they support IPv6, and then you combine it with Docker and find yourself with an IPv4‑only network. It's very sad, and that's one of the reasons pushing people to say, oh my God, just get a v4; but it's an internal process, we don't need v4, yet we wind up with a v4 even though it's me talking to me and I could do it over v6, but they won't let me. Really, I think an interesting thing would be to get one of the looking glasses, or Internet telescopes, to announce this address space and see what happens. I think, if nothing else, it would solve the problem of people squatting on it, because suddenly their packets would go somewhere else, and that would be a surprise to them, and we would have our statistics as to who is asking, right?

QASIM LONE: That, I think, is an excellent idea; I would definitely like to follow up on it.

MICHAEL RICHARDSON: You won't do it, okay.

MASSIMO CANDELA: Thank you for the question. On the Cloud part: recently a Cloud provider told me 'we do IPv6', but when I asked why I cannot set up my reverse DNS, they didn't support that; it was only possible for IPv4. Anyway, thank you very much for your presentation.

(Applause)


And I think it's just the closing remarks now. We are on time, that's surprising. Perfect. So, let's go to ‑‑ well, basically, the closing remarks are these:

Remember to rate the presentations, and if you have feedback, send an e‑mail to the chairs. In particular, we would like to increase the interaction on the mailing list, which recently has been a bit low. Please, if you have ideas, if you have measurements you want to share, or thoughts on the discussions that we had today, please share them on the mailing list. And with this, we are closing, and see you in Rotterdam.

LIVE CAPTIONING BY AOIFE DOWNES, RPR
DUBLIN, IRELAND