RIPE 85
Routing Working Group session
Main hall
25th October 2022
9 a.m.
JOB SNIJDERS: Good morning everybody. Welcome to the RIPE 85 Routing Working Group session. The Routing Working Group concerns itself with all aspects of IP routing technologies. So this covers BGP, IRR, RPKI, and today we have a number of fascinating presentations for your consideration. I hope you all enjoy them and learn something from them.
Our schedule is very packed. We rotate a shorter slot through the Working Groups and this time around Routing Working Group has a one‑hour slot.
So, let's get to it. The first presentation is by Alexander Azimov about his heartfelt love for route leaks.
EUGENE BOGOMAZOV: Hello everyone, we will be talking about our love of route leaks and what we want to do with that.
Sorry for the reminder, but a route leak is a situation where prefixes received from a provider or peer are re-advertised to another provider or peer. By doing so, the router becomes a link between different regions, without income. But the main problem is not even the lost profit from this type of connectivity.
First of all, when a route leak happens, your packets need to travel a longer distance, and you understand that this results in much longer delays.
Second of all, due to insufficient settings, packets can be lost because they never reach the receiver. Also, the overall quality of traffic can drop: if you are a small ISP and your internal infrastructure cannot keep up with the amount of rerouted traffic, your network will go down.
Also, don't forget that all these packets are seen by the leaker, who can potentially spy on all of them, gather information from them and use it in the future.
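The definition above boils down to a "valley" check on the leaker's import and export relationships. A minimal sketch (the relationship labels are invented for illustration; real detection also needs inferred AS relationships):

```python
# Sketch: a route is leaked when a prefix learned from a provider or a peer
# is re-advertised to another provider or peer (a "valley" in the AS path).
# Relationships are seen from the leaker's point of view.
def is_route_leak(learned_from, advertised_to):
    upstreamish = ("provider", "peer")
    return learned_from in upstreamish and advertised_to in upstreamish

print(is_route_leak("provider", "peer"))      # a leak
print(is_route_leak("provider", "customer"))  # normal downstream export
```

Exporting downstream (to a customer) is always fine; it is only the provider/peer to provider/peer direction that turns the leaker into an unpaid transit link.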
Okay. But how often do these incidents happen? We all know that the most common cause of a route leak is a misconfiguration of one kind or another, and by our measurements, thousands of ISPs made such a mistake at least once during this year alone. When we aggregate all of these leaks by time period and filter out the small leaks by number of prefixes, affected region, accepting region and so on, we can still name a dozen global leaks that happened during this year, and you may even have heard about them, because they sometimes hit the newspaper headlines.
But what about affected ISPs? This slide says that nearly every prefix is affected by a small leak. This is understandable, because if you leak a full table from one of your providers to another, that whole full table of prefixes can be counted as a route leak. So, it's not so interesting.
Okay, what if we filter out all the small leaks by accepting region, so we consider only the leaks that spread widely?
Even after such filtering, we still have a significant number of ISPs and prefixes that can be considered affected.
If you think that only small ISPs fail: no. Here is an example of a classical situation where traffic from one Tier 1 ISP was rerouted to another Tier 1 ISP, and this leak lasted for several hours. If the leaker had been a small network, the amount of traffic would have been so big that it could have caused a denial of service, and we have seen that type of situation in other years. Luckily, in the situation presented on the slide, the ISP was big enough to deal with all the rerouted traffic.
How can you measure the effect of route leaks? If you have different data tools, you can try to correlate the effect on the data plane with ongoing BGP incidents. On the left part of the slide you can see how a spike in traffic volume coincided with a route leak. On the right, you can see that if you compare traceroute paths during the incident and after the incident, the paths differ and can include different countries. Also, of course, there is the ping tool, and with its help you can see the incident and also monitor the number of dropped packets.
Of course, what do you do if you don't want to be affected by a route leak? First of all, try to understand how your data plane is affected by the BGP incident. Then try to find the guilty parties. Then, with the help of Whois services or any other services, try to obtain email contacts for these parties. Then write complaints to them, and then wait for an answer.
Of course, this method usually works, but you understand that it takes a lot of time before the problem is finally solved.
The first trick is to abuse the BGP loop prevention mechanism: re-announce your leaked prefix and include the leaker's ASN between your own ASNs. What does this do? First of all, you pass the neighbour check; then you pass the origin ASN check; and lastly, this route will not be accepted by the leaker because of the BGP route loop prevention mechanism, so your re-announced prefix will not be leaked any more.
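This loop-prevention trick can be illustrated with a small sketch (the AS numbers are invented; the only behaviour assumed is standard BGP loop prevention, where a speaker rejects any AS path already containing its own ASN):

```python
# A BGP speaker rejects any route whose AS_PATH contains its own ASN
# (standard loop prevention). By inserting the leaker's ASN between our
# own ASNs in the re-announcement, the leaker itself drops the route.
LEAKER_ASN = 64500   # hypothetical leaker
MY_ASN = 64501       # hypothetical victim origin

def accepts(receiver_asn, as_path):
    return receiver_asn not in as_path

poisoned_path = [MY_ASN, LEAKER_ASN, MY_ASN]
print(accepts(LEAKER_ASN, poisoned_path))  # the leaker rejects the route
print(accepts(64502, poisoned_path))       # everyone else still accepts it
```

The poisoned announcement therefore reaches the rest of the Internet normally, while the leaker stops propagating the prefix.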
There is also a practical solution if you are a big service. First, try to understand what your region of interest is. Then find the most significant ISPs of this region and establish connections with them.
What to do then? Announce your sub-prefixes directly to these ISPs. Of course, it doesn't solve the whole problem, but a big share of your traffic will be sent directly to you, so you will not suffer as much as you otherwise would.
So, why do we love route leaks? First of all, these incidents give us, and dozens of other companies, jobs and data to analyse. Then we have so much fun configuring all our routers, trying to understand what went wrong and how to fix it as soon as possible. Then of course we can write monitoring tools to watch for this type of incident. We can also try to create and apply different policies for what to do when a route leak happens. And lastly, of course, it provides us with data for slides at different network meetings.
Now I will give the word to Alexander.
ALEXANDER AZIMOV: Nevertheless, we decided that we can do better. The truth about route leaks is that 99% of them are the result of mistakes, human mistakes. And the best way to get rid of route leaks is to get rid of humans, or at least to make the configuration so simple that even humans will be capable of configuring it properly.
In the old world of route leaks, route leak detection relied on communities. They were set on ingress and checked on egress, so rather simple. But the problem was that this solution was always one mistake away from failure.
If your customer forgets to create an ingress filter, or forgets to create an egress filter, a route leak happens. They may even forget both, and the route leak still happens.
So, to fix this configuration problem, we decided to add a new configuration parameter, called the BGP role, and the goal of this parameter is to automate both leak prevention and leak detection, and also to give you a chance to control your neighbour's configuration.
So, BGP role, what is it? It is your peering relation with your neighbour. You don't have many kinds of peering relations: provider, customer, peer, route server, and route server client. That's mostly all. You can easily mark all your neighbours with these.
And in code, this configuration parameter is translated into a BGP capability, and it is negotiated during BGP session establishment. So in the OPEN messages there is a check: provider/customer, okay; provider/peer, someone misclicked, and the BGP session won't come up.
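The OPEN-message check amounts to a small table of valid role pairings; a sketch following the role names of RFC 9234 (the function itself is an illustration, not a real implementation):

```python
# Valid role pairings: each side's declared role must mirror the other's.
VALID_PAIRS = {
    ("provider", "customer"),
    ("customer", "provider"),
    ("peer", "peer"),
    ("rs", "rs-client"),
    ("rs-client", "rs"),
}

def session_comes_up(my_role, neighbor_role):
    return (my_role, neighbor_role) in VALID_PAIRS

print(session_comes_up("provider", "customer"))  # True: roles match
print(session_comes_up("provider", "peer"))      # False: misconfiguration
```

Any pairing outside this table means one side misconfigured its role, and the session is refused before any routes can leak.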
Now let's get back to the route leaks. As Eugene said, route leaks are very simple: they happen when a prefix received from one provider or peer is advertised to another provider or peer. In other words, we can transform this into the following rule: once a prefix is advertised to a customer, it should go only downstream, from customer to indirect customer and so on. And to guarantee that this rule is not violated, we add a new BGP attribute, which is called Only To Customer (OTC).
How it works:
When a provider sends a prefix to its customer, it sets the OTC attribute with the value of its own autonomous system number. The customer, if this attribute is not set, also adds the attribute, with the value of the neighbour's autonomous system number. Please note: it doesn't matter who sets the attribute, the value is the same. The OTC attribute is not changed during its lifetime after the first pair of provider and customer. And on the other side, when we are checking: the customer checks that if OTC is set, it must not send the prefix to other providers and peers, and so do the providers and peers on the other side. So it's double set, double checked.
And if this time the customer fails to configure one part of its filters, nothing happens, because the provider will be able to instantly detect a route leak.
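The "double set, double checked" rules can be sketched in a few lines (an illustration of the RFC 9234 logic, not an implementation; AS numbers are invented):

```python
# OTC rules, sketched. On egress, a provider (or peer / route server)
# sets OTC to its own ASN when sending towards a customer; on ingress,
# the customer sets OTC to the neighbour's ASN if the provider forgot.
# Either way, the attribute carries the same ASN and is never changed.
def set_otc_egress(otc, my_asn, my_role):
    if otc is None and my_role in ("provider", "peer", "rs"):
        return my_asn
    return otc

def set_otc_ingress(otc, neighbor_asn, my_role):
    if otc is None and my_role in ("customer", "peer", "rs-client"):
        return neighbor_asn
    return otc

def leak_on_egress(otc, my_role):
    # A route carrying OTC must never be sent towards a provider or peer.
    return otc is not None and my_role in ("customer", "peer", "rs-client")

otc = set_otc_egress(None, 64500, "provider")   # provider sets OTC=64500
otc = set_otc_ingress(otc, 64500, "customer")   # no-op: already set
print(leak_on_egress(otc, "customer"))          # sending upstream = leak
```

Because both sides set and both sides check, a single forgotten filter on one side no longer causes a leak.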
There are some formal slides about how OTC works. I will skip them because of the limited time, but you can check them afterwards in the RFC document; it's not hard. But there is one very important point: you don't mess with OTC. OTC is set automatically. You set the roles; OTC works in code. You may take a look at how it works, but you don't need to configure it.
So, this slide describes how OTC is set, and this slide describes how OTC is checked. And now we can talk about what we do with route leaks.
The document is quite precise about what to do if you detect a route leak: you just need to reject the route. All other techniques are wrong, so please don't try to just lower the local preference; you will still be abused this way.
And now on this slide you can see how hard it is to configure BGP roles on some open source software. The yellow part is what you need to configure BGP roles, and OTC will do all the work for you. I hope it's not that hard. And at the bottom you can see what happens if the roles are configured with mistakes: when the corresponding roles don't match, the BGP session won't come up.
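As a rough illustration (not taken from the slides), the per-neighbour configuration in FRR syntax might look like the following; the `local-role` option is my assumption of the relevant knob in FRR's RFC 9234 support, and the address and AS numbers are invented:

```
router bgp 64501
 ! Hypothetical neighbour: we declare our own role towards it.
 neighbor 192.0.2.1 remote-as 64500
 ! We are the customer of AS 64500; OTC is then handled in code.
 ! Adding "strict-mode" would refuse the session unless the
 ! neighbour announces a role capability too.
 neighbor 192.0.2.1 local-role customer
```

One line per neighbour is the whole configuration; everything else, including setting and checking OTC, happens automatically.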
And what is happening behind the scenes? The OTC attribute appears in the route, but you are not configuring it; it's done in the code, it's done for you. It's simple.
So, together, BGP roles and OTC give you a chance to control your neighbour's configuration. It's double set on ingress, double checked on egress. And being an attribute, compared to a community it is highly unlikely to be stripped. And one of the most important points: it gives a chance to detect route leaks even several hops away. It's a transitive attribute, a transit signal that goes downstream from the first pair of provider and customer, and even if it is an indirect customer, you will be able to detect the problem.
So, vendor support: at the moment, I am aware that patches have been applied to the three major open source implementations. What can I say? We are not at the end; maybe it's the end of the beginning. To get rid of route leaks, if we don't really love them, this community needs to show the same desire to get rid of this kind of routing incident that it showed to get rid of hijacks with ROAs.
So, if you are using open source tools, you can already try it: set up roles, there is nothing that prevents you from doing it. If you are using some vendor's software, great: send a request to your favourite vendor, tell them that you want this RFC in a future release.
And if you are a developer, even better. There is a great field of improvements: you can contribute to other BGP implementations, to BMP parsers, to the tcpdump implementation, to bgpdump, a lot of things. And I will be very happy if at the next RIPE meeting, not me but you will be standing here, sharing your experience of using BGP roles in production and how it saved your infrastructure from route leaks.
Thank you for listening.
(Applause)
JOB SNIJDERS: Thank you so much for this presentation and working on this RFC. Are there questions from the audience on this topic or suggestions?
AUDIENCE SPEAKER: Geoff Huston. One of the more annoying route leaks is when I get an aggregate from my upstream provider and deaggregate internally and the deaggregations leak. They are not marked with anything, are they? No.
ALEXANDER AZIMOV: So, the question is about the work of BGP optimisers, those wonderful tools that give you a way to deaggregate a prefix when you are receiving an aggregate.
I'm not sure how they work, whether they copy all the attributes. If they don't, unfortunately we need to wait for another document to arrive from the IETF; it's called ASPA, and it should fill this gap too. But still, from my experience, the majority of route leaks do not originate from this source.
JOB SNIJDERS: Yeah, I would agree with that. I think that what the BGP optimisers do is more a hijack than a leak, because they are fabricating paths. But routing security is a multi-year journey and we're not finished yet. Any other comments, questions?
AUDIENCE SPEAKER: Rudiger Volk. The most pressing question that comes to my mind on this is, why the hell did it take that many years from introducing the draft to actually getting it done? But, well, okay, I don't think it's your fault.
ALEXANDER AZIMOV: You know, I do agree it's our fault. Ours, as a community, because we cannot just rely on a few people to push new technology without a joint effort. As I was saying, routing security is a joint success or a joint failure, so if you are not happy that some technology is moving slower than you want, help with it.
JOB SNIJDERS: Thank you so much.
Next up is Max, who will share with us some considerations on the topic of the IRR, the Internet Routing Registry, a plain-text phenomenon from the early nineties.
MASSIMILLIANO STUCCHI: I'll start this train of three Italian speakers, and we'll see how Job butchers the next names as well.
Good morning everyone, I am Max, I work at the Internet Society, and one of the projects at the Internet Society is MANRS, we work with different participants in MANRS to improve routing security.
As part of that, we look at RPKI data, but we also look at IRR data, and together with some colleagues we started looking deeper into IRR data to ask the question: do we still need it? Can we still trust that data?
There are about 30 IRRs, about 30 databases with different levels of trust and different ways of feeding data into them. If we start looking at all of these, there are the better-known ones: there is RADB, there is NTTCOM, there are the regional registry databases. To build filters, we take data from pretty much all of them, and tools leverage this. Tools leverage RADB; bgpq4 by default queries the NTTCOM IRR, which, by the way Job, doesn't do IPv6, as we recently figured out.
But we use these tools, and we generate prefix lists with data that comes from sources about which, well, we should ask: can we trust that? Can we trust them? And I see Rudiger shaking his head, no; we'll get to that in a moment.
Sometimes the checks that are applied are very light, you know, and you get data in there that you would not really want to trust. So, here is a personal example. I have my own ASN, I have my own prefixes. I went to a very well known cloud provider and they created entries for me in RADB. I left them there; that's the one in the lower right. The real BGP announcement is the one on the top left, but I found that there is an entry for a /32 in RADB, and it comes up when you run bgpq4: if you build a prefix list for my AS-SET or my ASN, you also get that entry which crept in. Fine, it's okay. I am leaving it there as an example to show how somebody can create entries for you in some other database, and this is how data becomes untrustworthy over time.
So, together with two colleagues, we decided to compare what's in the RIRs with what's in RADB principally, then ALTDB and NTTCOM, to see how the data compares between the different databases. We checked whether the same objects exist, whether the origins match, and where they don't match. And we started looking into that.
So, we have preliminary data. There are three statuses: one where the same object appears in the RIR and in RADB and they carry the same origin ASN; one where the two origin ASNs differ; and another where the data is only in the RIR, which is what you would like to have as the final situation. If we look at the total aggregate data in the world, there is a high number of objects.
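The three-way classification can be sketched like this (the prefixes and origin ASNs are invented; a real comparison would parse full IRR dumps):

```python
# Classify each prefix by comparing origin ASNs in the RIR IRR vs. RADB.
rir = {
    "192.0.2.0/24": 64500,      # same object, same origin in both
    "198.51.100.0/24": 64501,   # same object, different origin in RADB
    "203.0.113.0/24": 64502,    # only registered in the RIR
}
radb = {
    "192.0.2.0/24": 64500,
    "198.51.100.0/24": 64999,   # stale or rogue origin
}

def classify(prefix):
    if prefix not in radb:
        return "rir-only"       # the desirable end state
    return "match" if radb[prefix] == rir[prefix] else "mismatch"

for p in rir:
    print(p, classify(p))       # match / mismatch / rir-only
```

Even the "match" bucket is a maintenance liability, since any change must then be made in both databases, which is exactly the failure mode described next.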
The green part of the chart is the objects that are only in the RIR.
The yellow part is the matching part, and you might say: okay, if I have an object in my RIR with an origin ASN, and I have the same object with the same origin in RADB or ALTDB or some other database, where is the problem? It's not a problem in the immediate term, but if you want to change something, you have to change it in multiple databases, and what if you forget to do that and the data differs between the databases?
Actually, it happened recently with an exchange we were helping as part of our normal work at ISOC. We were trying to debug why some routes were being filtered by the route server, and we figured out that there was an entry in RADB that no one had seen before for the same prefix, and the routes were being filtered out because that entry had a different origin ASN, because things had changed related to the prefix.
So, this is the aggregate and the data is bloated by some strange data we found in the ARIN database. So let's look at the situation in the RIPE database.
And it's better, but still, as you can see, more than a quarter of the prefixes do not match between the RIPE IRR and RADB, and there is a good percentage that are still matching and would probably need to be fixed. IPv6 is better because it doesn't have many of the historical issues that IPv4 has.
Then we started looking at the non-authoritative part of the RIPE Database, because we were investigating issues with some prefixes in Africa, and we discovered that in some countries pretty much 80 to 90% of the prefixes are actually registered in the RIPE non-auth database. That actually delayed our work by a couple of days, because we thought our tooling was broken, since we couldn't see anything right in the countries we were using as tests.
So, as you can see, we still have a good percentage of both matching and non-matching prefixes in the non-auth part of the RIPE Database. These should all be gone by now; we should be deleting them.
Here is the comparison with ALTDB. We decided not to look much into ALTDB because it doesn't have an interesting number of prefixes, at least for the RIPE region.
Then we went a little bit further; this is just preliminary data, because we still need more time to look into what we have. We took the non-matching part of the prefixes we found in the RIPE Database and checked it against what's actually in the routing table, and there is still quite a large number of prefixes that match what's in RADB rather than what's in the RIR. So the data still differs, and there are still places where you would have to trust RADB rather than the RIR for what's in the routing table. And the same is true for IPv6, in large numbers.
So, this is very preliminary data, but what are the next steps? We want to analyse it further. We actually have a tool to do this per ASN, so if you would like, come and see me after the talk or send us an email, and we can give you all the data related to your specific ASN. We can go per country. We will also do a comparison with RPKI, to put it together with what you actually find in the routing table. And maybe in the future analyse by AS-SETs, because then we can check the customer cone of a given ASN.
But then what are the recommendations that come from this?
The recommendation is that it's better to rely on RIR data, because that should be the source of all the data you want. Encourage the use of RPKI, because with RPKI we don't have external databases to look at.
And then there is a situation that's getting better: legacy space holders should be allowed to use RIR services and RPKI. This is changing in some regions, so if we look ahead, it's going to improve.
But the data is available to everyone. There is a link; we don't have a host name or a domain yet, but it will come soon. You can get data that's generated every 15 days, on the first and the 15th of the month. It's all in JSON; we have an explainer of the data, and there is a summary of all the numbers in the same directory. Out of that, we hope we can provide more data to you in the future, so you can understand how to fix your entries. Also, if you are willing to check some of this data, let us know and we can provide everything to you.
And this was really short, but are there any questions?
AUDIENCE SPEAKER: Rob Lister. AS-SETs are still important even now that we have RPKI, because we need a way to describe a bunch of ASes, so for route server filtering they are still quite important. The other comment I would make, which also applies to the previous presentation, is that adoption only comes about because we are forced into it. If somebody joins my IXP, I expect them to have these things, and I am looking for this stuff in RADB; otherwise they are going to say "this service doesn't work, why doesn't it work?" Because you haven't got the right stuff in RADB, and therefore we are filtering it out. So unless it has actual consequences, nothing happens, and, unfortunately or fortunately, whatever you think, this is often driven by the bigger players in the room saying "we are not going to put up with this any more, either you have this or we're not going to peer", and then people say "oh, right, we'd better actually do something." How many years did it take us to get working communities? That only came about because we were saying to vendors "this doesn't do what we want, we're not going to buy your kit unless it supports the RFCs", and they all went out and did it in 2017. So it's a hard job. Yes, the IRR is full of nonsense, but it's better than nothing, I suppose, and that's the answer I get: "I know it's nonsense, but do it, because it's better than nothing." But unless it has some actual consequences for people, it's really hard to change things for the better. A lot of people are saying "we're not going to peer with you unless you are doing RPKI", and I think that's the only way we're going to nudge people in the right direction.
MASSIMILLIANO STUCCHI: There was a similar discussion yesterday during the MANRS community meeting we had, and the idea was to set guidelines for five years out and start working on them now, so that in five years we get something meaningful. So I guess it will take time, but if we never start, we'll never get there.
AUDIENCE SPEAKER: Yeah, the other thing I dislike about RADB is that we have a handful of networks that just seem to put the entire world into it: they are actually only advertising 2,000 prefixes, but there are 250,000 prefixes listed in RADB, so my route server has to build a huge filter list for 250,000 prefixes while they only advertise this tiny amount. There is no granularity there to say which ones you are going to announce at this exchange, and some of them just don't care: they shove the whole lot in there, they might as well put the whole Internet in there. So it really falls down at that point, even if you do have RPKI; some networks just put the whole world and every customer in there and never actually announce it. So this comparison of what's actually being announced and not announced is something to look at. Do they want to fix it? No.
MASSIMILLIANO STUCCHI: But some people put data in there to be prepared for the deaggregation or...
JOB SNIJDERS: I am closing the microphone queue, because we might run out of time. Before we go to Stavros, I want to contribute one remark as an individual. Can you go back to slide 6?
So, this is a really cool example of where RPKI-based filtering applied to the IRR is helpful, because in the upper left, that's your original, you created this, but the lower-right object is not visible through rr.ntt.net, because the NTT IRR server that mirrors all databases does RPKI-based filtering, and the object in the lower right violates the maxLength attribute that Max set. So, it is possible to hamper the propagation of strange route objects in this ecosystem if everybody would upgrade to IRRd version 4. Thank you for making this example, it's a cool one. Stavros.
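The filtering described here boils down to an RFC 6811-style origin validation check against a ROA; a minimal sketch using Python's ipaddress module (the ROA and routes are invented examples, not Max's actual objects):

```python
import ipaddress

# A ROA authorises one origin ASN for a prefix, up to a maximum length.
ROA = {"prefix": ipaddress.ip_network("192.0.2.0/24"),
       "max_length": 24,
       "asn": 64500}

def roa_state(route_prefix, origin_asn):
    net = ipaddress.ip_network(route_prefix)
    if net.version != ROA["prefix"].version or not net.subnet_of(ROA["prefix"]):
        return "not-found"      # no covering ROA: can't judge
    if origin_asn == ROA["asn"] and net.prefixlen <= ROA["max_length"]:
        return "valid"
    return "invalid"            # wrong origin, or more specific than maxLength

print(roa_state("192.0.2.0/24", 64500))  # valid
print(roa_state("192.0.2.0/25", 64500))  # invalid: exceeds maxLength
```

An IRR mirror doing RPKI-based filtering simply drops route objects whose prefix/origin pair evaluates to "invalid", which is why the stray object never propagates.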
AUDIENCE SPEAKER: Stavros from AMS-IX. Max, very good job and results, thank you very much. Two comments, or rather requests, for you. The first: you said you are going to extend your research a little, you have some preliminary data and you are going to go further. Can you please also check how much split data we have? For example, we have seen cases where the org ID is registered in ARIN but the policy is hidden somewhere in RADB or ALTDB, and then... I mean, God knows how and why and what I should go and trust.
And then a second question or comment: do you see a future where we don't have those secondary databases like ALTDB and such, and we only have a handful of trustworthy databases where we can go and fetch data? Because for me, as a big exchange point, I would like to know these are reliable sources, so I can go and fetch the data of my customers and build trustworthy filters, and not just, you know, dig around and ‑‑
MASSIMILLIANO STUCCHI: Basically you just described RPKI.
AUDIENCE SPEAKER: Yes, but it's not ready yet.
MASSIMILLIANO STUCCHI: But the person behind you was shaking his head while you were saying that ‑‑ I don't know. As I said, what you described is RPKI, and we already have the tools; we could move to it to make it more reliable and trustworthy, I think. So I think we should put more emphasis there, rather than trying to fix what has been legacy for 20 or 30 years now in the IRR.
AUDIENCE SPEAKER: Geoff Huston. I am glad you are on slide 6, because that illustrates precisely my point. That first object is on your authority. You're not speaking for AS 58280; you could put any number you want there and it's still valid. But you haven't got the agreement of AS 58280 to actually do it. So, I can give permissions to any AS for my prefixes, and it means nothing until the originating AS says "this is the span of things I am prepared to announce", which is the second object. Because at the moment, in this partial-deployment world, if I see a bunch of prefixes originating from an AS, the ones that aren't covered by ROAs, what am I meant to say? Is that real, or is someone faking it? What Vultr is actually doing is saying what they announce: the other part of the offer. You have offered, they have accepted. There is no RPKI object to describe that acceptance. And until you get that, write the draft, write the standard, do whatever you want, until you get that, if you want to do that external auditing of the complete set of announcements from an AS, you have to rely on the existing routing databases. Because without it, it's just an offer without any visible acceptance.
So, I would say that it's totally premature to think "we don't need any of this any more" until you complete the underlying offer of what the crypto is meant to say. And at the moment, RPKI is incomplete in that space. Thanks.
JOB SNIJDERS: Right. Max, thank you so much.
(Applause)
PAUL HOOGSTEDER: Next up is Marco with a presentation about the current state of RPKI validators. Thanks.
MARCO D'ITRI: Hi. I am a network operator, but my roots are in software development and infrastructure software. I have been a Debian developer for 25 years at this point.
Today, I will talk about RPKI validator software and how it is packaged in distributions.
There is quite a selection of available software, but not all of these packages are actually maintained. Let's have a look.
There are also companion packages. Some of these validators do not implement the RPKI-to-router protocol, because it was not a fit for how they were designed, or, in the case of OpenBGPD, because they use a different interface between the routing daemon and the validator. So in this case, if you want to use them with routers that speak RTR, you need an additional daemon which will speak this protocol to your routers. Of these, GoRTR was part of the Octo RPKI suite by Cloudflare; at this point it looks like it has been abandoned, and there is a new one called StayRTR which is actively maintained.
Job kindly provided some data about which validators are actually used by networks, by looking at the logs of a web server serving ROAs, and here we see that 80% of the Internet uses Routinator. Routinator is great software, but it's not good when so many networks depend on a single implementation.
Next to that is RPKI-client, and the rest are unsupported, like the validator from RIPE NCC, which was officially discontinued last year.
And for Octo RPKI, the future of the validator is not clear; it is not actively supported at this time, but I can announce that the developers wrote to me yesterday saying that funding and active development for the validator will resume next year.
And then there is RPKI-prover, which somebody uses, but I have never been able to find whom; it's a very unusual piece of software. Somebody raised their hand. Good.
Let's have a look at these packages.
Routinator is very widely used. It's great software, well documented. NLnet Labs is behind it, and they are actively working on it a lot. There are frequent releases, and they offer software support contracts.
On the other hand, almost everybody is using it, and this in itself is a bad thing. And since it's written in Rust, it is hard for distributions to package; we will get to this later.
RPKI-client is part of the OpenBSD project. It's written in C. It's quite simple and essential, but it implements a lot of the more recent features of RPKI. It's developed by network operators. It uses the kind of security features that we are used to from OpenBSD software, like privilege separation. On the other hand, it needs a standalone RTR daemon, but that's fine, we have StayRTR.
Not much to say about the validator from RIPE NCC, except that everybody should stop using it right now, because it has not been maintained since last year, and even before that it had been officially discontinued for some time. So please stop immediately.
Octo RPKI was written by Cloudflare. It's not actively developed at this point, even if they released a new version last week; in the last one or two years, they have only fixed security issues. It's very simple, and apparently it works fine. I'm not sure if I can recommend it because of its current status. It needs an RTR daemon, and the same developers wrote one, which was GoRTR and is now StayRTR. It's written in Go.
Then there is the FORT validator. I think it's a good middle ground between complexity and the features available. It's well documented. It has not been developed actively for some time, but the developers were always ready to fix security issues. Hopefully it will really be developed again starting next year.
RPKI Prover is very unusual software, but I like having more options, this is good. It's written in Haskell, which is a very unusual language ‑‑ sorry to the fans of fancy functional languages here ‑‑ and it has very, very low adoption. I am not sure if I should package it for Debian or not, since nobody appears to be using it.
My suggestion is: please use two different implementations of validators. It's not hard. You have some choices ‑‑ not so many at this point, but you have options. So please pick two of these. If a significant number of networks start using two different implementations, we will get out of this monoculture situation that we are getting into very, very fast.
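One thing running two validators makes possible is comparing their outputs. As a minimal sketch ‑‑ the function names are mine, and it assumes both validators export JSON in the common {"roas": [{"asn", "prefix", "maxLength"}]} layout, which varies slightly between implementations ‑‑ you could diff the two VRP sets like this:

```python
import json

def load_vrps(path):
    """Load a validator's JSON export into a set of comparable VRP tuples.

    Assumes the common {"roas": [{"asn": ..., "prefix": ..., "maxLength": ...}]}
    layout; adjust the key names for your validator's actual output format.
    """
    with open(path) as f:
        data = json.load(f)
    return {(str(r["asn"]), r["prefix"], r["maxLength"]) for r in data["roas"]}

def diff_vrps(a, b):
    """Return the VRPs present in only one of the two sets."""
    return a - b, b - a
```

With the two validators writing their exports to known paths (paths differ per package), a periodic job could alert when the sets diverge beyond some threshold, which would have caught some of the anomalies discussed later in the Q&A.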
My suggestion is that if you use packaged software from a Linux distribution, then it's not an operational burden to run multiple daemons, because they come packaged and working out of the box.
A quick look at the features. You can see that RPKI‑client implemented all the new things that are being developed right now; most of them are still drafts. BGPsec is also implemented by Routinator. The others are currently in maintenance‑only mode; we will see next year what will happen.
Let's have a look at the status of this software in Debian.
The great debate, as usual: should we use software packaged by the distribution, compile it by hand, or use packages provided by the developers?
My position ‑‑ and I am working on Debian, so my opinion is not totally unbiased ‑‑ is that everybody should use a distribution in production, because you get packages which are well integrated with the rest of the system; every file is in the right place and not wherever the developer decided. It is my long‑held belief that developers should write software, but packages are made by system administrators. These are two different jobs, each playing to its own strengths, and software developers are rarely good system administrators.
When you use packaged software from distributions, you get automatic security updates, because distributions at least fix the security issues in their stable releases.
On the other hand, if you get the packages straight from the developers or build your own, you are going to get much fresher software. But I am working on that in Debian.
Clearly, there is some issue with how people are installing software, because we know, from the data provided by Job, that over 70% of the networks are using software which is out of date and insecure. Have a look at your deployment practices, your patching practices, because these may suddenly become a problem.
In my work for Debian, I have this goal of making it the perfect solution for network operators, and to provide everything RPKI related. I worked a bit to create a good integration between these packages and the rest of the system. I have moved the TALs to a stand‑alone package which can be updated easily without the need to upgrade the validator packages, and I also did some work to apply all the modern sandboxing techniques which are provided by systemd.
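The systemd sandboxing mentioned here typically looks something like the following drop‑in (a hedged sketch: the unit name and file path are illustrative, and the exact set of hardening directives varies per package):

```ini
# /etc/systemd/system/routinator.service.d/hardening.conf (illustrative path)
[Service]
# Run as an ephemeral, unprivileged user allocated by systemd.
DynamicUser=yes
# Mount the filesystem read-only except for explicitly whitelisted paths.
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
NoNewPrivileges=yes
# Only allow the socket families a validator actually needs.
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
# Restrict the process to the typical system-service syscall set.
SystemCallFilter=@system-service
```

These are standard `systemd.exec` directives; the point of shipping them in the distribution package is that every user gets the confinement without configuring it themselves.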
On the other hand, the RPKI software is still very fast moving, so there are frequent releases ‑‑ or at least there used to be last year ‑‑ but I have also been working on providing backports from the unstable and testing Debian distributions to the stable releases. So right now you can get the fresh software in Debian stable from backports.
The other issue is that Routinator really cannot be packaged. This is not the fault of Routinator, but a problem common to larger software packages written in Rust. Rust is a totally fine language, and it has seen great adoption recently. The problem is the ecosystem of libraries: it's broken and toxic. There is this habit among Rust developers of depending on specific versions of libraries, something which is common also in the Node.js ecosystem, and this is very bad for our distribution, because we, as Debian developers, cannot ship the vendored libraries ‑‑ all the dependencies ‑‑ inside the package of some software, because our security team will not allow that. It is a maintenance nightmare: when there are security issues, they cannot go hunting for all the copies of a library all over the distribution. So this is strongly discouraged and cannot really be done for all the dependencies of some package.
On the other hand, it's not really practical ‑‑ and the security team does not like this either ‑‑ to have multiple releases of some library in the distribution. The problem is that different Rust packages depend on different versions of some library, because the APIs are not stable and change all the time, often for some very core packages of Rust.
And so, at this point, it's practically impossible to package complex Rust software.
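The version‑pinning habit being described looks like this in a Cargo.toml (the crate names are made up for illustration):

```toml
[dependencies]
# An exact-version requirement: the build refuses anything but 1.2.3, so a
# distribution cannot substitute one shared, security-patched copy of the
# library for every package that uses it.
some-library = "=1.2.3"

# A caret (default) requirement at least allows semver-compatible upgrades
# within the 1.x series, which is easier for distributions to live with.
other-library = "1.4"
```

When many crates each pin different exact versions of the same library, a distribution either ships multiple copies (which its security team rejects) or cannot package the software at all, which is the situation described above.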
I am talking about Debian, but Ubuntu has the same problem, and my understanding is that Fedora and Red Hat also have this problem.
But you can get an up‑to‑date and good package of Routinator straight from the developers. So you should use that instead of building your own package.
This is the current state of the RPKI‑related packages in Debian. As you can see, all of them are packaged for Debian stable. There are backports; there will soon be a backport of StayRTR, as soon as they do a new release; and I have also taken over maintenance of OpenBGPD from Job. I will make a backport of that to stable as well.
Ubuntu's last long‑term support release, earlier this year, had the software up to date at the time, but this is not core software for Ubuntu, so, the way Ubuntu works, it will not be upgraded over the life of the distribution. I cannot really recommend using Ubuntu for RPKI work.
And as I said, there are backports available in Debian. This is the usual incantation to use backports, so you can use all the freshest RPKI‑related packages in Debian. And I am committed to keeping up the backports, whenever needed, for the life of Debian stable. Thank you.
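For readers of the transcript, the "usual incantation" for backports is roughly the following (a sketch assuming Debian stable at the time, bullseye; the package names match the Debian archive, but check your release's codename):

```shell
# Enable the backports repository for the current stable release.
echo 'deb http://deb.debian.org/debian bullseye-backports main' \
    | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update

# Backports are never installed automatically; you must request the
# target release explicitly with -t.
sudo apt install -t bullseye-backports routinator rpki-client stayrtr
```

Packages installed this way keep tracking the backports suite, so later backported releases arrive through normal upgrades.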
(Applause)
CHAIR: Thank you Marco. Are there any questions?
AUDIENCE SPEAKER: Rudiger Volk. Thanks for the review. Okay, seeing that more than two implementations are actually available and useful is obviously good news, and having someone provide reports on how to classify the available solutions is obviously helpful. From the days when I was operating stuff, I remember that the question of whether the current state of my RPKI validator actually had some anomaly in the data or in the operations was something that I cared about quite a lot, and back in those days, anomalies did occur quite frequently. I guess the frequency is lower today, but the RPKI system, with its complexities, both in structure and in operations, in my opinion will always run the danger that anomalies of various types occur, and the question of how the implementations report and alarm on detected anomalies, and potentially the heuristics used to deal with them, I have always found interesting. Yes, I think talking about that is a little bit difficult, but I think it would be very helpful to include that column in the reviews of available and operated packages as well.
CHAIR: Sorry, we have to wrap. Marco, can you give your reaction?
MARCO D'ITRI: That's a good idea.
CHAIR: Last question, please, we have to skip the last item in the agenda because we are already over time.
AUDIENCE SPEAKER: Thank you Marco for your presentation. Benno Overeinder, developer of Routinator. First, I would like to give some credit to the RIPE NCC RPKI Validator: they were the first, and they also paved the way for the current RPKI software. So thanks for that.
Other comments: From what I understand now, Octo RPKI does now have a dedicated software engineer, so they are ramping up again, so maybe in the future there will be updates, and more frequent updates. And as for your suggestion on the packaging of Routinator, using the system trust anchors ‑‑ that can be resolved. And finally ‑‑ but that's more a question ‑‑ I agree with you, I think software development and packaging are two different activities, and we traditionally rely on the Debian or the OS distributors, the packagers, to package the software. But what do you think is the way forward with the Rust ecosystem? Because it will also appear in kernel modules and, for example, in Mozilla Firefox, so how do ‑‑
MARCO D'ITRI: The problem is not Rust itself but the quantity of libraries, and obviously using libraries and not reinventing the wheel every time is good, but at this point, as the Debian project, we do not have a solution for this situation. Maybe we will find an acceptable way to do vendoring; we still don't know. We talked about this a lot earlier this year, I think, but we have no solution yet. At some point the problem will become so big that we will have to find some solution, I think.
CHAIR: Thank you Marco.
(Applause)
JOB SNIJDERS: We had one backup presentation in case there was spare time at the end of this slot. Oh, how optimistic we were. If you go to the RIPE 85 website and look for the Routing Working Group agenda, you can see the slides from Massimo Candela on the BGPalerter package, so it might be worth your while to check them out. Maybe next time we will have time for the lightning talk.
CHAIR: Talking about next time, I hope to see you all in sunny Rotterdam in the spring. Thanks.
Coffee break.