RIPE 85


Connect Working Group.

26 October 2022

At 12 p.m.:

REMCO VAN MOOK: Good morning, everyone. Afternoon, I think, yes, afternoon. Afternoon. I am going to give the stragglers, and other people deciding whether they want to join the IPv6 session or this one, another minute to come in. So take your seats, get comfortable and put any small items underneath the seat in front of you. This is a non‑smoking session! This is Serbia, I have to point this out.

Will, what have you done?

WILL VAN GULIK: This was validated by Florence, you know.

REMCO VAN MOOK: I guess I have both of you to thank for this, I guess. All right. Let's get the show going. Welcome to the Connect Working Group, your second most favourite Working Group of the RIPE meeting. My name is Remco, I am co‑chair of this merry gathering, together with Will van Gulik and Florence who just briefly showed her face over to the side. There she is. Good to see you.

FLORENCE LAVROFF: Hi, everyone.

REMCO VAN MOOK: Florence is obviously unable to join us physically here, but she will be following the proceedings online. Let's see. What do we have? I have a slide deck, I think. Let's see what else we have. Welcome ‑‑ welcome. Scribe appointment: we have someone from the RIPE NCC who has agreed to be our scribe. The agenda: you have all seen the agenda; there is one additional small item that we are going to put in at the end, something that came in through the Address Policy Working Group this morning, so we will briefly touch on that. Proposed session format: as you may know, we agreed a while ago that we would have some of these Connect sessions fully public and some of them a little bit less so, in order to be able to open up more business-orientated discussion. There's no such topic on the agenda this time, so we are going to be acting like a regular Working Group in this instance.

Then on to ‑‑ housekeeping. So, I trust you've all read the minutes, can I see a show of hands who also noticed the embarrassing typo in the first sentence? There was no typo, congratulations. Nice try, though.

So, I take it no one else has any comments; in that case, they are all approved. Thank you to the RIPE NCC for compiling them.

We have three presentations today, one by Thomas from France‑IX about 400 gig Ethernet in the field, we have Euro‑IX update done by Maria this time, because she graciously offered to stand in and we have an update on PeeringDB by Arnold. And I still don't know ‑‑ I don't know what this is.

So, I would propose that we just get started with Thomas.

WILL VAN GULIK: Thomas, the floor is yours. We are upgrading, now it's 400 gig.

THOMAS DELABY: Hello, everyone. Thank you for being here. I am Thomas Delaby, I am a network engineer at France‑IX, and today I will talk about 400 gig Ethernet as deployed in the field, and I will explain why we think it's going to be a set of game-changing technologies in the near future.

A quick agenda: I am going to introduce who we are and who France‑IX is in general, what technologies we are using for metro-distance transmission right now, why we are changing them for 400 gig Ethernet, and how to use one specific technology, 400 gig ZR, which is the most interesting of the set. I will briefly introduce you to more 400 gig technologies and conclude.

So, this is a map of the services offered by France‑IX. France‑IX is the largest IXP in France; it was founded in 2010, and two years ago we merged with Rezopole in Lyon and made a few regional deployments, so Marseille came well before Rezopole, and we launched Toulouse and Lille recently. We are offering various services: peering, of course, hosting, VLANs. This is a map that shows the infrastructure; as you could guess, the green links are redundant and built over diverse paths.

A bit about traffic. We are hitting the two-terabit bar, so we are quite happy with that, and the growth seems steady enough, so that's...

Jumping in, this is a view of our global MPLS domain. It's quite complex; that's the internal map, you won't see this on the website so it doesn't scare anyone. On the left‑hand side is the Paris platform, which is built upon a set of two core colocations and a few satellite locations that are connected to the cores via redundant and diverse dark fibres, with dark fibre runs of 2 to 75 kilometres or so. We have always been doing that: historically it was n times 10 gig, and we moved progressively through newer technologies to get 100 gig waves working on this backbone. The highest-capacity links over long distances can go up to 600 gig. In order to achieve all this, we needed performant and reliable optical solutions for long distances. That's why I'm emphasising everything here.

Up to now, we were using muxponders. I guess some of you already know how these work and, if not, here is a quick reminder. The muxponder, the white box here, takes the slow, grey optical signals from your IP equipment and converts them into high-power, high-speed optical signals on the line side; they should be high power so they can reach long distances without amplification. It is basically a time-division multiplexing device, and it uses two main components for doing that: SFP client optics and CFP2 line modules with custom chips inside to make the magic on the line side work. It's all based on OTN, the optical transport network, encapsulating all your client signals into a format suitable for exchange between muxponders.

This is a view of muxponders in their natural environment at France‑IX. We are using two types of muxponders, the Nokia 1830 and the Coriant Groove G30. We have different layouts of client ports for different needs. On the line side, different modulations are possible, so we can reach further distances or pass through specific filters, maybe old filters that are too narrow to accept today's wide signals.

As you can see here, cabling starts to be quite a challenge because there's MPO everywhere, and keeping track of all the connections is quite a challenge as well in the information system; we have NetBox running, and creating all those links is very, very time-consuming.

So, ultimately, will we stop using muxponders? We are not ditching them, of course; they have worked very well and been great technology for years, but we won't deploy more. The first reason for this is cost; this costs a lot, and I will give more details about that later. They consume a lot of power. They are of great complexity: there are a lot of active components, I showed you a few of them in the last slide, and each of them can fail at some point, so you have the cards, the optics, the line, and troubleshooting is very hard, even harder when you have multiple components and you don't know which one is faulty. Cabling, as I said, is a nightmare, because MPO is expensive and really not adapted to high-density set‑ups.

IP hardware is already 400 gig ready and may be under‑utilised because of those muxponders. And the footprint is not optimised either, because when you want to scale capacity, you have to scale rack units as well, which is not really optimal for us.

So, we are going to do 400 gig in the backbone and, more importantly, 400G ZR, where the model is to plug the optics straight into your routers as you did with DWDM 10G optics; you have to add an amplifier to boost the signal on the line, but I will talk about that later. Cost: we observe that cost can be down 43% for 400 gig and up to 53% when you reach high capacities, and even more than that beyond, because it's linear and the cost is diluted into the optical cost. Power consumption is decreased by 90%, so that's a huge leap forward; we could say it's a green technology for this. There are fewer active components, so we are taking away a lot of complexity; there are fewer points of failure. Cabling is standard, and the IP hardware is happy to operate at 400 gig; you don't waste your 400 gig ports on 100 gig optics, so it's really good for this.

In order to build a 400G ZR link you need optics, and there's quite a wide range available today. Those optics are quite special because they transmit at around minus 8 dBm and receive at around minus 21 dBm maximum, so that's a short optical budget; you need amplification, and OSNR is also important here.
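As a back-of-the-envelope illustration of why that budget is short, here is a sketch in Python. The fibre attenuation and connector losses are assumed typical values, not figures from the talk, and this simple model ignores OSNR and engineering margin, which is part of why real unamplified reach is much shorter than it suggests:

```python
# Rough 400G ZR link-budget sketch. The attenuation and connector-loss
# figures are assumed typical values, not measurements from the talk.
TX_POWER_DBM = -8.0           # approximate 400G ZR transmit power
RX_FLOOR_DBM = -21.0          # approximate receiver sensitivity floor
FIBRE_LOSS_DB_PER_KM = 0.25   # assumed C-band attenuation
CONNECTOR_LOSS_DB = 0.5       # assumed loss per connector pair
N_CONNECTORS = 2

def rx_power_dbm(distance_km: float) -> float:
    """Received power after fibre and connector losses (no amplifier)."""
    loss = distance_km * FIBRE_LOSS_DB_PER_KM + N_CONNECTORS * CONNECTOR_LOSS_DB
    return TX_POWER_DBM - loss

def max_unamplified_km() -> float:
    """Distance at which received power hits the receiver floor."""
    budget = TX_POWER_DBM - RX_FLOOR_DBM - N_CONNECTORS * CONNECTOR_LOSS_DB
    return budget / FIBRE_LOSS_DB_PER_KM

print(round(rx_power_dbm(10), 2))    # -11.5 dBm after 10 km
print(round(max_unamplified_km(), 1))  # 48.0 km on raw attenuation alone
```

The 48 km the model gives is attenuation-only; with OSNR requirements and real-world margins, the practical unamplified distance is far lower, which is consistent with the roughly 10 km mentioned later in the session.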

For the EDFA amplifier, you need to do your homework to understand how they work, I will explain this later, and you need a regular DWDM mux, but you have to check your channel plan; I can answer questions about this later.

EDFAs, learnt by experience: we are not optical engineers, we learned everything by experience, so I will be more than happy to take your remarks, suggestions, corrections, etc. We have seen that most EDFAs have a settable gain; you can set the gain between 0 and 30 dB usually, depending on application and model, and one key property is that the gain is flat across the whole spectrum. There is a mode called automatic gain control that keeps the gain constant whatever power you put into the EDFA, but an EDFA cannot amplify to the sky; there is a limit, the total combined output power is limited, and that's what the graph here shows: the output will steadily increase with the input power but will eventually stop, and the gain then cannot be automatically controlled any more because of the saturation, so it will decrease with input power. This means if you have one channel that is way higher than the others, it will eat up a bit of your budget in the non‑saturated area, and you don't want that. And you really don't want that because a saturated EDFA is dangerous when channels drop. Imagine you are in the saturated area here and a few channels decide to go away for whatever reason, your hardware goes crazy, I don't know; then, as the lower part of the slide shows, you shift the operating point of the system to the left, and the gain will increase. The gain will increase in an uncontrolled way, because you don't control how many channels are dropping. Gain increasing means power will increase on the remaining channels, and then you can burn your optics. So you really don't want a saturated EDFA; you need to avoid this.
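The channel-drop danger described above can be sketched numerically. This toy model, where the gain set point and saturation ceiling are assumed illustrative numbers rather than values from the talk, shows the per-channel output power jumping when most channels drop out of a saturated amplifier:

```python
# Toy EDFA model: automatic gain control with a hard total-output limit.
# All numbers here are illustrative assumptions, not data from the talk.
from math import log10

def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

def mw_to_dbm(mw: float) -> float:
    return 10 * log10(mw)

TARGET_GAIN_DB = 20.0      # assumed AGC gain set point
MAX_TOTAL_OUT_DBM = 17.0   # assumed saturation ceiling on total output

def per_channel_out_dbm(n_channels: int, per_channel_in_dbm: float) -> float:
    """Per-channel output power: AGC gain, clamped by total-power saturation."""
    total_in_mw = n_channels * dbm_to_mw(per_channel_in_dbm)
    wanted_total_mw = total_in_mw * dbm_to_mw(TARGET_GAIN_DB)  # linear gain
    actual_total_mw = min(wanted_total_mw, dbm_to_mw(MAX_TOTAL_OUT_DBM))
    return mw_to_dbm(actual_total_mw / n_channels)

# Fully loaded and saturated: each channel gets far less than the 20 dB gain.
print(round(per_channel_out_dbm(40, -10.0), 2))   # ~0.98 dBm
# Most channels drop: the survivors suddenly see the full gain, ~9 dB more.
print(round(per_channel_out_dbm(4, -10.0), 2))    # ~10.0 dBm
```

The roughly 9 dB jump on the surviving channels is exactly the uncontrolled power rise the speaker warns can burn receiver optics.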

Real-world deployment of 400 gig ZR: this is ‑‑ well, you can see it better if you zoom in on the PDF. Those are real values from a real deployment between two Paris sites, about 60 kilometres of dark fibre, which is quite good, compensated by the single-stage amplifier we put in to boost the signal. You can see that TX power and channel numbers are tuneable from the CLI; that's an important feature you have to check with your vendor in order to make it work. The RX power is fairly irrelevant here because you can change it as you tune the amplifier; what is important is the OSNR, because the limit for 400 gig is 26 dB. That's the only hard floor you get with this technology.

A bit more about 400 gig Ethernet, since I promised I wouldn't talk only about 400 gig ZR. ER8 goes up to 40 kilometres in the O‑band with only one fibre pair, but the price tag is down 75% compared to the regular muxponder you would otherwise put in place, so that's a great deal. And you have LR4, which goes up to 10 kilometres in the same wavelength band, with a price tag down 90%; so for your backbone interconnections, when you want a cheap start, you just buy them, put them into your routers and off you go, and there's no need for an EDFA here; it just works.

Conclusions:

We observe that 400 gig Ethernet is more than ready for production; it really works, it's not a lab thing any more. Your backbone will love it because it's cheap, it consumes a lot less power, and it reduces the complexity of the system, so maintenance is easier; the NOC is going to love it, etc. For us it significantly lowers the path to high capacities in your backbone, because where you would have used another technology, now you can put 400 gig ZR, and usually the speed is higher than before. It also means people will be able to roll out high capacities at a lower price tag, so backbones will grow more easily. The only thing we don't have experience with is reliability; we have been running this for six months without any problem, but we will see how it goes in the future, and we will come back with a new presentation about the reliability of 400 gig technologies.

And with this, I'm finished. Feel free to follow our Twitter accounts where we post a lot of news about our installations, our regions, etc., and yeah, and I am always available for questions and remarks and so on. Thank you.

(Applause)


WILL VAN GULIK: That was really insightful, I had to look up some acronyms.

AUDIENCE SPEAKER: Will Hargrave. Thank you for your presentation. I wondered whether, rather than using a separate combination of components, you had considered using one of these boxes where all that hard work is done for you, i.e. one new box that includes the amplifier and all this other stuff?

THOMAS DELABY: We have been studying that, and we found that generally those boxes are far more expensive than just building the solution yourself, and they have a lot of features you don't necessarily need. For instance, a lot of them have tuneable dispersion compensation modules in order to make 100 gig optics work, but we don't need that and we do not want that; it's too complex, it's too costly, we already have IP hardware with 400 gig ‑‑

WILL HARGRAVE: I am in that situation because I am using the 100 gig PAM4, and you are correct, this is not required.

THOMAS DELABY: We just made the jump.

AUDIENCE SPEAKER: From AMS‑IX. Great presentation, thanks a lot for giving us those insights. I have a quick question for you. Have you considered just not using an EDFA at all if you have less distance between the data centres? I think the reality for the ZR is 80 kilometres, or of course less, around 20 or 25 kilometres, so if you have two data centres close by, maybe over the dark fibre you can connect the boxes directly. Have you considered that?

THOMAS DELABY: That's something I didn't mention, but the EDFA is actually not required; you have enough budget to do something like 10 kilometres with a standard set‑up. And actually we will make a deployment between Telehouse 2 and Interxion Paris 5, and one of the dark fibres is so good that we don't even need an EDFA, so we will just plug the optics in and that's all. And we will start with a first link with ER8 because, since it's so cheap, it lowers the bar for entry.

AUDIENCE SPEAKER: I am looking forward to the next RIPE meeting, where you are going to share some more details and experiences.

AUDIENCE SPEAKER: Rick Nelson with VIE. This is a really good talk. Do you have any experience with the 4 by 100 type break-out optics, where you can get much higher density 100 gig but you need tiny fingers or something to get to the connectors?

THOMAS DELABY: Yes, actually we have some; I can show you, maybe at the beginning, yes, there is one node here, the TH2 node, connected with four times 100 gig break-out optics, and on the other side it's FR4 optics. So this is useful, but you need to replace all the client optics on your muxponder to make this work, so that's a huge cost, and if you can switch directly to 400 gig ZR, well, that's better, because we would still have to qualify those optics in the muxponder and we don't know if it's going to work. And usually optical vendors are a bit slow adopting new optics, so not many of them do it; I think one does, but Nokia doesn't do it yet. So ‑‑

WILL VAN GULIK: I am closing the queue after this one.

HOWARD: I do have a question regarding your operational experience with ZR optics. To my knowledge, they need a higher power budget to run on your network equipment, so did you have any problems, and when they run, do you see them operating at a higher temperature than the other form factors, or do you not see any difference here?

THOMAS DELABY: There is definitely a higher temperature with 400 gig optics in general; I think SR8 and DR4 get very, very hot as well, like 75 degrees or something, so that's not specific to ZR optics. As for power, switches are designed for it; on ours we have a rule that we have to use the first row, the upper row, for ZR and ZR+ optics, and they are not supported on the lower row because the power budget is not symmetrical between the two rows. So that's something to check on your equipment: can it take ZR optics? Yes, most probably. Can it take ZR optics in any cage? Probably not.

WILL VAN GULIK: Thank you. So that was really insightful, thank you again, Thomas.

(Applause)


And next up we have the Euro‑IX update, with Maria Isabel coming to tell us what's going on. Thank you.

MARIA ISABEL GANDIA: Thank you. Well, thank you, and as I said, I am not Bijal; she cannot be here this time, so I am just doing this presentation on behalf of Bijal, but all the credit goes to her. If you have questions, of course I can take them; if I don't know, I will ask Bijal.

For the newcomers, what is Euro‑IX? Euro‑IX is an association of internet exchanges, not only in Europe, although it's called Euro‑IX, but also in the rest of the world; we have around 70 members right now. You can see all the members on the slide. Most of them are in Europe; depending on where you live, you may be familiar with some of them, and they are all very nice people anyway. And the Euro‑IX association is not just the members, it's also the patrons; of course, we couldn't do anything without our patrons, so thank you very much to our patrons, you can see them on this slide too. Thank you for supporting us.

So what does the Euro‑IX association do for the internet exchanges? It's classified in three areas: we have services, we have events, of course, to share information, and we have tools and community projects. For the events, we usually meet twice a year, except if there is a pandemic, in which case we go online, of course; we have gone back to face‑to‑face meetings in the last year, so we have met twice already.

But it's not only the forums; we also have workshops, which can go back to back with the forums or be organised separately by one of the members, and hackathons. And we also organise virtual workshops around topics that are interesting for the internet exchanges and their members.

Regarding the services, the Euro‑IX association publishes a couple of reports. One of the reports, which is public, is the European IXP report; it is available on the website and is useful for tracking the evolution of IXPs in Europe. Another report, internal to the IXP members, is the benchmarking report: the IXPs answer questions about financial, commercial and technical issues, and then, with all the aggregated and anonymised information, we can see where we stand as internet exchanges, how others are doing things and what the average looks like, which is also useful for internet exchange managers. We have newsletters sent to the members and mailing lists to share information. Regarding the tools and community projects, the community project you probably know is the IXP database: the IXPDB is the point where all the data from internet exchanges is aggregated; you can just visit it here, and I will go into detail on it later. We have the Peering Toolbox, a place to put all the information about peering together: why peering is cool, why you should peer. This is mostly for newcomers, for people who want to start with peering in this world and don't know many things; it's a place where you can find all the information together, coming from different sources. We also have the fellowship and mentor‑IX programmes, which help internet exchanges that may have difficulties or may be just starting out; there is a mentoring IX that can mentor them and help them succeed.
We have the route server large BGP communities: a list of large BGP communities that is not mandatory but recommended for use in all the internet exchanges, to provide a common ground for understanding; so if you are a member of an internet exchange and you are using large BGP communities there, it's good that when you go to another internet exchange you find the same ones. This is what we do with this list, and it's also published on the website. We are also working on new IXP films; I don't know if you have seen the famous video of how the Internet works, it's all about peering. That video dates from a long time ago, and now we are working on renewing it a little, but the idea is to stay the same: not teaching, but letting regular people know what an internet exchange is, what peering is, what the Internet is and how it works, in a simple manner.

Regarding our members, I'm not going to go in depth into the news of all of them, but we have some updates. Congratulations to TOP‑IX, who are celebrating their 20th anniversary this year with a new game that you can play; the link is in the presentation on the website. There is also a new IXP announced, and you can find the link here.

Espanix is connecting its fifth site, with Data4.

Interlan is seeing a traffic peak of 400 gigabit per second with 130 ASNs connected, and it is working with several community projects; for instance, Interlan is one of the sponsors of the fellowship programme, and it has announced a partnership agreement with DE‑CIX to enable Romanian networks to connect there. Speaking of interconnections, BNIX and LU‑CIX have announced their interconnection: they are allowing members of one of the internet exchanges to connect to the other for a small fee, reducing the latency between the two countries.

For LINX, London 1 now carries 6 terabit per second of traffic across the LINX PoPs, and there are new LINX PoPs. Internationally speaking, LINX has launched an IXP in Nairobi, in Kenya, partnering with the IXP community there, and LINX and NAPAfrica have announced a collaboration, with more than 300 gigabit per second.

Regarding service developments, and with the collaboration of AMS‑IX and DE‑CIX, LINX also announced new analytics features in its API.

France‑IX, whose presentation you have seen before, has 400 gigabit per second ports available now. It has also merged with Rezopole, and LyonIX is now France‑IX Lyon; the backbone is being upgraded, and there are more than 100 gig ports in Paris.

As for traffic peaks, you have seen Paris at 1.85 terabit per second; Lyon has 64 gigabit and Marseille 270. TOUIX moved to France‑IX Toulouse, and France‑IX Lille opened.

And regarding DE‑CIX: DE‑CIX Leipzig was launched in October, ready for service on 15 November, at a new data centre in Leipzig whose name I am not able to pronounce. DE‑CIX Frankfurt now has 800 gig access available. DE‑CIX is celebrating the 10th anniversary of UAE‑IX in Dubai with a peering cruise. DE‑CIX was also recently presented with the best internet exchange operator award at the carrier awards for the seventh time, so congratulations to DE‑CIX.

Regarding the IXPDB status: we have a road map for the IXPDB; it has been a project for many years, it keeps evolving, increasing the visibility of things, and from time to time we make improvements. The idea of the IXPDB is that it takes data directly from the exchanges ‑‑ we don't have someone manually entering the data ‑‑ through a JSON file that the exchange offers to the IXPDB, following a JSON schema. The benefit is that the information is always up to date, provided, of course, that the internet exchange updates it, and the idea is that more and more exchanges adopt this schema. With this structured data we can build visualisations with maps, charts and whatever is needed, with dynamic filters, and let people download the data sets.

From structured data we go to building data pipelines, and we are able to collect and organise all the data there and encourage, as I said, other IXPs to provide it; I think there are nearly 300 IXPs with this JSON schema, and if others are not doing it yet, it would be worth doing. And there's data validation coming soon, just to avoid mistakes.
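As a sketch of what consuming one of these exports looks like, here is a minimal Python example. The sample document is hypothetical, and the field names follow my reading of the IX‑F member-export schema; check the published schema for the authoritative structure:

```python
import json

# Hypothetical, minimal IX-F member export; real exports carry many more
# fields (vlan_list, if_list, connection details, and so on).
sample_export = json.loads("""
{
  "version": "1.0",
  "timestamp": "2022-10-26T12:00:00Z",
  "ixp_list": [{"ixp_id": 1, "shortname": "EXAMPLE-IX"}],
  "member_list": [
    {"asnum": 64496, "member_type": "peering"},
    {"asnum": 64497, "member_type": "peering"}
  ]
}
""")

def member_asns(export: dict) -> list[int]:
    """Pull the ASN of every member out of an IX-F style export."""
    return sorted(m["asnum"] for m in export.get("member_list", []))

print(member_asns(sample_export))   # [64496, 64497]
```

A pipeline like the IXPDB's would fetch each exchange's published export URL, validate it against the schema, and aggregate member lists like this across all participating IXPs.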

The next steps are to share not only the number of ASNs and the interfaces at the exchanges, but also the traffic information, so we can have aggregated graphs and trends and see how traffic across all the internet exchanges around the world is evolving; to extend the comparison tools with more features; to develop a pricing API, also for commercial purposes; and to identify useful third-party data.

And of course the IXPDB wouldn't be what it is without its sponsors, so thank you to the sponsors for supporting it. If you want to be a sponsor, you can contact the secretariat at ix‑f.net, the Internet Exchange Federation.

More information: we have a mailing list resource, and the link is here. The JSON schema is public; you can go there and see what it looks like if you are an internet exchange. The API for the IXPDB is also available; you can see it at the link on the slide. There is a website where you can query, compare, see what is in your region or another region, whether they have MANRS or not, several pieces of information, and you can contact the IXPDB team. With that, I thank you for your attention, and... if you have any questions?

(Applause)


REMCO VAN MOOK: Thank you, Maria. Any questions for Maria? Or Bijal, for that matter? Bijal, how are you? I will get an answer at some point, I'm sure. No, nothing online.

WILL VAN GULIK: She said good, thank you.

REMCO VAN MOOK: Excellent. So I guess that's it, thank you so much, Maria. Next up is Arnold with an update about PeeringDB. Arnold, go for it.

ARNOLD NIPPER: I am from PeeringDB and I want to give a short update on PeeringDB. What is PeeringDB? PeeringDB is the database with all the information you need if you want to interconnect: we have information about the networks, or ASNs, we have information about internet exchange points and, last but not least, there is also information about facilities, and these pieces of information do not stand alone; they are nicely interconnected. So if you want to interconnect, you should definitely register your network, your IXP and also your facility in PeeringDB. And who is running PeeringDB? PeeringDB is an association in the US, but the work behind PeeringDB is almost 100 percent done by volunteers.

These are what we call committees ‑‑ in a company you would call them departments ‑‑ who take care of PeeringDB. First, we have our support entity, which we call the admin committee; then there is operations, who take care of running all the servers and the network connection; then we have our marketing committee, which we call outreach; and then we also have product development, which is done in the product committee.

As mentioned already, all the people behind PeeringDB ‑‑ I guess we are between 20 and 30, I do not know the exact figures ‑‑ are volunteers, and we are looking for fresh blood in all of the committees. In the admin committee we are especially looking for people who speak languages other than English, because most of the people using PeeringDB are not necessarily English speakers.

For the outreach committee: most of the people working with us are heavily technically orientated, and for the outreach committee it would really be nice to have people with marketing experience.

The operations committee is a small and highly trusted group, as you would expect, because they are responsible for all the servers, and we are also looking for new volunteers there; if you have experience with containers, that would be a plus. So if you are interested, shoot an e‑mail to the stewards; they comprise the heads of all the committees and the PeeringDB board.

So, what does PeeringDB do? You see a steep increase in 2016; in 2016 we had a new graphical user interface, and since then you can see that we more or less have around 10 to 12,000 tickets per year that go through the admin committee.

What we have also done recently is work to involve more volunteer contributions; so far, most of the development and feature implementation has been done by a company, 20C, and we are now focusing more on volunteer contributions. So we had security changes from Amazon, changes from Google and various other changes from individuals.

The recent product improvements were for the admin committee: we have better support tools. Then there is ‑‑ I guess Maria also mentioned it ‑‑ the so‑called IX‑F JSON file, which contains the information about all the participants at an internet exchange point; the internet exchange points are able to say, okay, you may pull this information about our participants, and then we match this with what the networks have put into PeeringDB.

We also made it more visible which networks are peering with root servers, and we have implemented organisational policy features which enable multi-factor authentication, which is highly recommended; organisations can now also limit access for new users to specific e‑mail domains and revalidate the accounts periodically. We have also changed the main URL to www.peeringdb.com, which is enforced; this caused a lot of issues for some time, because some users had only peeringdb.com in their scripts and, when they called PeeringDB after the change, they didn't run into an error, they simply got no data back.

With every release we do a lot of bug fixes and small features. I do not know when we introduced it, but we also now have a feature where you can add a small logo for your organisation, and this has been well received.

You also now see when you last had updates to your netixlan and netfac objects: netixlan is your connection to an internet exchange point, and netfac means your connections at a facility.

We also introduced authenticated API queries, in release 2.26, and the main reason behind it is that we ran into issues with the service: we saw a lot of scripts running wild, and this of course also increases the fees we have to pay for operating the service; therefore, we now rate limit unauthenticated API queries. The reason behind this is also that we want to keep PeeringDB for operational purposes; if you just want to play around, PeeringDB is definitely not the place to go.

If you have an operational interest you can still query the database, and if you still run into operational issues even with authenticated API queries, just reach out to support at peeringdb.com and an engineer will look into the issue and try to optimise your set‑up.
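For illustration, here is a minimal sketch of such an authenticated query in Python, kept offline: it only builds the request and parses a canned response shaped like the API's envelope. The `Authorization: Api-Key` header form and the `/api/net` endpoint match PeeringDB's documented scheme as I understand it, but verify against the current API documentation:

```python
import json
from urllib.parse import urlencode

API_BASE = "https://www.peeringdb.com/api"

def build_request(endpoint: str, params: dict, api_key: str):
    """Return (url, headers) for an authenticated PeeringDB API query."""
    url = f"{API_BASE}/{endpoint}?{urlencode(params)}"
    headers = {"Authorization": f"Api-Key {api_key}"}
    return url, headers

# Canned response shaped like the API envelope: results live under "data".
canned = json.loads('{"data": [{"asn": 64496, "name": "Example Net"}]}')

def first_network_name(response: dict) -> str:
    """Pull the name of the first network out of a response envelope."""
    return response["data"][0]["name"]

url, headers = build_request("net", {"asn": 64496}, "SECRET-KEY")
print(url)                          # https://www.peeringdb.com/api/net?asn=64496
print(first_network_name(canned))   # Example Net
```

In a real script you would send the request with your HTTP client of choice and cache the results, which also keeps you well clear of the rate limits mentioned above.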

I already mentioned account security and 2FA; nothing to add, please use it. Then, we introduced so‑called self-selection fields both for exchanges and facilities ‑‑ and I see I am running out of time. We also improved searching, which is still an issue because most searches have to match exactly what you are searching for; improving that is an ongoing effort.

Now, one point where PeeringDB is not very good ‑‑ because we base our support on volunteers ‑‑ is documentation; nevertheless, we have updated the documentation, and we now have a schedule showing when we will do new releases. You can see that we have already planned releases up to March of next year. We did a user survey, the outcome of which we will publish soon, and we need money for running PeeringDB; we take the money from our general sponsors, thank you to all of them. And that's it. I still have 14 seconds left ‑‑ 12, 11...

(Applause)


Of course, if there are any questions, please come up.

WILL VAN GULIK: Thank you, Arnold. I didn't see any question online.

ARNOLD NIPPER: I am around until Friday and if you have a special question or question generally on PeeringDB, just reach out to me.

WILL VAN GULIK: Thank you very much. So, now we have ‑‑ we didn't mention it in the slides, but we have an AOB, and I think Sander is going to tell us about it ‑‑

SANDER STEFFANN: During the Address Policy session, one item that came up was the free space in the IPv4 pool for IXPs, which seems to have some relation to this Working Group. Can you show the graph, I think it's the next slide, this is the size of the pool, you can see it was going down up to somewhere in 2019, then a /16 got added and it jumped up again and it just kept continuing in the same way.

So, according to the NCC, the current pool will last about seven years from now, so 2029.

The question to this group is: is that enough, are we done in 2029, or do we need some top‑up procedure? My personal suggestion would be, because right now every time we need to top up the pool we need to write a new policy: if we want to keep this pool alive, make a policy that allows the NCC, if the pool drops below a certain level, to use some of the reclaimed space to top it up again, so they don't have to come back to the community for every top‑up. But that would only make sense if there's actually a need for it. So, questions to this group: is this an important pool? Is this something we want to keep alive? And if so, is somebody willing to write a policy for it, or did I just volunteer?

ARNOLD NIPPER: Sander, I guess it's only related to IPv4.

SANDER STEFFANN: Yeah.

ARNOLD NIPPER: I guess we should be able to run an IXP on IPv6 only by 2029.

REMCO VAN MOOK: So ‑‑

SANDER STEFFANN: You do something with IXPs, right?

REMCO VAN MOOK: I do something with IXPs in my day job, and I occasionally dabble in address policy. Looking at the graph, I did something at the end of 2012 and I did something in 2019 related to policy development around the IXP pool. I get why Marco brought this up as the policy officer in the meeting. However, spending time roughly once every eight years re-evaluating where we are seems just fine to me. So, I mean, I would of course love everyone's opinion on the mailing list ‑‑ we have a mailing list, people, please use it, and please subscribe. I would say, you know what, I'm happy to let this run for a while and, by the time we are hitting 2027, maybe we should start looking into what we are going to do at that point. For me at least, 2029 in the current environment feels very, very far away.

SANDER STEFFANN: Does anybody object to that? We will put it on the mailing list, of course, whatever the outcome is; if people feel this is a good solution, I will wait to see what happens on the mailing list and then take it back to the Address Policy Working Group.

WILL VAN GULIK: Thank you, Sander, for this update. With this, I guess we are reaching the end of the session. Thank you all very, very much. I will remind you to rate the talks. And with that, I hope to see you all on the mailing list and at the next RIPE meeting, physically or virtually. See you, okay, bye.

LIVE CAPTIONING BY AOIFE DOWNES, RPR
DUBLIN, IRELAND