RIPE 85

Archives

Plenary session

25th October 2022

At 9 a.m.:

WOLFGANG TREMMEL: Good morning, come in, grab a seat. And if someone is just coming in, please close the door. So, good morning, Tuesday, 9 o'clock. I am Wolfgang and this is Fernando, and we are going to chair this lovely session. This morning we have three speakers, and the first is actually my colleague, Matthias Wichtlhuber, and he is going to talk about a tool called IXP Scrubber and how it uses machine learning to detect DDoS attacks at scale. Matthias.

MATTHIAS WICHTLHUBER: Hi, everyone. Good morning. And thank you for getting up for the first slot today. I really appreciate that. So, this is research from a project that we did together with wonderful colleagues from Brandenburg University of Technology, and what I'm presenting here is a synopsis of a paper that we published at this year's conference in Amsterdam. I want to show you the main results, because I thought it would be really interesting to look into the latest developments in machine learning and how we can use them to solve this DDoS problem at scale.

So, how do I continue here? Distributed denial of service is an ongoing thing: millions of attacks per day globally, Cisco projects a 14% annual growth rate, and things don't get better with new concepts like IoT. The current peak attack that made the news is about 3.5 terabits per second, and one gigabit per second is enough to knock most smaller services off the network. And at DE‑CIX this is a problem: we frequently have connected members that simply drown in traffic, the port gets clogged, and we are unable to solve that.

So, mitigation at internet exchange points is a topic we have looked into for quite a while, and the question is: why would you want to do it there? We could do it at the edge. In a previous study where we quantified this, we found that we could drop 55% of the attack traffic two or more AS hops earlier if we were able to drop directly at the internet exchange point. That would remove stress from the infrastructure and allow us to simplify complex DDoS analysis, because if all the members are doing it individually it's a lot more complicated than if we do it at a centralised point in the network. The question is: can we build a DDoS mitigation system that fits an IXP's operational requirements? Because those are quite different in this respect.

So what do we want to achieve? Low cost, of course, everybody wants that, right? We didn't want to buy any new appliances, so it needs to work with the existing hardware that we have. Low maintenance: what we definitely don't want is manual definition of rules or triggers; there are some tools around where you can do that, but what we want to achieve here is a high degree of automation, so no manual interference. It should be member driven: we don't want to define what DDoS is, we want the members to define what they don't want to receive, and this is something that we take into account here. And as we are talking about machine learning, there is the issue of control: machine learning is sort of obscure, and you want to be able to debug this algorithm.

So, let me introduce the main idea of this whole paper. This is a simple IXP with four members, and it's shifting traffic between these four member ASes, and of course also DDoS. You probably know that there is an operational practice called BGP blackholing; it's a standardised practice, and the idea is that you mark BGP announcements with a certain community, and this signals to the other ASes that they please should drop this traffic before it even enters your network. So, what we did here is look at how we can generate large DDoS training sets from blackholed traffic, because this is traffic that is marked as unwanted, so it should be a good base for building some good classification on top of that. We provide the self-learning IXP Scrubber machine learning model. And the third contribution we made in this paper: with this approach of generating large DDoS data sets we can run evaluations over long time frames and across geographically diverse sites, something like: can I train a machine learning model in Frankfurt and classify in New York? We can answer these questions for the first time in the academic history of DDoS research.
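
To make the labelling idea concrete, here is a minimal sketch in Python of turning sampled flows into training labels from blackholed prefixes. This is an illustration under assumptions, not the production pipeline: the update and flow field names are invented, and it uses the well-known RFC 7999 BLACKHOLE community (65535:666), whereas many IXPs define their own blackholing communities.

```python
import ipaddress

# Assumed: the well-known BLACKHOLE community from RFC 7999; IXPs often
# use their own community values for the same purpose.
BLACKHOLE_COMMUNITY = (65535, 666)

def blackholed_prefixes(bgp_updates):
    """Collect prefixes whose announcements carry a blackhole community."""
    return [
        ipaddress.ip_network(upd["prefix"])
        for upd in bgp_updates
        if BLACKHOLE_COMMUNITY in upd["communities"]
    ]

def label_flows(flows, prefixes):
    """Attach a binary DDoS label to each sampled flow record."""
    for flow in flows:
        dst = ipaddress.ip_address(flow["dst_ip"])
        flow["label"] = int(any(dst in p for p in prefixes))
    return flows

updates = [{"prefix": "192.0.2.1/32", "communities": [(65535, 666)]}]
flows = [{"dst_ip": "192.0.2.1", "src_port": 389},
         {"dst_ip": "198.51.100.7", "src_port": 443}]
print(label_flows(flows, blackholed_prefixes(updates)))
```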

So, let's dive a bit into the details of how blackholing actually works at internet exchange points, because this is really interesting. This is how it should work: you see the IXP in the middle, you see member AS A, which receives some DDoS traffic, and what it does at this point is announce a blackhole, which is redistributed by the internet exchange point; member AS B accepts this announcement and then blocks the traffic. That's the theory.

However, in 50% of the cases it looks more like this: Member AS C does not accept this announcement and forwards the traffic to AS A.

And usually we would consider this to be a bug, but for us it's very useful, because we can see unfiltered, unwanted traffic on the IXP platform, and this is an excellent base for correlating flow data and allows us to automatically generate DDoS labels in a training set. And the really cool thing is that the size is limited only by the amount of BGP and flow data that we can record, as opposed to previous works that often relied on manually labelling network traces, so we can easily outperform any academic dataset that we could find with this method.

So, that sounds very simple, actually. But there are some challenges in this data. The first one is that you somehow need to balance the data. If you are looking at the flow export coming from our routers, blackholed flows are underrepresented, much less than 1%, and that is a problem if you want to apply any sort of sophisticated data analysis on top of it, because the classes are highly unbalanced and machine learning algorithms cannot handle that. We balance by downsampling non-blackholing flows until we reach an equal share of blackholing and non-blackholing flows in the data, while also maintaining some other statistical properties; the details are in the paper if you are interested. A nice side effect is that it reduces the overall raw data by more than 99%, so we can throw away 99% of the flow export and learn on the remaining one or two percent of data. And if we analyse the dataset that remains after this balancing procedure, you see all the data points nicely aligning at the angle bisector, and this is a perfect base for doing machine learning on top of the data set.
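
As a sketch, the balancing step could look like the following; it assumes the per-flow "label" field from the previous snippet and ignores the additional statistical properties that the paper preserves.

```python
import random

def balance(flows, seed=42):
    """Downsample non-blackholing flows to a 1:1 share with blackholing flows."""
    rng = random.Random(seed)
    attack = [f for f in flows if f["label"] == 1]
    benign = [f for f in flows if f["label"] == 0]
    # Keep all (rare) blackholing flows, sample an equal number of benign ones;
    # since blackholing flows are far below 1%, this discards >99% of raw data.
    benign_sample = rng.sample(benign, k=min(len(attack), len(benign)))
    balanced = attack + benign_sample
    rng.shuffle(balanced)
    return balanced
```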

So, actually, we are not using just one dataset in this work but two. The first one is the ML training set; that is the one generated with the method I have just shown. This is the big one: it contains 685 billion flows from five internet exchange points, plus BGP data, covering between 3 and 24 months from the EU and US; after the balancing, those are the flow records that remain. This one is used for the ML pipeline design, the performance evaluation, and everything we are doing there. But we are proposing a completely new sampling method that has not been researched before, so we need another dataset for cross validation, and what we are doing here is cross validating all our models on a dataset that we obtained by attacking our own infrastructure with DDoS-for-hire services. In this dataset we can be sure that nearly everything arriving is actually DDoS, so we can use it for cross validation of our models. And this immediately reduces the risk of introducing some bias via the sampling method from blackholing.


A short glimpse into the data analysis; I wanted to show you some of the challenges. If you look at the sending services we have, in the self-attack set and then in our training set, you see a lot of the usual DDoS suspects, like UDP fragments, CLDAP, SSDP. Comparing these sets, you have a really comparable traffic mix, but there's also this "other" category on the bottom right in the training set, and this is one of the challenges we have here: blackholed traffic is not pure, it's not only DDoS, because the mechanism works on a prefix basis. So if someone blackholes a service, you usually receive DDoS but also benign traffic, and you somehow have to keep that in mind if you are designing a machine learning model on top of that.

So, how does the machine learning model that we are using on top of that actually look? We are not using one model but two. The first is the microscopic machine learning model, where we tag single flows if they are likely part of a DDoS attack. You can see on top here we are getting the sampled data; this is the classification, not the learning process, and if we see some flow with a certain combination of headers that is likely to be part of a DDoS attack, then it gets a tag, which is symbolised by this little black tag over here, and this allows us to solve the problem with the impurity of the blackholing data that I showed on the slide before. After that, we run an aggregation of the flow data: we aggregate all the flows towards a certain target into a fingerprint that represents all the traffic flowing towards this target system. And the idea is that we have a second, macroscopic model that classifies these fingerprints into attacked systems, for instance in this case A, because it receives DDoS, or not attacked, in this case B, because it only receives benign traffic. If this macroscopic model identifies a system to be under attack, we only drop the traffic that actually matches the tags from the first step. So we can really separate the DDoS traffic from the benign traffic at this point.
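
A minimal sketch of the aggregation step between the two models, with invented feature names; the real fingerprints contain more features than shown here.

```python
from collections import defaultdict

def fingerprints(tagged_flows):
    """Aggregate tagged flows per target IP into a fingerprint that a
    macroscopic model can classify as attacked / not attacked."""
    agg = defaultdict(lambda: {"flows": 0, "tagged": 0, "bytes": 0, "srcs": set()})
    for f in tagged_flows:
        fp = agg[f["dst_ip"]]
        fp["flows"] += 1
        fp["tagged"] += f["tag"]   # 1 if the microscopic model tagged the flow
        fp["bytes"] += f["bytes"]
        fp["srcs"].add(f["src_ip"])
    return {
        dst: {
            "tagged_share": fp["tagged"] / fp["flows"],
            "bytes": fp["bytes"],
            "distinct_sources": len(fp["srcs"]),
        }
        for dst, fp in agg.items()
    }
```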

So that is a bit abstract. Let's dive a bit into the details of these two models. This is the microscopic level, where we tag the flows. The goal here is to identify blackholing-prone flow clusters. We are using association rule mining in this step, and each of you has seen this algorithm in action: it is commonly used to create recommendations in e-commerce systems, like "customers buying milk also bought bread". We are just asking the question on header data instead of shopping cart items, and we are asking it a little bit differently: for which type of header combinations would you actually recommend a blackhole? This allows us to come up with rules like the one you see here as an example: for instance, source port 389, which is LDAP, and a certain range of packet sizes; the algorithm would recommend a blackhole for this. The result is what we call a tagging rule, something you can simply match against header data.
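
In the spirit of that step, here is a toy association rule miner over flow headers. The three header fields and the support and confidence thresholds are arbitrary illustration choices, not the paper's actual algorithm or parameters.

```python
from collections import Counter
from itertools import combinations

def mine_tagging_rules(flows, min_support=0.01, min_confidence=0.9):
    """Find header-value combinations that 'recommend a blackhole',
    in the market-basket sense of support and confidence."""
    seen = Counter()        # how often a header combination occurs
    blackholed = Counter()  # how often it occurs in blackholed flows
    for f in flows:
        items = [(k, f[k]) for k in ("proto", "src_port", "pkt_size_bin")]
        for r in (1, 2, 3):
            for combo in combinations(sorted(items), r):
                seen[combo] += 1
                if f["label"] == 1:
                    blackholed[combo] += 1
    n = len(flows)
    rules = []
    for combo, count in seen.items():
        support, confidence = count / n, blackholed[combo] / count
        if support >= min_support and confidence >= min_confidence:
            rules.append((dict(combo), support, confidence))
    return rules
```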

However, we are not blindly accepting anything that comes out of this algorithm; you want to exert some control over what is matched there and what the algorithm can actually fit. So we support this with a user interface, and you can see an example here: you have an overview of the header combination and what the algorithm thinks about it, you get a bit of statistical data that is really quite simple to understand, and then you can classify rules into a decline, staging or accept category, and only what is accepted will later be used by the algorithm to classify data.

We have tested this with some networking experts: we gave them the user interface and let them classify actual tagging rules, and we found that it worked quite well and was quite understandable and useful to them.

So, the other one is the macroscopic level; that's the level where we actually classify the target systems. The goal here is to classify targeted hosts correctly into: this is a host that is under attack, or a host that is not under attack. And we want to do that in a way that is independent of the location and also locally explainable. Classification tasks are really a standard task of machine learning, so I am not going into which algorithm we used here; essentially you can use a wide range of algorithms for that. I rather want to talk about how we make this whole thing independent of the location, because you might see something different in New York than you see in Frankfurt, for instance different reflectors. What we are using here is a method that we borrowed from financial risk assessment; again, probably each of you has been subject to this method, because it is often used for credit scoring, so if you have ever been subject to credit scoring, this has been applied to you. It is called weight of evidence encoding, and it's simply a way to encode categorical variables. If some categorical variable is likely to appear in a blackhole, we give it a positive risk score: this applies to reflector IPs, this applies to DDoS-prone protocols. If something is unlikely to appear in a blackhole, it gets a negative risk score: for instance, the quad 8 DNS provider would get a negative risk score, or protocols mostly used for benign traffic, like HTTP. And what we are actually doing is applying this weight of evidence encoding to the data before we let the classifier see it, so the model consists of two parts: the weight of evidence encoding, which encodes all these categoricals and only passes a risk score for everything it sees to the classifier. The encoding would produce a positive risk score for NTP and a negative one for quad 8, and the classifier would only see this risk score, never something like quad 8. I am telling you this because we can do cool evaluations with it.
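
A minimal sketch of weight of evidence encoding under one common convention, WoE(v) = ln(P(v | blackholed) / P(v | benign)), so blackhole-prone categories score positive; the Laplace smoothing and the toy data are my assumptions.

```python
import math
from collections import Counter

def woe_encoding(values, labels, eps=0.5):
    """Weight-of-evidence scores for one categorical feature.
    Categories frequent in blackholed traffic (reflectors, DDoS-prone
    protocols) score positive; benign-leaning ones score negative."""
    pos = Counter(v for v, y in zip(values, labels) if y == 1)
    neg = Counter(v for v, y in zip(values, labels) if y == 0)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    return {
        v: math.log(((pos[v] + eps) / (n_pos + eps)) /
                    ((neg[v] + eps) / (n_neg + eps)))
        for v in set(values)
    }

print(woe_encoding(["NTP", "NTP", "DNS", "HTTP"], [1, 1, 0, 0]))
# NTP comes out positive, DNS and HTTP negative.
```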

So, let's first talk about the general evaluation that we did with the system. We took the standard approach of benchmarking this with classifiers on all of the data, and we found that an algorithm called XGBoost has the highest overall performance. You will probably know about F1 scores, which is the metric you want to look at here, and the F1 score for XGBoost in this case is larger than 0.98. That means the classification is really, really good, because 1 is the best score you can achieve.
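
For the flavour of that benchmark, here is a self-contained sketch with synthetic data standing in for the non-public IXP dataset; it assumes the xgboost and scikit-learn packages, and the hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))               # stand-in for encoded features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in for attack labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)

# F1 is the harmonic mean of precision and recall; 1.0 is a perfect score.
print("F1:", f1_score(y_te, model.predict(X_te)))
```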

And the other thing that we looked at is retraining. Temporal model drift is a problem that is well known in the machine learning community: you train a model and then try to predict, for instance, targeted hosts two months in the future, and you start getting problems because the distributions have shifted over time and your model simply doesn't fit the data any more. The common approach is to solve this with retraining, but nobody had a dataset long enough to do a proper evaluation of that, and we could fix that with our approach, with the blackholing data. We evaluated daily retraining with a sliding window; you can see this on the plot on the right. We tried daily retraining with a window size of one day, one week and one month, and on the Y axis you see the performance we reached with the classification. One interesting result was that the window size hardly affects the median performance of the classification; the median is always quite good, but if you don't have enough data you produce occasional outliers, which you see marked in red: you suddenly have days in the data set where you get really bad performance, and increasing the window size helps there.
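
A sketch of daily retraining with a sliding window; the dataset layout and the helper signatures are assumptions for illustration.

```python
from datetime import timedelta

def daily_retrain(dataset, window_days, clf_factory, evaluate):
    """On each day d, train on the window [d - window_days, d) and
    evaluate on day d itself. 'dataset' maps a date to (X, y) for that
    day; clf_factory returns a fresh classifier; evaluate returns F1."""
    scores = {}
    for day in sorted(dataset):
        window = [dataset[day - timedelta(days=k)]
                  for k in range(1, window_days + 1)
                  if day - timedelta(days=k) in dataset]
        if not window:
            continue
        X_train = [x for X, _ in window for x in X]
        y_train = [y for _, Y in window for y in Y]
        clf = clf_factory().fit(X_train, y_train)
        X_test, y_test = dataset[day]
        scores[day] = evaluate(clf, X_test, y_test)
    return scores  # compare medians and outliers for windows of 1, 7, 30 days
```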

The other thing we evaluated is this type of model drift, or model transfer as we call it: the idea that we train a model in Frankfurt but classify in New York, or the other way around. When we made the first attempt at this, we did the following: we took the model, which consists of the weight of evidence encoding and the classifier, trained both parts at IXP A and tried to classify at IXP B, and the result was pretty mediocre. What you can see here is, on the Y axis, the IXP that we used for training and, on the X axis, the IXP that we used for evaluation; blue is good and yellow is not that good. You see that if you do it that way, it only really works if you train and evaluate at the same IXP, and model transfer is really difficult if you don't want to risk a performance drop. So we thought: what can we do about that? And we realised that we can split the model into two parts: we train the model at IXP A, but then we only take the classifier and move it to IXP B, while keeping the local weight of evidence encoding of IXP B. It turned out that this works very well. And this is a really interesting result, because it shows that each IXP sees different DDoS vectors and attacking systems, for instance different reflectors, different attack vectors; the weight of evidence encoding differs geographically, and it is really helpful to encapsulate local knowledge here, up to the point where it's nearly irrelevant where you train the classifier. This makes intuitive sense: if you see certain reflectors at one IXP, it doesn't mean you see the same reflectors at a different IXP, right? So, did that fulfil our requirements? We think yes. Low cost: it works with flow export. Low maintenance: once you have automated the retraining, there's not really much to do; just keep an eye on it so that the performance doesn't deviate. It's also member driven, because with our approach the members simply define what they want to drop by blackholing, which we think is a pretty neat way to get the members' opinion into the process. And we have also shown that this is pretty controllable: we have done a lot of evaluations, we can limit the possible damage of false positives, and we also understand the performance limitations now, which is really important if you want to classify some 11 or 13 terabits of data per second. And that brings me to the end. That was the very short version, so if you want to know more, feel free to scan the QR code here and you get a copy of the paper. Only here is it for free; if you have to download it from the ACM portal they will charge you. And I am open for questions.
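
Reusing the woe_encoding and XGBClassifier pieces from the earlier sketches, the split-transfer idea could be expressed as below; flows_a/labels_a and flows_b/labels_b are placeholders for the balanced datasets of two IXPs.

```python
def train_at_ixp(flows, labels):
    """Train both parts at one IXP: a local WoE encoder and a classifier."""
    feats = ("src_ip", "proto")  # illustrative categorical features
    woe = {f: woe_encoding([flow[f] for flow in flows], labels) for f in feats}
    X = [[woe[f].get(flow[f], 0.0) for f in feats] for flow in flows]
    return woe, XGBClassifier(n_estimators=200).fit(X, labels)

# Transfer: keep the classifier trained at IXP A, but score IXP B's traffic
# with IXP B's own locally derived WoE encoding, so local reflectors and
# protocols are judged with local knowledge.
_, clf = train_at_ixp(flows_a, labels_a)
woe_b, _ = train_at_ixp(flows_b, labels_b)
feats = ("src_ip", "proto")
X_b = [[woe_b[f].get(flow[f], 0.0) for f in feats] for flow in flows_b]
predictions = clf.predict(X_b)
```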

WOLFGANG TREMMEL: I think we have one online question.

FERNANDO GARCIA: Michele ‑‑ speaking about himself ‑‑ asks: Could this anti-DDoS solution also be abused to block your competitors' data at the same IXP, and does it have any safeguards to limit that?

WOLFGANG TREMMEL: Can it be misused?

MATTHIAS WICHTLHUBER: I mean, sure, as with any system in computer security, the system can be abused. It's always the question of what the cost of influencing it is, right? The same holds, for instance, for Tor or other systems in IT security. So we have looked at scenarios like this, for instance poisoning the blackholing data, which is the base for the learning process. This could be done, but you need to invest quite a lot of money to do it: you need to be present at the IXP that is measuring the actual data, which means you need a port there, you probably also need a router; maybe you can solve this with a virtual machine, but it's definitely not for free, it's in the range of, let's say, a four-digit amount of dollars. And you would need to generate considerable amounts of traffic; it's not like you can generate one megabit and you are done, because you are competing with all the other DDoS traffic. So it's possible in principle, but I think it's much more likely that someone who wants to take a system off-line is looking for a different victim.

AUDIENCE SPEAKER: From the global ‑‑ foundation. My question is related to the previous one, but a little bit different. When you spoke about the classifying, you provided an example where traffic originated from a big, well-known DNS provider would not be classified as an attack, and traffic not originated from this big, well-known provider likely would be. So how is this different from, say, SMTP antispam rules? Because I believe everyone knows that nowadays, if you set up your own mail server, you likely will be blocked by many providers just because they don't know you; you are not coming from Microsoft or any big player.

MATTHIAS WICHTLHUBER: So the issue with what you are mentioning is that you don't have a chance to opt out as a provider, right? That's the real problem there, I guess; if we offered something like that, it would be sort of opt-out. We can also influence what we are actually filtering here pretty well, because we have defined certain points where we can influence the algorithm. For instance, if we know that a certain DNS provider is producing false positives, we could adjust its weight of evidence score manually, which would mean it wouldn't be considered bad by the algorithm.

AUDIENCE SPEAKER: Thank you.

AUDIENCE SPEAKER: From ACOnet ‑‑ internet exchange. I have a question regarding your tool: do you provide the rules for dropping traffic somehow publicly? I mean, if I debug some connectivity issue, it would be interesting to know whether my traffic is currently being dropped somewhere or not, independent of who set the rule, because it can happen that an AS is behind an AS behind an AS that drops packets, and I lose some connection I didn't intend to lose.

MATTHIAS WICHTLHUBER: This is still an operational project; it's not far enough along that I can answer that question. I guess you would get insights via the portal.

AUDIENCE SPEAKER: I know. That is the problem; it should be public ‑‑

MATTHIAS WICHTLHUBER: I wasn't finished with the answer. I gave a talk on the actual filtering rules and how they are generated at the last RIPE meeting, RIPE 84, in the Anti-Abuse Working Group, and we have open sourced the filter list. And we will probably use a similar mechanism to open source the filter lists for people who want to debug stuff.

WOLFGANG TREMMEL: Thank you very much.

(Applause)


WOLFGANG TREMMEL: There are Programme Committee elections going on, and if you would like to join us you can nominate someone or self-nominate until today at 3:30; just send an e-mail to pc [at] ripe [dot] net, include your biography and why you would like to join us. The next speaker is Valerie Aurora from Frame Shift Consulting, and she is going to talk about how community management is like network management. Since I have been doing network management for a very long time, I am really interested in this talk.

VALERIE AURORA: Just checking the sound. I think it's cool to have different colours of microphones for skin colour, but now we have mask colour.

I can start with the easy part, which is: I'm Valerie Aurora, my pronouns are she, her and hers. It is exciting to be here; I haven't been to a conference in person since 2018. I am here to talk about how community management is like network management, and in a minute they will have my slides up.

I have to say, today I learned the mystery of the clicker: sometimes it is really slow, sometimes it is routed really far out of the way, and because you are all experts on that, today's clicker is very responsive. I am here to talk about how community management is like network management.

So, networks and communities both need management, but when they are small, it's really easy. When your network is small, managing routes is easy; you can just do it by hand. Same thing with communities: it's very easy to manage a small group of people. When they get big and grown up, we need things like BGP plus monitoring, blacklists, filtering, RPKI, all the stuff that people are giving talks on today.

When communities are big, we need a Code of Conduct, among many other things. So today I am going to talk about what codes of conduct are for, how they work, how they are enforced, and how to prevent abuse of them (I noticed that was the first question), and we will have time for Q&A.



I have been doing this stuff for over ten years, I co‑wrote a free e‑book, it's actually novel length. I was lead author of a Code of Conduct that's now in use by thousands of groups including most technical conferences and I do some other things as well like DEI coaching and things like that. Also, I used to be an operating systems programmer, so my first Linux kernel patch was in 2001, I fixed a TCP proxy bug, it only broke every other connection. I fixed that while I was building an Internet appliance and I wrote file systems. I also collected questions for the TCP/IP drinking game which you can see me playing while wearing my Red Hat.

Small networks are simple to manage; this is the ARPANET in 1977. Small communities are too: if you have a party with ten people at your house and someone is being a jerk, the person who lives there, the person who is running the party, or any random person says "get out of here" and you just don't invite them back. Easy. What happens when networks get big? Here is the list of routing incidents from the MANRS observatory for September 2022; there are about 1,500 issues. If you had to deal with those one by one by hand, that would be pretty miserable. Communities, when they get big, have problems too. Just to give you a very extreme example: DEF CON is a hacker convention in Las Vegas. I went in 1995 and stopped going a few years later, but they have made things better, so about 30,000 people attended in 2019. There were about 30 reported incidents, most of which were things like someone got lost in the casino; as a result, about 3 people were banned from future DEF CONs, so about a one in 10,000 chance per person of getting banned. One of the people who was banned was literally a social engineering expert. So...

What's the solution? I think the solutions for communities and networks are similar. This is from a MANRS post: it is only through collective action and a shared sense of responsibility that we can address problems like BGP leaks, hijacks, DDoS and spoofing that have real-world consequences for millions of people. The same thing is true for people behaving badly within your community: it has real-world consequences, and it is currently having real-world consequences on your community, the people in it, and the people who chose to leave or not join.

So what are codes of conduct for? What is their purpose? I am going to do all this by analogy with RPKI, which, by the way, I learned for this talk, so I will be making mistakes all the time; please forgive me.

Routing improvements prevent accidental or intentional advertisement of bad routes. Codes of conduct prevent harm to less powerful community members. And I will talk a little bit more about that.

So, less powerful: the Code of Conduct needs to protect the less powerful. Here is a question for you: who is more likely to be sexually or racially harassed, especially if there's no Code of Conduct: a male middle-aged executive, in the United States typically a white person, in your area something more complicated, or the young woman student who is part of a marginalised racial or ethnic group? I hope you see it; it's really important that the less powerful get protected by the Code of Conduct.

So here is how we protect less powerful organisations using BGP and RPKI. You start with trust anchors, which issue cryptographically signed certificates, showing who is who, to network operators, saying which IP ranges they own. The TAs keep repositories of network information, and network operators query them and use them to filter routes. Here is an example of how communities protect the less powerful. Early on in my work, the CEO of a large technical company that you have heard of told us: we don't need a Code of Conduct, that doesn't happen here, we are so professional. So I arranged a call with someone I knew had been sexually harassed at several of this person's flagship conferences, and the CEO said: oh, we are adopting a code of conduct. This is the action that the most powerful person in this organisation took to protect people who were less powerful.

I want to talk a bit about "that doesn't happen here". One of the things I learned is that I only see the things that happen to me. When people say this, what they mean is: that doesn't happen to me here. If you have more than 100 people (I think the threshold is actually lower), it's almost a mathematical certainty that someone is being harmed or harassed in your community. There is also a group of people who are certain this is not happening because they know, they know everything about what it's like to be a person.

So, how do codes of conduct work? I love the name of MANRS: Mutually Agreed Norms for Routing Security. Together we are working to make an agreement about what we think is acceptable.

Codes of conduct also establish mutually agreed norms, and I want to say here: your obvious norm is someone else's surprise. People really have different ideas of what acceptable behaviour is. I have an example of a person who gave a talk at a Ruby conference and included pornography because he wanted to get everyone's attention, and when we let him know that that was not okay, he said: oh, if I had known it wasn't cool to have pornography in my slides, I wouldn't have done it. But he needed to be told, because he came from a group of people who thought that was awesome.

So, how are codes of conduct enforced? In RPKI, the AS sends a ROA to the TA, signed with the key that was issued by it; the network operators ask for the records, verify them, and decide what to do. With a code of conduct, people do stuff, maybe good stuff, maybe bad stuff; other people file reports; the committee investigates and recommends an action; and, this is really important, the committee can't carry this out themselves: the community reviews it and decides what to do.
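
For the networking side of the analogy, here is a minimal sketch of route origin validation against ROAs, following the valid/invalid/not-found semantics of RFC 6811; the ROA entries are invented, and a real validator does far more (certificate chains, repository fetching, expiry).

```python
import ipaddress

ROAS = [  # (prefix, max_length, origin_asn): illustrative entries only
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
]

def validate(prefix, origin_asn):
    """Return 'valid' if a covering ROA matches origin AS and maxLength,
    'invalid' if covered only with a mismatch, 'not found' otherwise."""
    prefix = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_asn in ROAS:
        if prefix.subnet_of(roa_prefix):
            covered = True
            if prefix.prefixlen <= max_len and origin_asn == roa_asn:
                return "valid"
    return "invalid" if covered else "not found"

print(validate("192.0.2.0/24", 64500))     # valid
print(validate("192.0.2.0/25", 64500))     # invalid: exceeds maxLength
print(validate("198.51.100.0/24", 64500))  # not found: no covering ROA
```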

So I want to talk about recommended actions, because people focus on those one-in-10,000 bans. The most common action is "don't do that", and the other person saying "no problem, I won't do that". Sometimes, and this is pretty often, you just don't do anything: it was an accident, it was a misunderstanding, it was a false report, people already dealt with it, nobody is worried about it any more. Occasionally, if, say, you made a mistake in your talk, people will ask you to edit your slides or video, or just not post them; and in most of these cases the person who made the mistake says: oh, please, please don't publish the video, I don't want the whole Internet mad at me, and I have changed my mind. Very rare is "leave this space until you agree to stop whatever the behaviour is", or just a certain time-out, after which you are allowed back with monitoring. I think this is a really good analogy to what happens with misbehaving network resources: often you just take them off-line for a little while and then you watch them carefully when they come back. Extremely rare is "leave this space and don't come back". That's when somebody has made it completely clear that they like harming people and they will continue harming people; that is their goal, they want to do it, you can't make them stop. So that's a pretty extreme situation. It does happen, it's just not common.

Let's talk about how to prevent abuse. The way this works with RPKI: network operators trust the trust anchors, which are run by the regional Internet registries, and we will talk more about how those are governed. The community trusts the code of conduct committee, which is also run by a trusted organisation; in your case these are both the same thing, RIPE NCC. So the community is, in all cases, the ultimate root of trust. The code of conduct committee does not have the power to enforce its decisions: the people who control mailing lists, meeting admission, Working Group membership, things like that, have to actually agree and implement it. The community then governs the committee through your existing community governance systems, of which you have quite a few; I can talk about some examples here.

So, your RIPE Chair is selected ‑‑ I read the RFC ‑‑ through an extensive community input process; the RIPE NCC members vote on the board and many other things ‑‑ I think that's tomorrow, in the afternoon. The Working Groups select Working Group Chairs and co-Chairs by their own process. And the community chooses the Programme Committee, who then choose the meeting content. So these are just examples of the whole feedback system; they apply also to the Code of Conduct committee.

Let's talk a little bit about due process. People again get really stuck on extreme cases here, and their model is often due process in the formal legal system of your government. So here is an important difference between your government and RIPE NCC. Governments are very powerful; they have, depending on the government, a monopoly on violence, the state monopoly on violence. So they can kill you, fine you, imprison you, kick you out of the country forever. They employ police forces who have professional investigators, they hold courts, they have judges, they can compel you to testify, and they can punish you if you lie in court. Here is what RIPE NCC and the RIPE community can do to you, and this is not trivial: they control the regional Internet registry, and the community and membership of RIPE NCC control things like Working Group Chairs, who gets to be on mailing lists, things like this. So if the community and the organisation choose to carry out the Code of Conduct committee's decisions, it may have an important effect on somebody's career, but the RIPE NCC does not have the ability to compel you to testify and punish you if you are lying; they can't imprison you, all that kind of stuff. So the process needs to be in proportion to the power and the resources of the organisation. If you are saying that you want a legal, court-style process, feel free to fund that; it's going to cost a lot of money. So, I want to talk about false Code of Conduct reports. I was really surprised by what actually happened over the last ten years when it comes to false reports. First, they are extremely rare. The cost of reporting can be very high, even for true reports: there are a lot of news stories around, for example, the me too movement, and if people talk about their experiences with racism they often suffer severe professional consequences. False reports are punished even more, and they are often themselves against the Code of Conduct; I highly recommend that, because making the use of false reports to harass people a Code of Conduct violation is the right thing to do. A code of conduct just makes it slightly less costly to report someone's behaviour and slightly more likely that something will be done about it. It's still costly. So here is what actually happens with false reports; I was really surprised. Guess who is filing them? They were frequently filed by powerful people who had a long record of opposing codes of conduct, and of individual bad behaviour that made most of the community wish that they were not part of the community. That was really surprising to me.

They are often filed against marginalised people who support codes of conduct, and in all the cases I'm aware of, they were quickly discarded, or the filer was punished, by the code of conduct committee. I have one example of somebody who took exception to a quote from a published news article they had written being used in a conference talk as an example of negative behaviour; this person filed a code of conduct report, and the code of conduct committee said: give me a break, no. And then, a few years later, the same person who filed this report was finally banned from the conference after berating a member of the conference staff in a totally unacceptable manner; many cheers were had.

So, here are some real-world code of conduct problems I have seen. The anti-code of conduct people file false reports against pro-code of conduct people; your biggest risk is if you advocate for codes of conduct. There's so much fighting, everyone is so mad, and this isn't worth it, I am not even being paid.

The committee under‑enforces because they are afraid of the personal consequences for themselves. This is why it's so important to support your committee.

Or the committee does nothing because they can't figure out what happened.

I have been in this situation before: there are multiple different languages I don't know, there are multiple people who all seem to behave badly towards each other, there's some sort of massive history going on. That's why I rarely manage communities any more.

I have never seen somebody use a code of conduct report to win an unrelated argument, like, you know, which DDoS prevention system you should be using.

All right. So here is the summary of how you prevent abuse of codes of conduct. The committee is chosen by a trusted authority in some manner. You don't choose any power-hungry jerks. Members must recuse themselves for conflicts of interest; this is "must" in the RFC sense. Recommendations are implemented only if other people agree. The committee publishes regular transparency reports, so this is observability, like the earlier question about what was happening with your automatic classifier at the IXP. You really need to be able to review what they have been doing.

The community uses existing governance structures to hold the trusted authority accountable. And therefore, the committee.

So here are a few tech communities with codes of conduct; there are literally thousands of them, but I know for sure and have links for the IETF, IEEE, ACM, the Python Software Foundation and all its communities, and things like that. I used to maintain a web page listing them all, and it had to be split into multiple web pages, so there are a lot. These are enforced in various different ways; you can go look at their transparency reports if they publish them, which is a good sign that they are doing a good job, to get examples of how they do this.

So what's the alternative to having a code of conduct or RPKI?

I want to emphasise that all that stuff is happening now, and the process is much less fair. Remember before we started implementing RPKI (partially; we are getting there): network operators would just trust other operators by default, they used ad hoc methods to screen out bad routes or bad actors, sometimes they would get them wrong, and sometimes they wouldn't get all the information. Overall, there's no way that was going to prevent BGP hijacks or leaks; all the bad stuff is happening over there where you don't even have any influence.

With communities, every person is forced to trust every other person by default. They may build their own list of harmful people using ad hoc methods and gossip. I am pretty pro-gossip myself, but I much prefer to have a formal system where people can do a good job of finding out whether it's an ill-intentioned rumour or somebody's true behaviour. Overall, people can't protect themselves from harm unless they are very powerful. So if you are in the audience thinking, well, I have never had a bad experience at a RIPE NCC meeting or on a mailing list, I have news for you: people think that you are not a good target because you have too much influence.

All right. So, I would rather do this: it is only through collective action and a shared sense of responsibility that we can make our community safer for the less powerful.

All right. So, here we are. RPKI isn't perfect and neither are codes of conduct, but they are better than what we have now, and they are improving daily. You keep coming back and refining and improving and editing your code of conduct; I have been through this cycle multiple times. Many tech communities have successfully implemented codes of conduct and prevented abuse of them; they have been tested and they work. If you would like to be part of this, please support your code of conduct committee; they need it, it's such a hard job, it's completely miserable. I only do this work when people pay me; I have too many years of it. And please participate in the RIPE NCC General Meeting to make sure that you are holding your community accountable.

All right. We have ten minutes for questions now, but also, this afternoon at 3:30 we have 30 minutes of questions; I would love to see you there. I love talking about this: I have seen all this stuff, I have a million stories, and I have worked out all these weird principles. So I would appreciate it.

WOLFGANG TREMMEL: Thank you, Valerie.

(Applause)


Are there any questions? If you come to the microphone, please, remember to state your name and affiliation.

AUDIENCE SPEAKER: Rob Lister. I love this analogy of the committee and RPKI routing security; this sounds suspiciously like one of those conversations that happens at one of these conferences in the bar, where you speak to someone: oh, what are you working on? This sounds very familiar. And this is why things like this are great, because you can get talking and make these really unusual analogies and connections. So it's inspired, I love it.

My question is: I run a Programme Committee, and we are very mindful of not being the censors when people send us proposals and presentations. We kind of think: okay, I may personally disagree with some or all of this presentation, but I am not there to censor it, I am there to make sure that it's appropriate for our community. But sometimes I get pushback saying: you are censoring, why can't I say this? Are we going too far?

VALERIE AURORA: Exactly.

AUDIENCE SPEAKER (Rob Lister): And it's very subjective what I would say is appropriate and so ‑‑

VALERIE AURORA: I have some principles I can share. In order to claim you are being no-platformed, you need to have a sense of entitlement to a platform and to have had a lot of time with people listening to you. There are people no-platformed right now: people saying unpopular things, people who are less powerful. So I like to start from the framing of: look, we already have a problem with people either self-censoring or being censored; it's happening informally through systems of oppression, sexism or racism or homophobia. So the code of conduct is actually evening out the likelihood that you are going to get no-platformed. I do have a specific principle, the paradox of tolerance; if you look this up, it's a philosophical principle developed after World War 2: you should be tolerant of everything except intolerance itself. The way I have worked this out is: if somebody is trying to do something that is dehumanising to a group of people, especially based on their identity, and I don't want to be dehumanising any group of people, I don't need to allow that, I don't have to tolerate it; nobody gets to get on stage and put my organisation's name behind that. That's the short version. I would be happy to chat at more length in the afternoon. Thank you.

VESNA MANOJLOVIC: I am working for the RIPE NCC. My question is about how you balance the need for transparency and accountability versus confidentiality.

VALERIE AURORA: That is a really good question and is something we have worked with a lot. The information that people need from a transparency report or other announcement is: is the code of conduct being enforced fairly, and am I going to be safe when I attend this community? You can often convey that without naming any names or providing identifying details. Often, if something has happened publicly, everybody knows; it's more interesting when it comes to something that only a few people know about. What I have learned is that a thing that a few people know about often becomes a thing that a lot of people know about. One of the ways to slow that down is to publish a transparency report saying what you did about it. I feel that codes of conduct increase confidentiality, because you have a professional group of people who have been trained to respect confidentiality. This is something I have run into a lot myself: yes, I am a member of a board of directors, I need to not share what happened; these other people don't feel that way. So, yes, thanks.

WOLFGANG TREMMEL: Any more questions? Okay. It doesn't look like it, thank you very much. And please do not forget to rate the talks. Go to the website, there is a button behind each talk where you can rate it, and that helps us in the Programme Committee immensely to select the talks for the next RIPE meeting. Good. The next speaker is Leandro Bertholdo. He is going to talk about the asymmetry of internet exchange points, why CDNs should care, and why we should care.

LEANDRO BERTHOLDO: Well, yesterday I was here and I listened to the presentation by Tobias about hypergiants in the Internet and how content is centralised. Here we have another issue, the other side: small CDNs, small providers, small content providers, and some problems they are facing with internet exchanges. With these types of problems, they normally prefer to stay behind a big provider and pay for transit instead of using the IXPs. And, strangely, talking with them, they said the problem was not the cost itself, it was the low quality of routing inside the internet exchange. And we tried to investigate what they were saying about that.

Well, normally these small CDNs use some solution based on proxies and Anycast. If we look, for example, at the top 1 million domains in the Internet, more than half of them are using Anycast at least for DNS. So the use of Anycast in the Internet has been growing every year since the big DDoS attacks started in the beginning of the 2000s. It's a good solution for content distribution and for getting better resilience in the network in case of attacks.

The fact is, Anycast is no more than a hack on the routing, because we take the same IP address that we use for Unicast servers and spread it over different parts of the world, or over different parts of our network. That makes each part of the network go to the closest server that is there. That is what you expect, but it is not always like that: because of ISP routing policies, it happens that users in, for example, one continent are directed to other continents, and that has some impact. And what is the problem at IXPs? We know this happens because there are a lot of different policies inside each of the providers. But inside an internet exchange, normally, when somebody connects, what you expect is: I send traffic to any other participant and I will receive the return traffic there, because we are directly connected. But that's not true. For part of this traffic, the other participant simply ignores our announcement inside the internet exchange and prefers to send it by other means. And we have other cases where somebody doesn't announce anything at the internet exchange, and yet we receive traffic from them. Strange? We identify these as asymmetrical traffic inside the internet exchange, and we want to quantify it: to know whether there is a difference from one internet exchange to another, and how many of the ASes prefer A, B or C. That is what we want to know here.

Well, why is this asymmetry important, and why is it really bad? There are a lot of well-known old problems with it: for example, if you try to do a latency estimation in these cases, you get wrong values; troubleshooting networks that are in this type of asymmetric position is always tough for an administrator; and optimisation is a problem here too. And we have some new problems. Example: low quality paths. When we have a network with this configuration, normally one path is fast and the other is slow, and we found one case of a provider being overcharged because of this scenario: they had a PoP in the US and a PoP in Europe, part of the users of another CDN were sending traffic to Europe instead of the US, and they were charged four times the price for this type of service between two different Anycast networks, or two different CDNs. So there are other problems with this type of service, too.

Well, then, what we want to know is: is there a difference between internet exchanges, does each of these autonomous systems have a different behaviour, can we map the individual behaviour at each internet exchange, and what can we do to improve the situation?

So the question is: can we maybe do a better Anycast, a better way to build an Anycast network? That is our goal here. We first tried to identify a way to measure this. First we tried traceroutes. Normally, outside Europe we get very low coverage: in Sao Paulo we could identify only 4% of ASes, and another difficult problem is identifying whether a traceroute is really passing through the internet exchange or not.

IXP data flows: well, in some places people still have issues with the legislation. That means the IXP administration is not sure whether they can share or use the information from the data flows, even for research purposes.

And the other thing is: if we analyse the data flows, we can be sure when we see one flow coming and going back through the internet exchange, but we do not know what to do when we see just one direction, because IXPs normally use sampled flow export, so we cannot tell whether a one-way flow is a sampling artifact or a real asymmetry.

And the last one is about routing dynamics, because sometimes there are variations in the Internet, for example when a submarine fiber cable has problems, and then we see variations for a month. So we reduced our problem to just knowing the neighbours that are directly connected to the internet exchange. What we did here is set up an Anycast network inside each of five internet exchanges, and we generated traffic for 6 million /24 networks towards the participants of these internet exchanges. Well, it was nice, because we could map almost 90% of all ASes at these internet exchanges in 15 minutes, which is good; and when you compare with the results from traceroutes to each of these ASes, at AMS-IX, for example, we increased coverage from 59% to 91% of mapped ASes, and in Sao Paulo from 4% to 79%. That allows us to have a good picture of how each IXP works.

Well, that's the test‑bed we used. It was built for the university some years ago.

And those are the IXPs that we tested: five IXPs in the US, South America and Europe. And how did we test here? Basically, we generated traffic inside and outside the internet exchange and manipulated the routing, using an equal-size prefix announced via our Internet provider at an Anycast node in another region of the planet, and a more specific prefix inside the internet exchange. What we got: we learned that we have these deaf neighbours, who always ignore what we send to the internet exchange; they never use our prefix to return traffic. And we have some mute neighbours that do the contrary: they do not announce anything at the internet exchange, but they send traffic, which is strange, at a minimum.
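
A minimal sketch of that neighbour classification, assuming we already have per-AS byte counters for traffic crossing the IXP LAN in each direction; the counter layout is invented.

```python
def classify_neighbours(counters):
    """counters maps a neighbour ASN to (bytes_sent_via_ixp,
    bytes_received_via_ixp) between us and that AS across the IXP LAN."""
    classes = {}
    for asn, (sent, received) in counters.items():
        if sent and received:
            classes[asn] = "symmetric"
        elif sent:
            classes[asn] = "deaf"  # ignores our IXP announcement, replies elsewhere
        elif received:
            classes[asn] = "mute"  # announces nothing at the IXP, yet sends traffic
        else:
            classes[asn] = "inactive"
    n = len(classes) or 1
    share = sum(c == "symmetric" for c in classes.values()) / n
    return classes, share  # 'share' is the per-IXP symmetry metric mentioned later
```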

And we noticed, too, that the path across the internet exchange is really being depreferred by the IXP customers; that means a lot of customers do not want to receive traffic at the internet exchange, and when we try to force this traffic using, for example, local preference, at some internet exchanges we see a negative impact, that is, worse round-trip times. We noticed that, with some variation from one internet exchange to another, around 85% of all the traffic we got is symmetrical, which is very nice, but 12% of it is like those deaf neighbours. The traffic graph of a deaf neighbour at an internet exchange looks more or less like the one you see here. We evaluated around 50 different cases that we could validate with the IXP administration from graphs like that. So, some big players are doing this.

Why? In about half of the cases we consulted, it was a configuration mistake; normally these are small systems. And the other part just said: okay, I want to do that, we prefer not to use the internet exchange; we are using the internet exchange only as a backup path. That was at least new for us.

Well, the numbers that we got for each of these cases: around 85% symmetry in the best case, which is when we use a local preference towards the internet exchange and a more specific prefix there, so better symmetry numbers in that case. In the other case, announcing an equal-size prefix with our upstream in another region, the asymmetry goes up. We needed special treatment to decouple which path each IXP uses; we describe that in the paper. And this is the type of impact we see when we try to force or push this /24: in some IXPs, the yellowish part shows the RTT increasing, so we get about twice the RTT when we use the /24. So normally, when we try to force traffic to return through the internet exchange, we are attracting bad routes.

Well, the second part is: who is doing that? There is a classification by type of business. We expected, for example, that the most asymmetrical ones would be the normal ISPs. Not really: they are the green ones, so they have very good symmetry when you consider the average of all of them. But we noticed one case, the mobile operators: those are the most asymmetrical. Why they prefer to deliver traffic elsewhere rather than at the internet exchange is another matter; we couldn't find out.

And those that are 100% deaf or mute, only ingress or only egress, that just receive or just send traffic, are in fact a low number: a small number of autonomous systems compared with the total ASes connected there. But when we aggregate, for example, from network-level symmetry to AS-level symmetry, we get just a few ASes that are green, that is, fully symmetric, in most of the cases.

The ingress-only part is more related to other CDNs or Anycast networks, which is kind of expected, because most Anycast networks have asymmetrical features or use some SDN feature or a lot of routing configuration to try to pick the best path.

But we have another case, the egress-only traffic, and there we noticed the following: there is a big contributor generating this asymmetry, and it is related to routing tables, so let's look at routing tables. When we look, for example, at the global view from RIPE RIS, we notice around 10% of paths prepended over 12 years, and this growth of path prepending is well known. But when we look at other cases, like LINX, we see that 30% of paths have prepending, so at least three times the number of prepends. Who is doing that? Surprise: a significant part of this prepending, around 50%, is the neighbour connected at the internet exchange prepending its clients, not itself. The red one means networks where the autonomous system connected to the internet exchange prepends its own prefixes, which is very low. That they prepend their clients was a surprise for us.
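
A sketch of how such prepend statistics can be counted from a table of AS paths; splitting prepends into origin versus intermediate (for example, a neighbour prepending a client's routes) is my simplification of the analysis.

```python
def prepend_stats(as_paths):
    """Count AS paths with origin prepending (the rightmost, originating AS
    repeats itself) versus prepending somewhere earlier in the path."""
    origin = intermediate = 0
    for path in as_paths:
        runs = []  # collapse the path into (asn, repeat_count) runs
        for asn in path:
            if runs and runs[-1][0] == asn:
                runs[-1][1] += 1
            else:
                runs.append([asn, 1])
        if runs and runs[-1][1] > 1:
            origin += 1
        if any(count > 1 for _, count in runs[:-1]):
            intermediate += 1
    return {"total": len(as_paths), "origin_prepended": origin,
            "intermediate_prepended": intermediate}

print(prepend_stats([[64500, 64511, 64511, 64520],     # intermediate prepend
                     [64500, 64511, 64520, 64520]]))   # origin prepend
```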

Okay, the path is nice, but nice just for me, not for my clients. Which is strange.

Another one is, of course, origin prepending: in some IXPs the origin prepend rate is very high. So we went to look at that, and we noticed that some big autonomous systems connected to the internet exchange, for example Hurricane Electric, contribute a very high number of paths. Here, for example, is the total of prefixes you can see at SIX, and here is the contribution of prefixes from AS Hurricane Electric: they are responsible for half of the prefixes on that internet exchange. And it is not just that one; other internet exchanges show similar behaviour. In Amsterdam, for example, it is around 30%, but at SIX it is more than half, so they have a huge share of the prefixes, and that is just one autonomous system. That is the cause of the effect you can see here, about the difference in time and quality.
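
Those per-AS contribution numbers boil down to a simple aggregation over a route-server dump; a sketch with placeholder routes (AS6939 is Hurricane Electric, everything else is invented):

    from collections import Counter

    # Hypothetical (prefix, contributing AS) pairs from a route-server dump.
    routes = [
        ("203.0.113.0/24", 6939), ("198.51.100.0/24", 6939),
        ("192.0.2.0/24", 64500), ("203.0.113.128/25", 6939),
    ]

    per_as = Counter(asn for _prefix, asn in routes)
    total = len(routes)
    # Share of the route server's prefixes contributed by each AS.
    for asn, n in per_as.most_common():
        print(f"AS{asn}: {n}/{total} prefixes ({n / total:.0%})")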

Well, so, conclusions: around 24% of all autonomous systems try to avoid exchanging traffic over the internet exchange. That is a pretty big number, in my opinion. Around 28% of all IXP paths have some type of prepend, and 50% of that is neighbours prepending their clients' prefixes inside the internet exchange. Another number: about 8% of all autonomous systems just ignore all the IXP's routes, and 34% of all IXP prefixes will never send traffic back to you. It's like: okay, I am connecting to the internet exchange with an open peering policy, waiting to receive traffic from all of them; in the worst case, only two-thirds of those ASes return traffic to you, across the internet exchanges that we analysed.

Well, what can we do, in the case of the CDNs or small CDNs, to try to solve this issue? Normally, if they are using Anycast, we need to bring the solution to Anycast. So, one thing you can do is use the IXP to inform which of the neighbours really provide symmetric traffic. That is one way to do it. For example, apply this technique to say: okay, if you go to this internet exchange, it has 94% symmetry, so you will receive traffic back from 94% of all ASes there; or you receive from only 65%. Those are the best and worst cases we saw; it is like a symmetry metric for the internet exchange. The other one is incentives: the IXP itself can use the flow data it already has to identify whether a prefix is symmetric or not. It is not possible for the IXP to identify which ones are asymmetric with 100% confidence, but it can be 100% confident about which ones are symmetric. That helps CDNs to know whether they have a good path or a bad path; it is like adding quality information to the paths on the internet exchange.
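
A minimal sketch of that "confidently symmetric" idea over IXP flow data, with invented flows: seeing a pair of ASes exchange traffic in both directions across the fabric proves symmetry, while seeing only one direction proves nothing, because the return traffic may simply be unsampled or flowing elsewhere.

    # Hypothetical sampled flows across the IXP fabric: (src AS, dst AS).
    flows = {(64500, 64501), (64501, 64500), (64502, 64500)}

    # Both directions observed: confidently symmetric at this IXP.
    symmetric = {pair for pair in flows if (pair[1], pair[0]) in flows}
    # Only one direction observed: possibly asymmetric, not provably so.
    one_way = flows - symmetric

    print("confidently symmetric AS pairs:", symmetric)
    print("one-way only (inconclusive):", one_way)

    # An IXP-level "symmetry score" of the kind the talk proposes.
    score = len(symmetric) / len(flows)
    print(f"symmetry score: {score:.0%}")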

The other one is about standardisation. For example, there is a Cloudflare proposal this year about marking all the networks that are Anycasted at the internet exchange, so that the provider, or anyone dealing with such a prefix, knows it is talking to a network that is Anycasted, or a CDN, for example.

The other thing, and this won't solve the problem inside the internet exchange, is that maybe we can go further and try to create a special AS range for those Anycast networks, so that everyone, everywhere, is able to recognise when they are talking to some CDN.

And another thing that could happen: maybe it is time to implement Anycast, or that idea of having content distributed over different sites, as a proper protocol, instead of just using a routing hack.

Everything we did is reproducible: there is a little system you can access on this GitHub page here, where you have the whole data set we used and the software that was developed; you can check each one of the internet exchanges and each of the autonomous systems, and how we recognised and mapped them. Okay. Thank you very much.

(Applause)


WOLFGANG TREMMEL: Thank you. I think we have one online question.

FERNANDO GARCIA: Thanks. Kurt Kaiser, a private citizen: Did you run these tests over IPv4 and IPv6 and if yes, are there any different results?

LEANDRO BERTHOLDO: No, we just ran over IPv4.

FERNANDO GARCIA: Okay.

WOLFGANG TREMMEL: That's a way to do it, you just need to be transparent.

LEANDRO BERTHOLDO: Because there are some limitations at the current moment in identifying targets or running active measurements over IPv6.

AUDIENCE SPEAKER: This is ‑‑ from an internet exchange. Amazing work, well done, very interesting. I have one question. You gave a definition of mute neighbours: they don't get IXP routes, but they still send traffic over the IXP. How do you recognise that? Did you see on the route servers that they don't peer with the route servers and so don't get prefixes? I am very curious.

LEANDRO BERTHOLDO: You remember that figure where we tested from inside the internet exchange and from outside? What we do is generate traffic from outside; if the traffic still comes back over the IXP, we know something is odd, because this neighbour should not even have that route. And in the other case, where we forward the traffic over the IXP ourselves, we know the routing table, and if we do not receive the traffic back over the IXP, we know the neighbour is not sending traffic back that way. So we identify one case with the first method and the other case with the second; depending on where we generate the traffic, we know what the neighbour is doing. Imagine I am generating traffic outside the network and somebody sends the reply back over the internet exchange: wait, you are not in the routing table, how can you send me traffic here? I am using the normal Internet routing tables to reach somebody, and we receive the reply back over the IXP because we have, for example, a more specific prefix announced there.

AUDIENCE SPEAKER: Okay. So you have a node inside the IXP peering LAN?

LEANDRO BERTHOLDO: Imagine that everyone connected to the internet exchange is attracted by a more specific prefix to prefer this path. A neighbour that does not take routes from the internet exchange would never send traffic to us over it, because the route is not in its table. In that case, we generate the traffic outside the internet exchange; the neighbour receives it, decides that the best path back is via the internet exchange, and sends it back that way. That is how we can show it.
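
A toy version of that classification logic, assuming we record, per neighbour, whether replies to probes generated outside the IXP come back across the IXP, and whether probes forwarded across the IXP are answered. The function, names, and observations are all hypothetical illustrations, not the paper's code.

    def classify_neighbor(sends_via_ixp, receives_via_ixp):
        """Classify a neighbour from two boolean observations:
        - sends_via_ixp: a reply to a probe we generated outside the IXP
          came back across the IXP (our more specific prefix announced at
          the IXP attracted it), so the neighbour sends via the IXP.
        - receives_via_ixp: a probe we forwarded across the IXP reached
          the neighbour and was answered, so it receives via the IXP."""
        if sends_via_ixp and receives_via_ixp:
            return "symmetric"
        if sends_via_ixp:
            return "egress-only (sends over the IXP, never receives)"
        if receives_via_ixp:
            return "ingress-only (receives over the IXP, never sends)"
        return "avoids the IXP entirely"

    # Hypothetical observations for three neighbours.
    for asn, obs in {64500: (True, True), 64501: (True, False),
                     64502: (False, True)}.items():
        print(f"AS{asn}: {classify_neighbor(*obs)}")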

AUDIENCE SPEAKER: My mind went to: maybe you correlated the traffic patterns with what you see in the routing tables of the route servers, so you took those down and then correlated, because usually that is where you find the shortest paths.

LEANDRO BERTHOLDO: We used both: the routing table, correlated with the results we got from the forwarding plane.

AUDIENCE SPEAKER: Okay, thank you.

LEANDRO BERTHOLDO: That was really something different.

AUDIENCE SPEAKER: Antonio Prado, SBTAP. Thank you, Leandro, for your great presentation. I am curious to see whether, in a future measurement, the results could be different for an IPv6‑only measurement; that is the first question.

Second one: if I recall correctly, you said you used 6 million /24 networks. Out of curiosity, where did you take those networks from?

LEANDRO BERTHOLDO: There is a hit‑list, call it that, made by the people of US (inaudible); it maps, for all /24s, which addresses are responsive to ICMP, for example. So there are [[inaudible]], and we took a subset with the most responsive ones, trying to get at least one target inside each autonomous system. In this case we got more than one, but we have coverage of each of the autonomous systems.
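
A sketch of that target-selection step, assuming a hit-list of (/24 prefix, most responsive address, responsiveness score, origin AS) rows. The format, scores, and addresses are invented for illustration.

    from collections import defaultdict

    # Hypothetical hit-list rows: (prefix, most responsive address,
    # responsiveness score, origin AS).
    hitlist = [
        ("192.0.2.0/24", "192.0.2.7", 0.95, 64500),
        ("198.51.100.0/24", "198.51.100.9", 0.40, 64500),
        ("203.0.113.0/24", "203.0.113.1", 0.88, 64501),
    ]

    targets_per_as = defaultdict(list)
    for prefix, addr, score, asn in sorted(hitlist, key=lambda r: -r[2]):
        # Prefer highly responsive targets, but guarantee at least one
        # target inside every autonomous system.
        if score >= 0.8 or not targets_per_as[asn]:
            targets_per_as[asn].append(addr)

    for asn, addrs in sorted(targets_per_as.items()):
        print(f"AS{asn}: {addrs}")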

About IPv6: okay, we used IPv4 because we have some limitations at the moment in running this over IPv6, because of the privacy mechanisms that IPv6 has. But normally we know that each autonomous system tends to apply exactly the same policy for IPv6 and IPv4; for those we could check, people normally do not choose different paths just because they are using v6 or v4.

AUDIENCE SPEAKER: Thank you.

WOLFGANG TREMMEL: I am closing the queues now, please keep your question short. The next question is online question again.

FERNANDO GARCIA: Someone from AMS‑IX is asking: as far as I understand, you use a single vantage point for... from the Anycast address. Did you try to use a vantage point in a different region, and if so, did it show any difference?

LEANDRO BERTHOLDO: There is a different interpretation of vantage points here. The way I look at it, our vantage points are the roughly 6 million responsive addresses spread over the world, each with a different IP address. What we do is generate traffic towards each responsive address on the Internet and see where the catchment is, exactly as you would measure an Anycast catchment. So when you look at it that way, we have 6 million vantage points, not just one.
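
In code terms, this view of vantage points is just a catchment tally: every responsive address that answers the probes is a vantage point, and where its reply enters the measurement network tells us its catchment. A toy version with invented reply records:

    from collections import Counter

    # Hypothetical reply records: (vantage point address, ingress where
    # the reply to our announced prefix arrived).
    replies = [
        ("192.0.2.7", "IXP-Frankfurt"), ("203.0.113.1", "IXP-Frankfurt"),
        ("198.51.100.9", "transit-NYC"), ("192.0.2.99", "transit-NYC"),
        ("203.0.113.77", "IXP-Frankfurt"),
    ]

    # Share of vantage points landing in each catchment.
    catchment = Counter(ingress for _vp, ingress in replies)
    total = len(replies)
    for site, n in catchment.most_common():
        print(f"{site}: {n}/{total} vantage points ({n / total:.0%})")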

WOLFGANG TREMMEL: Okay.

AUDIENCE SPEAKER: Alexander Azimov, Yandex. I loved the way you performed your measurements, but I have a comment relating to traffic engineering by CDNs. From my experience running CDNs, CDNs care about three things: money; capacity, which is about money again; and quality of service. It's not about symmetry.

LEANDRO BERTHOLDO: It's about what we saw; it's about the quality of paths. Imagine that you have one provider delivering to a user at 100 milliseconds when you could choose another path at 10 milliseconds.

AUDIENCE SPEAKER: Roundtrip time is related to quality of service, but not directly.

LEANDRO BERTHOLDO: Yes. But what I am saying, what we saw, is that some CDNs are in fact building a new type of routing table, using SDN to pick the best path to deliver the traffic towards each destination. So I have a routing table, but by ignoring it and looking at measurements I can say: this is the best result I get from this node; if I have to deliver to, I don't know, provider X, the best node to deliver from is node A instead of B, even though B is the closest one. And if you go further, some CDN providers are connecting their internal nodes to be able to do that internally.
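
A toy version of the behaviour described here, where a CDN overrides the BGP best path and picks the delivery node with the lowest measured RTT for a destination. All node names and numbers are hypothetical.

    # Hypothetical measured RTTs in ms from each CDN node to a destination
    # network, versus what plain BGP path selection would pick.
    measured_rtt_ms = {"node-A": 18.0, "node-B": 42.0, "node-C": 25.5}
    bgp_best = "node-B"  # path-wise closest, but performing badly

    # SDN-style selection: deliver from the node with the best measurement,
    # ignoring the routing table.
    sdn_best = min(measured_rtt_ms, key=measured_rtt_ms.get)

    print(f"BGP would pick {bgp_best} ({measured_rtt_ms[bgp_best]} ms)")
    print(f"measurement-driven choice: {sdn_best} "
          f"({measured_rtt_ms[sdn_best]} ms)")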

WOLFGANG TREMMEL: Perhaps you can take the rest off‑line.

AUDIENCE SPEAKER: Thank you for your answers. I would not hold the queue.

WOLFGANG TREMMEL: We have two more in the queue. Please keep your question and answer short.

AUDIENCE SPEAKER: My name is Ivan Beverage. You mentioned mute paths; could one reason be that the destination is under DDoS mitigation?

LEANDRO BERTHOLDO: Do you mean how this situation works in the case of mitigation?

AUDIENCE SPEAKER: No, I mean, for example, in your slide about mute neighbours, you have people who send traffic to the IXP but don't advertise their routes there.

LEANDRO BERTHOLDO: Do you mean whether we validated that it is [[inaudible]] or something like this?

AUDIENCE SPEAKER: No, I mean: if the destination has a DDoS mitigation service, then they might advertise their route out via the DDoS mitigation provider. That is why the routes you might see will not be advertised to the IXP, yet they still send traffic out via the IXP. So that's a standard kind of...

LEANDRO BERTHOLDO: We did identify nodes doing exactly that, just sending traffic back towards the IXPs. One example is Akamai nodes: apparently they use some internal routing and decide that the best path for our prefix is in one region, so they deliver in that region instead of using other networks or other nodes. I do not know how it works in their tables, whether they use some reference to identify the AS or have a special internal metric to identify the prefix; since we use an Anycast prefix, it would not be easy for them to identify in which of these countries the prefix is. So we believe they have a different approach for that. And that is one of the cases ‑‑

AUDIENCE SPEAKER: The slide you had showing that IXP-connected providers path‑prepend their own networks minimally but their customers' networks more: could that just be because there are more customer prefixes than their own prefixes?

WOLFGANG TREMMEL: Perhaps you can take the rest of the discussion off‑line; we are already running into the coffee break.

AUDIENCE SPEAKER: Rob Lister: Do you have any recommendations for exchange operators? What should we do about this?

LEANDRO BERTHOLDO: One thing, maybe, is this: the most popular policy, I believe, is the open peering policy at the route server. It started out very nicely; the main idea was to connect at the internet exchange and keep the traffic local. But then a lot of other autonomous systems started doing remote peering to the internet exchange, and that is one issue that affects quality. And the other thing we noticed here is the asymmetry: I am connected to the internet exchange, I am not a remote peer, but I am asymmetric, so the quality is pretty much the same as those remote ASes. So the main idea is: okay, maybe create a new policy, or improve this one, marking which peers are remote and which are asymmetric, so we have quality associated with each of these routes that we see in the routing table. Then CDNs can pick accordingly, instead of building a lot of direct connections or a lot of VLANs inside the internet exchange to get a full mesh when you already have the open policy. A new policy, I don't know.

WOLFGANG TREMMEL: If there are any follow‑up questions, please take them off‑line. Thank you very much. Do not forget to rate the talks, and remember the PC elections; see you all at 11 o'clock.

LIVE CAPTIONING BY AOIFE DOWNES, RPR
DUBLIN, IRELAND