
Technology and Policymakers


Technologists and policymakers largely inhabit two separate worlds. It's an old problem, one that the British scientist CP Snow identified in a 1959 essay entitled The Two Cultures. He called them sciences and humanities, and pointed to the split as a major hindrance to solving the world's problems. The essay was influential -- but 60 years later, nothing has changed.

When Snow was writing, the two-cultures theory was largely an interesting societal observation. Today, it's a crisis. Technology is now deeply intertwined with policy. We're building complex socio-technical systems at all levels of our society. Software constrains behavior with an efficiency that no law can match. It's all changing fast; technology is literally creating the world we all live in, and policymakers can't keep up. Getting it wrong has become increasingly catastrophic. Surviving the future depends on bringing technologists and policymakers together.

Consider artificial intelligence (AI). This technology has the potential to augment human decision-making, eventually replacing notoriously subjective human processes with something fairer, more consistent, faster and more scalable. But it also has the potential to entrench bias and codify inequity, and to act in ways that are unexplainable and undesirable. It can be hacked in new ways, giving attackers, from criminals to nation-states, new capabilities to disrupt and harm. How do we avoid the pitfalls of AI while benefiting from its promise? Or, more specifically, where and how should government step in and regulate what is largely a market-driven industry? The answer requires a deep understanding of both the policy tools available to modern society and the technologies of AI.

But AI is just one of many technological areas that needs policy oversight. We also need to tackle the increasingly critical cybersecurity vulnerabilities in our infrastructure. We need to understand both the role of social media platforms in disseminating politically divisive content, and what technology can and cannot do to mitigate its harm. We need policy around the rapidly advancing technologies of bioengineering, such as genome editing and synthetic biology, lest advances cause problems for our species and planet. We're barely keeping up with regulations on food and water safety -- let alone energy policy and climate change. Robotics will soon be a common consumer technology, and we are not ready for it at all.

Addressing these issues will require policymakers and technologists to work together from the ground up. We need to create an environment where technologists get involved in public policy - where there is a viable career path for what has come to be called "public-interest technologists."

The concept isn't new, even if the phrase is. There are already professionals who straddle the worlds of technology and policy. They come from the social sciences and from computer science. They work in data science, or tech policy, or public-focused computer science. They worked in the Bush and Obama White Houses, or in academia and at NGOs. The problem is that there are too few of them; they are all exceptions and they are all exceptional. We need to find them, support them, and scale up whatever the process is that creates them.

There are two aspects to creating a scalable career path for public-interest technologists, and you can think of them as the problems of supply and demand. In the long term, supply will almost certainly be the bigger problem. There simply aren't enough technologists who want to get involved in public policy. This will only become more critical as technology further permeates our society. We can't begin to calculate the number of them that our society will need in the coming years and decades.

Fixing this supply problem requires changes in educational curricula, from childhood through college and beyond. Science and technology programs need to include mandatory courses in ethics, social science, policy and human-centered design. We need joint degree programs to provide even more integrated curricula. We need ways to involve people from a variety of backgrounds and capabilities. We need to foster opportunities for public-interest tech work on the side, as part of more traditional jobs, or for a few years during an otherwise conventional career through designed sabbaticals or fellowships. Public service needs to be part of an academic career. We need to create, nurture and compensate people who aren't entirely technologists or policymakers, but instead an amalgamation of the two. Public-interest technology needs to be a respected career choice, even if it will never pay what a technologist can make at a tech firm.

But while the supply side is the harder problem, the demand side is the more immediate problem. Right now, there aren't enough places to go for scientists or technologists who want to do public policy work, and the ones that exist tend to be underfunded and in environments where technologists are unappreciated. There aren't enough positions on legislative staffs, in government agencies, at NGOs or in the press. There aren't enough teaching positions and fellowships at colleges and universities. There aren't enough policy-focused technological projects. In short, not enough policymakers realize that they need scientists and technologists -- preferably those with some policy training -- as part of their teams.

To make effective tech policy, policymakers need to better understand technology. For some reason, ignorance about technology isn't seen as a deficiency among our elected officials, and this is a problem. It is no longer okay to not understand how the internet, machine learning -- or any other core technologies -- work.

This doesn't mean policymakers need to become tech experts. We have long expected our elected officials to regulate highly specialized areas of which they have little understanding. It's been manageable because those elected officials have people on their staff who do understand those areas, or because they trust other elected officials who do. Policymakers need to realize that they need technologists on their policy teams, and to accept well-established scientific findings as fact. It is also no longer okay to discount technological expertise merely because it contradicts your political biases.

The evolution of public health policy serves as an instructive model. Health policy is a field that includes both policy experts who know a lot about the science and keep abreast of health research, and biologists and medical researchers who work closely with policymakers. Health policy is often a specialization at policy schools. We live in a world where the importance of vaccines is widely accepted and well-understood by policymakers, and is written into policy. Our policies on global pandemics are informed by medical experts. This serves society well, but it wasn't always this way. Health policy was not always part of public policy. People lived through a lot of terrible health crises before policymakers figured out how to actually talk and listen to medical experts. Today we are facing a similar situation with technology.

Another parallel is public-interest law. Lawyers work in all parts of government and in many non-governmental organizations, crafting policy or just lawyering in the public interest. Every attorney at a major law firm is expected to devote some time to public-interest cases; it's considered part of a well-rounded career. No law firm looks askance at an attorney who takes two years out of their career to work in a public-interest capacity. A tech career needs to look more like that.

In his book Future Politics, Jamie Susskind writes: "Politics in the twentieth century was dominated by a central question: how much of our collective life should be determined by the state, and what should be left to the market and civil society? For the generation now approaching political maturity, the debate will be different: to what extent should our lives be directed and controlled by powerful digital systems -- and on what terms?"

I teach cybersecurity policy at the Harvard Kennedy School of Government. Because that question is fundamentally one of economics -- and because my institution is a product of both the 20th century and that question -- its faculty is largely staffed by economists. But because today's question is a different one, the institution is now hiring policy-focused technologists like me.

If we're honest with ourselves, it was never okay for technology to be separate from policy. But today, amid what we're starting to call the Fourth Industrial Revolution, the separation is much more dangerous. We need policymakers to recognize this danger, and to welcome a new generation of technologists from every persuasion to help solve the socio-technical policy problems of the 21st century. We need to create ways to speak tech to power -- and power needs to open the door and let technologists in.

This essay previously appeared on the World Economic Forum blog.

Stefanauss (1589 days ago): The question is no longer whether policy should be dictated by "State or Market?" but by "State or Technology?". We need technologists in politics: "public-interest technologists".

Feedback


Stefanauss (2688 days ago): A skill for life.
ChrisDL (2685 days ago, New York): Good feedback.
tedgould (2686 days ago, Texas, USA): Always try to give good feedback.

Ad blockers are part of the problem


Sponsored by: Terbium Labs — Try Matchlight for free. Fully automated, fully private Dark Web Data Intelligence.


Earlier this year, I wrote about bad user experiences on websites and foremost among these were the shitty things some sites do with ads. Forbes' insistence that you watch one before manually clicking through to the story, full screen and popover ads and ads that would take over your screen after you started reading the article were all highlighted. Unanimously, we hate this experience.

Because the aforementioned experiences are shit, people run ad blockers and I get the rationale: if ads are going to do crap like this then let's ban them. Except then you get the likes of Forbes denying access to their content if you run them and you get into this nasty cycle of advertisers trying to circumvent ad blockers trying to circumvent advertisers. This is just not a healthy place to be.

A couple of months ago, I got fed up with ads too. I didn't start running an ad blocker though, I decided to make a positive difference to everyone's experience when they came to my site and I began offering sponsorship of this blog instead. The sponsor boils down to an unobtrusive line of text like this:

[Screenshot: the sponsor message as it appears on the site]

Readers were happy because none of the shit they usually have to deal with when ads load was there. Sponsors were happy as they were getting prime real estate and heaps of exposure. And I was happy because not only was I giving people a much better user experience, sponsors also pay a lot more than ads do. In fact, since then I've not run a single ad - I've always filled every available sponsor slot. As best I could tell, everyone was happy. But it turns out that's not quite true...

Shortly after launching the sponsorship, someone pointed out that the sponsor message was being removed by ad blockers.

What. The. Fuck.

I get that ad blockers block ads because there's the extra bandwidth they consume, they're frequently a vector for malware and because frankly, they're obtrusive and detract from the viewing experience. But my sponsor message was none of these, what the hell was going on?!

I gave the ad blockers the benefit of the doubt and assumed that because I'd named a class "sponsor_block" and given an element a name of "sponsor_message" it was simply caught up in an automated process of filtering out ad-like content. So I changed things to instead refer to "message_of_support" and in my naivety, assumed this would fix what must surely have been a mistake. There were no more "false positives" as I saw them and the sponsor message again appeared for those running ad blockers. Everyone was happy.

And then it started getting blocked again. Someone recently pointed out that Adblock Plus was causing the message to be hidden, so I installed the extension and sure enough, here's what I saw:

[Screenshot: the sponsor message removed when browsing with Adblock Plus enabled]

This was no longer a false positive, I was convinced they were deliberately filtering out my sponsor. I delved a little deeper, and found that Adblock Plus uses EasyList which has an admirable objective:

The EasyList filter lists are sets of rules originally designed for Adblock that automatically remove unwanted content from the internet, including annoying adverts, bothersome banners and troublesome tracking

Yet when I drilled down into the EasyList definitions of content to be blocked, I found something that didn't meet any of those criteria:

[Screenshot: the EasyList filter entry targeting the sponsor element]

In other words, someone had deliberately decided that the sponsor I show in order to help support me financially - the one with no tracking or images or iframes or malware or other crap - was being consciously blocked. The highlighted line there is just one of more than 57k other examples in that file, many of which are no doubt nasty ads in the traditional sense we think of them.
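For context, EasyList element-hiding rules are one line each: an optional domain, a `##` separator, then a CSS selector for the element to hide. A rule aimed at a sponsor container would look something like the hypothetical entry below (the domain and selector here are invented for illustration; the actual EasyList entry differed):

```
example.com##.sponsor-banner
```

Any extension consuming the list simply hides every element matching that selector on that domain, which is why a plain line of sponsor text with no tracking or iframes can vanish just as thoroughly as a popover ad.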

Unfortunately, because EasyList is used across other ad blockers as well, the problem extends beyond one rogue extension:

[Screenshot: uBlock Origin also removing the sponsor message]

This is uBlock Origin and it was the final straw for writing this post after someone reported it to me on the weekend.

Now as it turns out, Adblock Plus actually defines criteria for acceptable ads, criteria which are entirely reasonable. For example, ads shouldn't disrupt the page flow by inserting themselves into the middle of the content:

[Screenshot: Adblock Plus acceptable ads criteria on ad placement]

Ads also shouldn't consume too much space:

[Screenshot: Adblock Plus acceptable ads criteria on ad size]

This is good - any reasonable person would agree with all of this - yet my sponsor text comes nowhere near exceeding any of the criteria. Clearly, this is a mistake so I went ahead and filled out an acceptable ads application. That was a couple of weeks ago and as of today, their false positive remains. Unfortunately, as best I can tell the process for blocking content involves no review, whilst the process for unblocking errors like this requires human intervention.

When I realised what was going on here, I was angry. I was suddenly sympathetic to Forbes and their decision to block people running ad blockers, which is just wrong: I shouldn't be sympathetic to them. But I'm enormously frustrated at being penalised whilst trying to make a positive difference to this whole ad thing. I was being penalised for doing precisely what the likes of Adblock Plus say I should be doing!

So here's what I'm going to do: absolutely nothing.

I'm not going to rename elements or CSS classes in an attempt to circumvent their blocking, that's a vicious cycle that would only sap my time as I continued to try and circumvent an unjust process. Fortunately, sponsors pay me independently of any form of CPM such as ad providers rely on so it doesn't directly impact me, but of course I want my sponsor messages to be seen as that's why they're there in the first place. I could appeal to people to whitelist my site in their own instance of Adblock Plus or uBlock or whatever other ad blocker they're using, but I'd prefer to appeal to them to report this as an incorrectly categorised ad.

When ad blockers are stooping to the same low level as advertisers themselves are in order to force their own agendas, something is very, very wrong. Deliberately modifying sites like mine which are making a conscious effort to get us away from the very things about ads that led to ad blockers in the first place makes them part of the problem. Ad blockers like this need to clean up their act.

Update (the following day): Shortly after posting this article, Adblock Plus added an exception for the element which shows the sponsor message. They responded by explaining that "I saw your sponsor message and it looks perfectly acceptable" and that it was in compliance with all their criteria. I appreciate their responsiveness on this; supporting responsible ads or sponsors or whatever you want to call them is what we need for a healthy balance of content and monetisation. As for the comment within that link that's concerned this will now display ads if ever I put them back, note that ABP's white-list is specifically for the sponsor banner and there would be no reason for me to put ads in that element.

Stefanauss (2690 days ago): Hugely disappointing; "lead by example" completely disregarded here.

How Cloudflare's Architecture Allows Us to Scale to Stop the Largest Attacks


The last few weeks have seen several high-profile outages in legacy DNS and DDoS-mitigation services due to large scale attacks. Cloudflare's customers have, understandably, asked how we are positioned to handle similar attacks.

While there are limits to any service, including Cloudflare, we are well architected to withstand these recent attacks and continue to scale to stop the larger attacks that will inevitably come. We are, multiple times per day, mitigating the very botnets that have been in the news. Based on the attack data that has been released publicly, and what has been shared with us privately, we have been successfully mitigating attacks of a similar scale and type without customer outages.

I thought it was a good time to talk about how Cloudflare's architecture is different than most legacy DNS and DDoS-mitigation services and how that's helped us keep our customers online in the face of these extremely high volume attacks.

Analogy: How Databases Scaled

Before delving into our architecture, it's worth taking a second to think about another analogous technology problem that is better understood: scaling databases. From the mid-1980s, when relational databases started taking off, through the early 2000s the way companies thought of scaling their database was by buying bigger hardware. The game was: buy the biggest database server you could afford, start filling it with data, and then hope a newer, bigger server you could afford was released before you ran out of room. Hardware companies responded with more and more exotic, database-specific hardware.

Meet the IBM z13 mainframe (source: IBM)

At some point, the bounds of a box couldn't contain all the data some organizations wanted to store. Google is a famous example. Back when the company was a startup, they didn't have the resources to purchase the largest database servers. Nor, even if they did, could the largest servers store everything they wanted to index — which was, literally, everything.

So, rather than going the traditional route, Google wrote software that allowed many cheap, commodity servers to work together as if they were one large database. Over time, as Google developed more services, the software became efficient at distributing load across all the machines in Google's network to maximize utilization of network, compute, and storage. And, as Google's needs grew, they just added more commodity servers — allowing them to linearly scale resources to meet their needs.

Legacy DNS and DDoS Mitigation

Compare this with the way legacy DNS and DDoS mitigation services mitigate attacks. Traditionally, the way to stop an attack was to buy or build a big box and use it to filter incoming traffic. If you were to dig into the technical details of most legacy DDoS mitigation service vendors you'd find hardware from companies like Cisco, Arbor Networks, and Radware clustered together into so-called "scrubbing centers."

CC BY-SA 3.0 sewage treatment image by Annabel

Just like in the old database world, there were tricks to get these behemoth mitigation boxes to (sort of) work together, but they were kludgy. Often the physical limits of the number of packets that a single box could absorb became the effective limit on the total volume that could be mitigated by a service provider. And, in very large DDoS attacks, much of the attack traffic will never reach the scrubbing center because, with only a few locations, upstream ISPs become the bottleneck.

The expense of the equipment meant that it was not cost effective to distribute scrubbing hardware broadly. If you were a DNS provider, how often would you really get attacked? How could you justify investing in expensive mitigation hardware in every one of your data centers? Even if you were a legacy DDoS vendor, typically your service was only provisioned when a customer came under attack, so it never made sense to have capacity much beyond a certain margin over the largest attack you'd previously seen. It seemed rational that any investment beyond that was a waste, but that conclusion is proving ultimately fatal to the traditional model.

The Future Doesn't Come in a Box

From the beginning at Cloudflare, we saw our infrastructure much more like how Google saw their database. In our early days, the traditional DDoS mitigation hardware vendors tried to pitch us to use their technology. We even considered building mega boxes ourselves and using them just to scrub traffic. It seemed like a fascinating technical challenge, but we realized that it would never be a scalable model.

Instead, we started with a very simple architecture. Cloudflare's first racks had only three components: router, switch, server. Today we’ve made them even simpler, often dropping the router entirely and using switches that can also handle enough of the routing table to route packets over the geographic region the data center serves.

Rather than using load balancers or dedicated mitigation hardware, which could become bottlenecks in an attack, we wrote software that uses BGP, the fundamental routing protocol of the Internet, to distribute load geographically and also within each data center in our network. Critical to our model: every server in every rack is able to answer every type of request. Our software dynamically allocates load based on what is needed for a particular customer at a particular time. That means that we automatically spread load across literally tens of thousands of servers during large attacks.
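The "every server answers every request" idea can be sketched with a toy flow-hashing function. This is only an illustration of the general principle: because any server can serve any request, a deterministic hash over a connection's addresses spreads load evenly, and adding servers linearly adds capacity. Cloudflare's actual software uses BGP and is far more dynamic; all names and numbers below are invented.

```python
import hashlib

def pick_server(servers, src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Deterministically map a flow's 5-tuple onto one of the servers.

    ECMP-style spreading: the same flow always lands on the same
    server, and flows are spread roughly evenly across the fleet.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return servers[digest % len(servers)]

# A hypothetical rack of eight identical commodity servers.
servers = [f"server-{i}" for i in range(8)]

# The same flow is always handled by the same server:
assert pick_server(servers, "203.0.113.5", 44123, "198.51.100.1", 443) == \
       pick_server(servers, "203.0.113.5", 44123, "198.51.100.1", 443)
```

Note what happens during an attack under this scheme: a botnet's many distinct source addresses hash to many different servers, so no single box becomes the choke point that a dedicated mitigation appliance would be.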

Graphene: a simple architecture that’s 100 times stronger than the best steel (credit: Wikipedia)

It has also meant that we can cost-effectively continue to invest in our network. If Frankfurt needs 10 percent more capacity, we can ship it 10 percent more servers rather than having to make the step-function decision of whether to buy or build another Colossus Mega Scrubber™ box.

Since every core in every server in every data center can help mitigate attacks, it means that with each new data center we bring online we get better and better at stopping attacks nearer the source. In other words, the solution to a massively distributed botnet is a massively distributed network. This is actually how the Internet was meant to work: distributed strength, not focused brawn within a few scrubbing locations.

How We Made DDoS Mitigation Essentially Free

The efficient use of resources isn't limited to capital expenditures; it extends to operating expenditures too. Because we use the same equipment and networks to provide all the functions of Cloudflare, we rarely have any additional bandwidth costs associated with stopping an attack. Bear with me for a second, because, to understand this, you need to understand a bit about how we buy bandwidth.

We pay for bandwidth from transit providers on an aggregated basis billed monthly at the 95th percentile of the greater of ingress vs. egress. Ingress is just network speak for traffic being sent into our network. Egress is traffic being sent out from our network.

In addition to being a DDoS mitigation service, Cloudflare also offers other functions including caching. The nature of a cache is that you should always have more traffic going out from your cache than coming in. In our case, during normal circumstances, we have many times more egress (traffic out) than ingress (traffic in).

Large DDoS attacks drive up our ingress but don't affect our egress. However, even in a very large attack, it is extremely rare that ingress exceeds egress. Because we only pay for the greater of ingress vs. egress, and because egress is always much higher than ingress, we effectively have an enormous amount of zero-cost bandwidth with which to soak up attacks.
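The billing model above can be made concrete with a toy calculation. The sample values and rate below are entirely made up; the point is only that when egress dwarfs ingress, an attack-driven ingress spike leaves the bill untouched.

```python
def p95(samples):
    """Simple 95th percentile: the top 5% of samples is dropped."""
    ordered = sorted(samples)
    return ordered[int(len(ordered) * 0.95) - 1]

def monthly_bill(ingress, egress, rate_per_mbps):
    """Bill the 95th percentile of the greater of ingress vs. egress."""
    return max(p95(ingress), p95(egress)) * rate_per_mbps

# Illustrative numbers: 100 traffic samples for the month, in Mbps.
ingress = [100] * 100            # traffic into the network
egress = [500] * 100             # cache responses out: far larger
base = monthly_bill(ingress, egress, rate_per_mbps=1.0)

# An attack spikes ingress, but it stays below egress, so the bill
# (driven by whichever direction is greater) does not change at all.
ingress_attack = [100] * 95 + [450] * 5
assert monthly_bill(ingress_attack, egress, rate_per_mbps=1.0) == base
```

Note the double cushion: short bursts fall outside the 95th percentile anyway, and even a sustained spike is free as long as it stays under the egress curve that is being billed regardless.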

As use of our services increases, the amount of capacity to stop attacks increases proportionately. People wonder how we can provide DDoS mitigation at a fixed fee regardless of the size of the attack; the answer is because attacks don't increase the biggest of our unit costs. And, while legacy providers have stated that their offering pro bono DDoS mitigation would cost them millions, we’re able to protect politically and artistically important sites against huge attacks for free through Project Galileo without it breaking the bank.

Winning the Arms Race

Cloudflare is the only DNS provider that was designed, from the beginning, to mitigate large scale DDoS attacks. Just as DDoS attacks are by their very nature distributed, Cloudflare’s DDoS mitigation system is distributed across our massive global network.

There is no doubt that we are in an arms race with attackers. However, we are well positioned technically and economically to win that race. Against most legacy providers, attackers have an advantage: providers' costs are high because they have to buy expensive boxes and bandwidth, while attackers' costs are low because they use hacked devices. That’s why our secret sauce is the software that spreads our load across our massively distributed network of commodity hardware. By keeping our costs low we are able to continue to grow our capacity efficiently and stay ahead of attacks.

Today, we believe Cloudflare has more capacity to stop attacks than the publicly announced capacity of all our competitors — combined. And we continue to expand, opening nearly a new data center a week. The good news for our customers is that we’ve designed Cloudflare in such a way that we can continue to cost effectively scale our capacity as attacks grow. There are limits to any service, and we remain ever vigilant for new attacks, but we are confident that our architecture is ultimately the right way to stop whatever comes next.

PS - Want to work at our scale on some of the hardest problems the Internet faces? We’re hiring.

Stefanauss (2703 days ago): Cloudflare pays its upstream providers only for the greater of its ingress and egress traffic. Since it sends out more traffic than it receives even during attacks, mitigating attacks is economically sustainable.

Fixing the IoT isn't going to be easy

A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there's no real amount of diversification that can fix this - malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There's no way anybody can compel a recall, and even if they could it probably wouldn't help. If I've paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That's probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

We're left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn't necessarily help that much - if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we're able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we'd need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn't take terribly strong skills to identify this at import time and block a shipment, so the "obvious" answer is to set up forces in customs who do a security analysis of each device. We'll ignore the fact that this would be a pretty huge set of people to keep up with the sheer quantity of crap being developed and skip straight to the explanation for why this wouldn't work.

Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form "BackdoorPacketCmdLine_Req" and then executed the rest of the payload as root? A portscan's not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match "IP camera" right now, so you're going to need 99 more of me and a year just to examine the cameras. And that's assuming nobody ships any new ones.

Even that's insufficient. Ok, with luck we've identified all the cases where the vendor has left an explicit backdoor in the code[2]. But these devices are still running software that's going to be full of bugs and which is almost certainly still vulnerable to at least half a dozen buffer overflows[3]. Who's going to audit that? All it takes is one attacker to find one flaw in one popular device line, and that's another botnet built.

If we can't stop the vulnerabilities getting into people's homes in the first place, can we at least fix them afterwards? From an economic perspective, demanding that vendors ship security updates whenever a vulnerability is discovered no matter how old the device is is just not going to work. Many of these vendors are small enough that it'd be more cost effective for them to simply fold the company and reopen under a new name than it would be to put the engineering work into fixing a decade old codebase. And how does this actually help? So far the attackers building these networks haven't been terribly competent. The first thing a competent attacker would do would be to silently disable the firmware update mechanism.

We can't easily fix the already broken devices, we can't easily stop more broken devices from being shipped and we can't easily guarantee that we can fix future devices that end up broken. The only solution I see working at all is to require ISPs to cut people off, and that's going to involve a great deal of pain. The harsh reality is that this is almost certainly just the tip of the iceberg, and things are going to get much worse before they get any better.

Right. I'm off to portscan another smart socket.

[1] UDP "connection refused" messages (ICMP port unreachable) are typically rate-limited to one per second, so it'll take almost a day to do a full UDP portscan, and even then you have no idea what the service actually does.
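The "almost a day" figure follows directly from that rate limit: 65,535 UDP ports at roughly one ICMP-throttled probe per second. A quick sanity check:

```python
# Sanity-check the footnote's estimate: ICMP "port unreachable" replies
# are typically rate-limited to about one per second, so a naive full
# UDP sweep is serialized at roughly one port per second.
UDP_PORTS = 65535
PROBES_PER_SECOND = 1

scan_hours = UDP_PORTS / PROBES_PER_SECOND / 3600
print(f"{scan_hours:.1f} hours")  # prints "18.2 hours" -- the better part of a day
```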

[2] It's worth noting that this is usually leftover test or debug code, not an overtly malicious act. Vendors should have processes in place to ensure that it isn't left in release builds, but ah well.

[3] My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn't a good sign.


UK: Democracy or Ordeal?


When an English politician resigns, everyone in Italy starts talking about resignation as a symbol and symptom of democracy, even though, honestly, the people writing this never seem to have any logical theory to propose. The link between democracy and the necessity of resignation is, in fact, poorly substantiated and not very consistent.

As you know, the losers of the last election will resign, and so everyone stands around admiring England as it gives "proof of democracy". In reality this culture comes from the Germanic origin of the population, specifically from the idea of the ordeal, which derives from the Germanic Ur-Teil. The ordeal was a trial which, when conducted as a duel, assigned guilt to the loser: rather than weighing the evidence, people believed that justice was a physical force of the universe, if not a divine manifestation, which is why, in case of doubt, the accused was subjected to a difficult trial, in the conviction that if innocent he would pass it. One such trial could be the Zweikampf, the two-man combat, that is, the duel. In this way and for this reason, defeat came to be associated with a guilt to be expiated. The ordeal remained law in the British common-law system until the nineteenth century, when duels were banned.

Substitute the voting populace for divine law, and some responsibility to be expiated through resignation for the sentence, and the reason Cameron's challengers are resigning becomes clear.

In the ordeal, in fact, the one defeated in the duel (when it was conducted as a duel) was a specific individual: this identified in him, and him alone, the person opposed to God, and consequently made him the sole guilty party. In the same way, it would be impossible to say that when an entire party loses an election the leader is the only one responsible.

And yet that is exactly what is said, in almost every country whose law derives from Norman or Germanic law, Germany included. Let's look at the arguments of those who defend these resignations.

The challenger must resign because the party lost under his leadership

Claiming that UKIP lost because of Farage's leadership is a touch suspect: both because the English electoral system is first-past-the-post, and therefore turns on which candidate wins in a given constituency, and because Farage's subordinates produced a "whirlwind of nonsense" next to which reading Salvini feels like reading Cartesian philosophy. I'm surprised they didn't quote Evola, but I don't follow the English press much. Besides, Farage is a brilliant orator, the only gift he has, while his henchmen, whenever they open their mouths, sound like something out of a McCarthyist-monarchist delirium.

Moving to Germany, accusing Steinbrück of being responsible for the defeat, when the SPD lost mainly on the ground, both in the distribution of votes and in the (federal) structure of the state, is simply ridiculous. A defeat like that should, rationally, lead to an internal purge of the party: a single person cannot, even deliberately, cause the eclipse of EVERY candidate across the country.

The party leader must resign because by losing he showed he lacked the ability to understand the country

At this point it would make sense to ask who on earth put him in that position. Wasn't it, by any chance, a party committee, or an executive board, or a process called "primaries"? And why, then, is the accusation of not having understood the real country not also leveled at those who nominated and acclaimed the candidate?

How is it that he failed to understand the real country, while those who elected him thought he had understood it? How is it that those who elected this figure bear no responsibility, and how is it that the deputy, who will almost inevitably take his place and who was also there during the election, wasn't put in charge right away?

If elections demonstrated that a person had failed to understand the real country, the resignations should also extend to his executive board, to those who elected him, and to the hierarchy that believed in him. Were those who believed in the wrong man in the right?

The candidate must resign because his political platform was rejected by the voters.

This strange claim seems to forget that the entire party signed up to the electoral manifesto. Short of having the head of the Ku Klux Klan leading the Black Panthers, how is it that nobody noticed the glaring flaws in the candidate's platform before the defeat?

If the candidate's ideas were wrong, if his vision was the mistaken one and/or if his platform was not what the country needed, had those who backed the inadequate platform got everything right? How can one claim that his party's officials bear no blame?

And let's even grant that the platform was completely wrong: there still remains that 20-25% of people who voted for the party. Should they resign too, for having believed in the wrong platform?

How can we claim that the leader is responsible for a wrong platform while everyone else who backed and voted for it is innocent?

All these excuses are pathetic. In general, any accusation that can be leveled at the leader can also be leveled at the party's nomenklatura, and should also be leveled at the voters, even a minority of them, who nonetheless voted for the party with the "wrong" candidate/platform/ideal.

The real reason behind the resignations of losing leaders lies not in modern democracy but in the barbaric legacy of the ordeal

Only by turning elections into an ordeal by duel, into the Germanic Ur-Teil of the Zweikampf, which later flowed into England with the Saxons, can all the blame be unloaded onto the duel's participant: on him, and no one else, has the thumbs-down of the god-people fallen.

The English are certainly better at selling the ordeal, just as they are good at selling almost all their archaic nonsense as "the cradle of democracy". A country without a constitution sells itself as the cradle of the rule of law, a monarchy sells itself as the cradle of parliamentary democracy, and a country whose King considers himself defensor fidei sells itself as secular and even-handed.

At least the Germans look for excuses. When Steinbrück stepped aside in favor of Gabriel, it was said that, since Gabriel had to enter government with Merkel, it would be absurd for the man Merkel had defeated to do so. But Merkel didn't defeat only Steinbrück; Gabriel too took his drubbing from the Chancellor. Still, at least on the surface the excuse holds, and almost looks like a pragmatic criterion: if you were defeated, you don't go into government.

The ridiculous thing about English self-congratulation, by contrast, is that they strive to sell the ordeal not as an empirical criterion but as the highest example of democracy. The voters who during the campaign voted for precisely THAT leader, minority though they were, voted for HIM, and not for his successor: where is the democracy in removing from the top precisely and exactly the only person voted for at the last election, albeit by a minority, BUT STILL BY THE PARTY'S ONLY VOTERS, OFTEN BY ALL OF THAT PARTY'S VOTERS?

The world has already watched the English sell the remains of a colonial empire as the "Commonwealth", a monarchy with all the traits of popular superstition as a "modern Western democracy", and customary law without a constitution as the "rule of law"; so now we must all swallow the chewed-over leftovers of the medieval ordeal as "democracy".

And I don't believe any Italian journalist will ever shed his inferiority complex toward whatever happens to be fashionable in London: the provincial always mistakes the latest fashion for modernity. London.

So repeat after me:

Monarchy is Republic.
The Defender of the Faith is Secular.
War is Peace.
Orange is the new black.
Ignorance is Slavery.
The ordeal is Modernity.
