Saturday, November 10, 2018

Have you checked whether your passwords are in a breach database?

Mozilla has formally launched Firefox Monitor, a privacy-engineered website that hooks up to Troy Hunt’s Have I Been Pwned? (HIBP) breach notification database.

The site – which despite the Firefox tag is open to anyone – can be used either to check an email address against known breaches, or to register for breach notification should that address be detected in future breaches logged by HIBP.

Both of these things can already be done from the main HIBP website, which raises the question: what does Firefox Monitor do that HIBP doesn’t?

There are several answers. The first is that connecting HIBP to a site run by Mozilla and branded under the Firefox name will promote breach checking and notification to a larger audience.
Expanding the numbers signing up for notification from the hundreds of thousands to the millions would be a major advance for breach detection, not least because HIBP has a record of detecting breaches before some breached companies do. (The Disqus breach of 2012 data, for example, only surfaced years later through HIBP.)

The earlier a user hears that their email address is part of a breached cache of data, the sooner they can change their password. Until HIBP, that might have been years after the address entered the public domain – or perhaps never.

A second reason has to do with Mozilla’s interest in integrating breach notification into the Firefox browser itself, a logical next step which has already been taken by the password management tool 1Password.

It’s not clear what progress Mozilla is making towards this, although a developer involved with the project, Matt Grimes, made the following comment in his overview of Firefox Monitor’s origins:

A hidden technical challenge for services like HIBP and its partners is how to let internet users enter searches (for breached email addresses, or even specific passwords, which HIBP can also check) as unsalted hashes without this becoming a secondary privacy risk.

In theory, the search could be entered as a salted hash but that would greatly increase the computational demands when coping with large numbers of queries.

The answer proposed by the company that hosts the Firefox Monitor service on Mozilla’s behalf, Cloudflare, is a mathematical idea called k-anonymity. The company offers a full description of how this works, but the essence is as follows:

The website generates a local hash of the given email address using SHA-1 but sends only the first six characters to HIBP via its API. This returns a list of hashes containing the queried prefix (around 477 on average), each of which is compared to the full local hash. If a match is found, that email address or password is in the database and has been breached.

Instead of seeking to salt hashes to the point at which they are unique, we instead introduce ambiguity into what the client is requesting.
Both sides benefit: the user keeps the full hash to themselves, while HIBP doesn’t have to return more hashes than it needs to. For extra security against attackers, Firefox Monitor doesn’t store anything in its database and caches the user’s results in an encrypted session.
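The same k-anonymity flow can be sketched against HIBP’s public Pwned Passwords range API (note that this password endpoint uses a five-character prefix; the function names below are my own, not part of any official client):

```python
import hashlib
import urllib.request

def split_hash(password: str) -> tuple[str, str]:
    """Split the SHA-1 hash into the 5-char prefix sent to the
    server and the suffix that never leaves the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def times_pwned(password: str) -> int:
    """Ask the Pwned Passwords range API how often this password
    appears in known breaches, without revealing the password."""
    prefix, suffix = split_hash(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    # Response is lines of "SUFFIX:COUNT"; the comparison with the
    # full local hash happens entirely on the client side.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

The server only ever sees the five-character prefix, so it learns that the query was one of several hundred possible hashes, never which one.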

Can this be leveraged in your web/mobile applications to check whether your users’ data is in a breach database?

Yes. GitHub and other products have started utilizing this to notify users, who can then change their passwords.

Monday, August 27, 2018

Does 2FA solve the problem?

There is a lot of discussion about 2FA and how it will enhance everyone’s security. Indeed, it is an improvement, and it was about time we started becoming more conscious of the security issues related to authentication.
As usual, though, these discussions generate a lot of confusion and “why didn’t it work” moments when we see cases such as Reddit’s hack in August 2018.

Since this is becoming more and more confusing to many people (including security professionals), I decided to go over the terminology and explain the several ways in which any additional security added by 2FA can actually be bypassed.


Let’s make it simple: what is what.

  • xFA: x-factor authentication, where x is usually 2 or M[ultiple]. The usual clarification is that each factor may be one of:
    • something you know: e.g. a password or a pin. That is usually called a “knowledge factor”.
    • something you have: e.g. a hardware device such as RSA SecurID. That is usually called a “possession factor”.
    • something you are: e.g. biometrics such as fingerprint, iris recognition etc. That is usually called an “inherent factor”.
  • 2SV: two step verification, which means that there is a verification step after the initial authentication.
  • MITM: Man in the middle. You will see why it’s important.
  • OTP: one time password; typically code generators.

As should be obvious, these terms have different meanings; they are neither dependent on each other nor mutually exclusive.
One may have two factors and use them in a single step, which is 2FA but not 2SV; or use the second factor as a verification step, in which case it is both 2FA and 2SV. Finally, one may use two passwords, one after the other, which is 2SV but not 2FA.
If you look at the image at the top, which comes from NIST’s site, you may notice that in the depicted use case the user provides the username and password (first factor) first; the second factor is provided before the user logs in. That is a typical two-step verification, and probably a two-factor authentication scenario too.
And although 2FA should in theory be more secure than 2SV, how secure it actually is depends on the implementation.


Based on several recent attacks, the use of SMS as an authentication factor is under scrutiny. That is not surprising, because most security professionals know that SMS can be intercepted, or a SIM card can be cloned or swapped without the owner’s knowledge. Whoever is promoting the idea that SMS is something you have (a possession factor) is in trouble: if it can be intercepted or cloned, there is no guarantee that you have it, or that you are the only one to have it. So SMS is considered an unreliable second factor, as it does not establish possession. That does not make it a different kind of factor, though; it is still in the general area of a possession factor (it is definitely not a knowledge or inherent factor), just a quite unreliable one.


We usually refer to one-time code generators (hardware devices, mobile phone applications or computer applications) as potential possession factors. What is the difference from SMS? While an SMS must be triggered, an OTP device continuously generates codes based on a “seed”; the server can confirm a code because it knows that the seed in my device can only generate that number at that specific time.
And while the “triggering” of the SMS is a problem, since it is susceptible to interception, the “hardcoding” of the seed is a different problem with a similar vulnerability.
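The seed-based scheme described above is standardized as HOTP/TOTP (RFC 4226/RFC 6238); a minimal sketch of the construction, not any specific vendor’s device:

```python
import hashlib
import hmac
import struct
import time

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over a counter, dynamically truncated
    to a short numeric code."""
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(seed: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238: the counter is derived from wall-clock time, which
    is why server and device agree without exchanging any message."""
    return hotp(seed, int(time.time()) // step, digits)
```

Anyone who copies the seed can generate the same codes, which is exactly the risk discussed above.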

"There is no guarantee that I have my hardware device in my hands at the moment I try to authenticate"

Your device may be stolen; or, if it is a computer or phone application, the seed may have been stolen and used in a different computer or phone application. No attack has succeeded in copying a seed so far, but that does not mean it cannot happen. In fact, some implementations allow you to store the seed somewhere unprotected, making such an attack quite feasible and easy.


There exist 2FA implementations that expect you to verify the login transaction by answering a phone call, or by accepting it in a mobile app with the click of a button. Instead of having the seed in your phone application, you pre-register your phone number or device; the server then calls or triggers it so you can verify that you authorize the login action.

If the victim’s phone is stolen, whether they use phone-call verification or OTP generators is irrelevant. In that case, though, the attacker cannot just copy the seed; other verification mechanisms are involved, making the mobile application a tad more secure.


If you think that 2SV and 2FA will save you from credential compromise, think again. They may save you in some cases, but credential compromise usually happens through phishing, and it is quite easy to bypass 2SV and 2FA based on the factors discussed so far by turning the phish into a typical MITM attack. Here is a typical scenario for the 2SV/2FA options (it is even simpler for 2FA without 2SV):

  1. The victim receives a phishing email for Service X.
  2. They click on the link and land on a login page like the one they expect for Service X, only this time it’s the attacker’s server.
  3. The victim enters their username and password in the login page.
  4. The attacker relays (automatically or manually) the username and password to the original service.
  5. The attacker redirects the victim to a different page where the second factor is required.
  6. The victim types in the second factor. At this point it doesn’t make any difference whether a hardware device or an SMS message provides the second factor, nor whether it is OTP-based or triggered.
  7. The attacker uses this second factor to log in to the original Service X as the victim.

All OTP-based and SMS-based second factors are equally susceptible to this attack. The attacker does not even need to intercept SMS messages, or steal or clone any devices or seeds. All the attacker needs to do is phish the user.

The same goes for challenge-based messages. If the victims think they are logging in to Service X, they will accept the challenge and verify the login in the relevant application.

Unlike typical phishing though, the attacker does not get persistent access to the compromised service. But that’s a different thing; the attacker now has an authorized session and we shouldn’t be in this situation at all. 

 So 2FA/2SV is not good?

That would be a very wrong statement. 2FA/2SV is much better than single-factor authentication. It definitely has its value, but it may need to get better to provide better value.
Solutions exist, and they are not even very technically challenging to implement, although they are not widely deployed due to usability and standardization issues.


That is a simple variation of the possession-factor idea: the device from which you try to reach the service has to be known and pre-registered. This turns your device into a possession factor and takes care of MITM attacks, although it still leaves us open to stolen-device attacks.
It is usually based on certificates on the device. Certificates are a well-known security technique that has proven quite efficient for several use cases. WebAuthn is a great step towards standardizing and modernizing this idea, extending it (for usability reasons) so that it is not limited to the currently used device.

Challenge / Response is a very old idea, used in computers for ages. In this case it means that instead of the device generating OTP codes based on a seed, the codes are generated as a response to a challenge issued by the server. The challenge may be sent before authentication (making it a single step multifactor authentication) or after the first authentication (making it a two-step verification, multifactor authentication). And this is what the verification applications do.

The problem with that is for the challenge/response to make sense, the challenge has to be - as much as possible - linked to something unique. You cannot just issue a challenge for the login process; it’s too generic and as explained, unreliable.

You may issue a challenge for a bank transaction though, as many banks do (through SMS). That is a uniquely generated challenge based on the transaction details. The best solution in that case would be to have an application on your end; a dumb device that generates codes from a seed won’t do.
Obviously, this can only be used for critical transactions. We cannot expect users to re-authenticate every time they wish to send an email, which brings us back to the authorized device, or a combination of the two.
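A transaction-bound challenge/response of this kind could be sketched as follows. This is a hypothetical illustration with made-up function names, not any bank’s actual protocol; the point is that the code covers the exact transaction, so a phished code cannot authorize a different one:

```python
import hashlib
import hmac
import secrets

def make_challenge(tx_details: str) -> tuple[str, bytes]:
    """Server side: bind a fresh nonce to the exact transaction text."""
    nonce = secrets.token_hex(8)
    return nonce, f"{nonce}|{tx_details}".encode()

def respond(shared_key: bytes, challenge: bytes, digits: int = 6) -> str:
    """Client side: derive a short code from the shared key and the
    challenge, so the code is useless for any other transaction."""
    mac = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 10 ** digits).zfill(digits)

def verify(shared_key: bytes, challenge: bytes, code: str) -> bool:
    """Server side: recompute and compare in constant time."""
    return hmac.compare_digest(respond(shared_key, challenge), code)
```

Even if an attacker phishes the code, replaying it against a challenge for their own transaction fails, because the MAC covers the transaction details.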

 Things to remember:

  • User awareness: always a necessary step.
  • Perimeter (WAF) technologies to filter out phishing sites should be on everybody’s agenda.
  • Endpoint (mobile or not) protection: do not leave your phone without a PIN if it has an OTP generator, for example.
  • Physical security: do not leave your hardware devices lying around, and do not keep them in your computer bag.



Wednesday, August 22, 2018

Are you serious about year-old WhatsApp chat data?

WhatsApp chats, photos and videos will no longer pile up in your Google Drive. Starting 12 November, your quota won’t budge due to that particular backup bucket.

The catch: if you want to save your old WhatsApp content, you better back it all up before 12 November. That’s when WhatsApp will sweep out dusty old backups that haven’t been updated in more than a year.
Due to a new agreement between WhatsApp and Google, WhatsApp backups will no longer count against Google Drive storage quota, Google explained in an email to users.

Google will allow people to back up their chats into its servers for free, meaning that messages will be protected.

But there is a catch: any old chats that haven’t been backed up recently will be removed from backups at the same time.

The change will go into effect on 12 November, and all the manual backups need to be done before then.

How do I set up automatic backups of my messages:

  1. Open WhatsApp.
  2. Tap Menu > Settings > Chats > Chat backup.
  3. Tap Back up to Google Drive and select a backup frequency other than Never.
  4. Select a Google account that you’ll back up your chat history to. If you don’t have a Google account, tap Add account when prompted and enter your login credentials. WhatsApp notes that you should remember which Google account you’re using for the backup; you’ll need it to get back into the backup.
  5. Tap Back up over to choose the network you want to use for backups. Note that backing up over a cellular data network might result in additional data charges.

Then again, you can manually back up your WhatsApp account to Google Drive at any point:

  1. Open WhatsApp.
  2. Tap Menu > Settings > Chats > Chat backup.
  3. Tap BACK UP to begin your Google Drive backup. The process might take a while, depending on how much data you need to back up.

To get at your new backup, you’ll need to go through the same Google account that you used in the first place. If you need to install or reinstall WhatsApp on your phone, the app will prompt you to restore the backup.

For more details, see the official instructions.

Saturday, July 21, 2018

Defending the iPhone Restricted Mode

Recently, Apple introduced restricted mode to protect iPhones from attacks by companies like Cellebrite and Greyshift, which allow attackers to recover information from a phone without the password or fingerprint. Elcomsoft just announced that it can easily bypass it.

There is an important lesson in this: security is hard. Apple has one of the best security teams on the planet.
This feature was not tossed out in a day; it was designed and implemented with a lot of thought and care. If this team could make a mistake like this, imagine how bad a security feature is when implemented by a team without this kind of expertise.

This is the reason actual cryptographers and security engineers are very skeptical when a random company announces that its product is "secure." We know they don’t have the requisite security expertise to design and implement security properly, that they didn’t take the time and care, and that their engineers think they understand security because they designed something they themselves couldn’t break.

Getting security right is hard for the best teams in the world. It’s impossible for average teams.

The Emerging WPA3

Everyone is talking about the new WPA3 Wi-Fi security standard and how it improves security over the current WPA2 standard.
The gist is as follows:
The first big new feature in WPA3 is protection against offline, password-guessing attacks. This is where an attacker captures data from your Wi-Fi stream, brings it back to a private computer, and guesses passwords over and over again until they find a match. With WPA3, attackers are only supposed to be able to make a single guess against that offline data before it becomes useless; they'll instead have to interact with the live Wi-Fi device every time they want to make a guess. (And that's harder since they need to be physically present, and devices can be set up to protect against repeat guesses.)
WPA3's other major addition, as highlighted by the Alliance, is forward secrecy. This is a privacy feature that prevents older data from being compromised by a later attack. So if an attacker captures an encrypted Wi-Fi transmission, then cracks the password, they still won't be able to read the older data -- they'd only be able to see new information currently flowing over the network.
Be aware that we are only just getting the new standard; actual devices that implement it are still months away.

Wednesday, May 23, 2018

Yet another CPU Vulnerability

Google and Microsoft researchers have disclosed another Spectre-like CPU side-channel vulnerability, called "Speculative Store Bypass." Like the others, the fix will slow the CPU down.
The German tech site Heise reports that more are coming.
We'll be seeing a lot more of these sorts of vulnerabilities.
Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown.
We'll be expecting lots more of these in the coming months and years, as we learn more about this class of vulnerabilities.

Sunday, May 20, 2018

Sending Silent Commands to Voice Assistants

Researchers have demonstrated the ability to send inaudible commands to voice assistants like Alexa, Siri, and Google Assistant.

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands, undetectable to the human ear, to Apple's Siri, Amazon's Alexa and Google's Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online, simply with music playing over the radio.
A group of students from the University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon's Echo speaker might hear an instruction to add something to your shopping list.

Some malicious code can automatically record you through your camera and later demand a ransom; if you refuse to pay, the attackers may threaten you with a mail like the one below.


Ticket Detаils: Vxx-xxx-xxxxx
Camera ready,Notification: 2x/xx/2018 05:xx:xx 
Status: Waiting for Reply 2xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_Priority: Normal 


If you were more watchful while caress yourself, I wouldn't write dis message. I don't think that playing with yourself is really bad, but when all colleagues, relatives and friends receive video record of it- it is obviously for you.

I placed malisious soft on a porn site which was visited by you. When the target press on a play button, device starts recording the screen and all cameras on your device starts working.

Moreover, my program makes a remote desktop supplied with keylogger function from your device , so I could get all contacts from ya e-mail, messengers and other social networks. I'm writing on dis e-mail because It's your working address, so u should read it.

I suppose that three hundred usd is pretty enough for this little misstep. I made a split screen video(records from screen (u have interesting tastes ) and camera ooooooh... its funny AF)

So its ur choice, if u want me to erase ur disgrace use my bitcоin wallet аddress- 1wxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  
You have one day after opening my message, I put the special tracking pixel in it, so when you will open it I will see.If ya want me to show u the proofs, reply on this letter and I will send my creation to five contacts that I've got from ur contacts.

P.S.. U can try to complain to police, but I don't think that they can help, the investigation will last for several months- I'm from Estonia - so I dgf lmao 


Beware and stay secure 

Friday, May 11, 2018

Panicking over RDP to a server throwing a CredSSP encryption oracle remediation error?

I’ve seen that people are no longer able to connect over RDP to their clients or servers, and I also got a call asking for help with such an issue. The moment I saw the error message, it rang home that this was the known and documented CredSSP encryption oracle remediation issue, which is both preventable and fixable.

KB4103727 (Windows 10 1709)
KB4103723 (Windows 10 1607 / Server 2016)
KB4103725 (Windows 8.1 / Server 2012 R2)
KB4103718 (Windows 7 / Server 2008 R2)

If you can't update your servers right away (the update requires a reboot), you can add the policy below for your clients, send it out via GPO, and all it takes is a forced gpupdate.

Workaround on the RDP client machine:
Go to the local policies:
Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Encryption Oracle Remediation

Enable the setting and set it to 'Vulnerable'.
Follow that link and it will tell you all you need to know to fix it and how to avoid it.
A remote code execution vulnerability (CVE-2018-0886) exists in unpatched versions of CredSSP. This issue was addressed by correcting how CredSSP validates requests during the authentication process.

The initial March 13, 2018, release updates the CredSSP authentication protocol and the Remote Desktop clients for all affected platforms.
Mitigation consists of installing the update on all eligible client and server operating systems and then using included Group Policy settings or registry-based equivalents to manage the setting options on the client and server computers. We recommend that administrators apply the policy and set it to  “Force updated clients” or “Mitigated” on client and server computers as soon as possible.  These changes will require a reboot of the affected systems. Pay close attention to Group Policy or registry settings pairs that result in “Blocked” interactions between clients and servers in the compatibility table later in this article.
April 17, 2018:
The Remote Desktop Client (RDP) update in KB 4093120 enhances the error message that is presented when an updated client fails to connect to a server that has not been updated.
May 8, 2018:
An update changing the default setting from Vulnerable to Mitigated (KB4103723 for W2K16 servers, KB4103727 for Windows 10 clients). Don’t forget the vulnerability also exists for W2K12 (R2) and lower, as well as equivalent clients.

The key here is that the May updates change the default for the new policy setting to Mitigated.
Microsoft released new Windows security updates to address this CVE on May 8, 2018. The updates released in March did not enforce the new version of the Credential Security Support Provider protocol; these security updates make the new version mandatory. For more information see “CredSSP updates for CVE-2018-0886”.
This can result in mismatches between systems at different patch levels, which is why it’s now a more widespread issue. Looking at the table in the article and the documented errors, it was clear enough there was a mismatch, and also clear how to fix it: patch all systems and make sure the settings are consistent. Use GPO or edit the registry settings to do so; automation is key here. Uninstalling the patch works but is not a good idea, because this vulnerability is serious.
Now, Microsoft did warn about this change; you can even read about it on the PFE blog. Nevertheless, many people seem to have been bitten by this one. I know it’s hard to keep up with everything moving at the speed of light in IT, but this is one I was on top of, because the fix is for a remote vulnerability in RDS. That’s a big deal and not one I was willing to let slide. You need to roll out the updates, configure your policy, and make sure you’re secured. The alternative (rolling back the updates and allowing vulnerable connections) is not acceptable: you’d be vulnerable to a known and fixable exploit. Take your medicine! Read the links above for detailed guidance on how to do this. Set your policy on both sides to Mitigated; you don’t need to force updated clients to fix the issue this way, and you can patch your servers first, followed by your clients. Do note the tips on doing this in the PFE blog:
Note: Ensure that you update the Group Policy Central Store (Or if not using a Central Store, use a device with the patch applied when editing Group Policy) with the latest CredSSP.admx and CredSSP.adml. These files will contain the latest copy of the edit configuration settings for these settings, as seen below.
Path: HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters
Value: AllowEncryptionOracle
Date type: DWORD
Reboot required: Yes
Here are the registry settings you need to make sure connectivity is restored:

Everything patched: 0 => when everything is patched, including third-party CredSSP clients, you can use “Force updated clients”.
Server patched but not all clients: 1 => use “Mitigated”; you’ll be as secure as possible without blocking people. Alternatively you can use 2 (“Vulnerable”), but avoid that if possible, as it is more risky.
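Based on the registry path and value documented above, the change can be expressed as a .reg import. This is a sketch (value 1 corresponds to Mitigated); test it before wide deployment, and remember a reboot is still required:

```reg
Windows Registry Editor Version 5.00

; AllowEncryptionOracle: 0 = Force updated clients, 1 = Mitigated, 2 = Vulnerable
[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters]
"AllowEncryptionOracle"=dword:00000001
```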
So, check your clients and servers, both on-premises and in the cloud, to make sure you’re protected and have as few RDS connectivity issues as possible. Don’t forget about third-party clients that need updating too, if you have those!

CredSSP, RDP Fix:

Thursday, April 26, 2018

Are your WebLogic and WebSphere safe from Greystars ransomware?

On April 21, an anonymous hacker used WebLogic and WebSphere deserialization vulnerabilities to deliver Greystars ransomware to about 100 enterprise servers in a "fileless" attack. It encrypted important files on the servers and demanded 0.08 bitcoin, a ransom currently worth about RMB 4,761.

The Greystars ransomware draws on the "fileless" attack technique that has become very popular in recent years. All work is done inside the legitimate Windows process PowerShell: the hackers exploit WebLogic deserialization flaws to attack the server and make it download and run the first-stage payload hosted on Gist. That payload reads an image, also hosted on Gist, with malicious content embedded in it, decrypts the content of the image to obtain the second-stage payload, and finally executes it in the PowerShell process. Figure 1 shows the complete attack flow.

Figure 1 The complete attack process of Greystars ransomware

The first stage attack payload hosting address is
The hackers successfully used the WebLogic deserialization vulnerability to break into the server and execute the command shown in Figure 2 to download the attack payload from the hosting address. Unlike most hackers, who use personal domain names as payload download addresses, the Greystars authors chose Gist to host the payload. The advantage is that Gist’s domain is considered legitimate by most antivirus software and host intrusion prevention systems, so using it as a download address can effectively avoid interception; on the other hand, it also increases the risk of exposing the hacker's identity.
Figure 2 Commands executed after hacking the server

Use "Image Steganography" to Hide Malicious Code
The main function of the first-stage payload is to download the image with embedded malicious code from hxxps:// and extract the malicious code from it for execution. Web.png is a special image: the hackers used the Invoke-PSImage tool to insert malicious code into it. Invoke-PSImage is a PowerShell image-steganography tool developed by security researcher Barrett Adams. It can insert malicious code into the last 4 bits of the two color channels G and B in each pixel of the image. Figure 3 shows the simple working principle of Invoke-PSImage.
Invoke-PSImage takes a PowerShell script and embeds the bytes of the script into the pixels of a PNG image. It generates a one-liner for executing it either from a file or from the web (when the -Web flag is passed).
Figure 3 The simple working principle of Invoke-PSImage
Since the last 4 bits of a color channel do not have much effect on the final rendered pixel, a picture with malicious code embedded through Invoke-PSImage looks almost identical to the original. Hosting such a "normal"-looking image on Gist does not raise suspicion.
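The low-nibble embedding idea can be illustrated with a simplified, pure-Python sketch. This is not Invoke-PSImage itself (which writes into the G and B channels of real PNG pixels); a plain byte array stands in for the pixel channels here:

```python
def embed(carrier: bytes, payload: bytes) -> bytearray:
    """Hide each payload byte in the low 4 bits of two carrier bytes
    (standing in for the G and B channels of one pixel)."""
    if len(payload) * 2 > len(carrier):
        raise ValueError("carrier too small for payload")
    out = bytearray(carrier)
    for i, byte in enumerate(payload):
        out[2 * i] = (out[2 * i] & 0xF0) | (byte >> 4)            # high nibble
        out[2 * i + 1] = (out[2 * i + 1] & 0xF0) | (byte & 0x0F)  # low nibble
    return out

def extract(carrier: bytes, length: int) -> bytes:
    """Recover the payload by reassembling the low nibbles."""
    return bytes(
        ((carrier[2 * i] & 0x0F) << 4) | (carrier[2 * i + 1] & 0x0F)
        for i in range(length)
    )
```

Because only the low 4 bits of each carrier byte change, the visually significant high bits stay intact, which is why the stego image is nearly indistinguishable from the original.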

Encrypt important files in your computer and ask for ransom
The second-stage payload performs the file encryption and ransom demand; it is also written in PowerShell.
For each computer, Greystars generates an AES key for encrypting files and encrypts that AES key with a built-in RSA public key. The RSA public key is stored in a certificate hardcoded into the code and is obtained through the PublicKey property of the .NET X509Certificates class. Because PowerShell can manipulate .NET methods flexibly, Greystars uses this feature to express otherwise cumbersome key generation and cryptography in neat PowerShell. Figure 4 shows the RSA encryption of the AES key by Greystars.

Figure 4: RSA encryption of AES keys by Greystars ransomware virus

Greystars encrypts 422 kinds of file formats, including not only common documents, pictures and database files, but also some of the script files a server needs to run, such as Python, PHP and PowerShell scripts. During encryption, Greystars skips every directory on the C drive except the desktop and documents folders, to keep the system running, and kills database-related processes to ensure the database files are encrypted successfully. Because Greystars uses the sequence "encrypt the original file -> generate a new file -> delete the original file", in some directories with only read-only privileges the encrypted file is never created but the original file is still deleted, so some servers could not recover their files even by paying the ransom. Figure 5 shows the file formats encrypted by Greystars.


All encrypted files are given a "" suffix. The generated ransom note requires the victim to transfer 0.08 bitcoin to the specified address to decrypt the files. Figure 6 shows the ransom note.


Figure 6 Help file information

WebLogic and WebSphere servers have begun to be favored by ransomware
       In 2017, Weblogic broke two serious de-serialization flaws CVE-2017-3248 and CVE-2017-10271, affecting Oracle WebLogic Server,,,, etc. Multiple versions, these two vulnerabilities are also widely used to implant mining Trojans into the server. To date, there are still many servers that have not updated WebLogic.
       In April 2018, AV telemetry observed two ransomware families, Satan and Greystars, attacking WebLogic servers. As the Satan transmission trend in Figure 7 shows, Satan has remained active and has affected over a hundred enterprise servers in the country, while the Greystars ransomware, despite only breaking out on April 21st, still affected nearly 100 corporate servers.

Figure 7: Trend of the spread of Satan ransomware in April
         Why have WebLogic servers become favored by ransomware? There are two main reasons. First, a large number of WebLogic servers remain unpatched, so exploiting these vulnerabilities is easy; for attackers, the ratio of attack revenue to attack cost is very high. Second, such servers are generally owned by enterprises, and the likelihood of a company paying a ransom to recover documents is much higher than that of an individual user. These servers also tend to be unattended or neglected machines: a mining Trojan can run on them without making enough noise to attract the attention of administrators, whereas a ransomware attack disrupts business operations outright, which is why ransomware infections are exposed quickly.

Protection advice
1. Update WebLogic and WebSphere servers to the latest version promptly.
2. Install security software to intercept such viruses.

Mitigation Actions:
Restrict PsExec and WMI access to the servers if they run Windows.
Enforce Software Restriction Policies to protect against ransomware.

Import the rules that are listed below.
Block executable in %AppData%:
  • Path:
    %AppData%\*.exe
  • Security Level:
    Disallowed
  • Description:
    Don’t allow executables to run from %AppData%.
Block executable in %LocalAppData%:
  • Path if using Windows XP:
    %UserProfile%\Local Settings\*.exe
  • Path if using Windows Vista/7/8:
    %LocalAppData%\*.exe
  • Security Level:
    Disallowed
  • Description:
    Don’t allow executables to run from %LocalAppData%.
Block executable in %AppData% subfolders:
  • Path:
    %AppData%\*\*.exe
  • Security Level:
    Disallowed
  • Description:
    Don’t allow executables to run from immediate subfolders of %AppData%.
Block executable in %LocalAppData% subfolders:
  • Path if using Windows XP:
    %UserProfile%\Local Settings\*\*.exe
  • Path if using Windows Vista/7/8:
    %LocalAppData%\*\*.exe
  • Security Level:
    Disallowed
  • Description:
    Don’t allow executables to run from immediate subfolders of %LocalAppData%.
Block executables run from archive attachments opened with WinRAR:
  • Path if using Windows XP:
    %UserProfile%\Local Settings\Temp\Rar*\*.exe
  • Path if using Windows Vista/7/8:
    %LocalAppData%\Temp\Rar*\*.exe
  • Security Level:
    Disallowed
  • Description:
    Block executables run from archive attachments opened with WinRAR.
Block executables run from archive attachments opened with 7zip:
  • Path if using Windows XP:
    %UserProfile%\Local Settings\Temp\7z*\*.exe
  • Path if using Windows Vista/7/8:
    %LocalAppData%\Temp\7z*\*.exe
  • Security Level:
    Disallowed
  • Description:
    Block executables run from archive attachments opened with 7zip.
Block executables run from archive attachments opened with WinZip:
  • Path if using Windows XP:
    %UserProfile%\Local Settings\Temp\wz*\*.exe
  • Path if using Windows Vista/7/8:
    %LocalAppData%\Temp\wz*\*.exe
  • Security Level:
    Disallowed
  • Description:
    Block executables run from archive attachments opened with WinZip.
Block executables run from archive attachments opened using Windows built-in Zip support:
  • Path if using Windows XP:
    %UserProfile%\Local Settings\Temp\*.zip\*.exe
  • Path if using Windows Vista/7/8:
    %LocalAppData%\Temp\*.zip\*.exe
  • Security Level:
    Disallowed
  • Description:
    Block executables run from archive attachments opened using Windows built-in Zip support.
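To see how these wildcard path rules behave, the small Python sketch below checks a candidate executable path against the Vista/7/8 rule patterns. The profile paths and helper names are illustrative assumptions, not part of the SRP tooling, and `*` is treated here as not crossing `\` separators (a common reading of SRP wildcard matching).

```python
import fnmatch

# Sanity-check which SRP-style path rules would match a given executable
# path. Environment variables are expanded against a sample (assumed)
# Vista/7/8 profile; matching is case-insensitive, segment by segment.

ENV = {
    "%AppData%": r"C:\Users\alice\AppData\Roaming",
    "%LocalAppData%": r"C:\Users\alice\AppData\Local",
    "%UserProfile%": r"C:\Users\alice",
}

RULES = [
    r"%AppData%\*.exe",
    r"%AppData%\*\*.exe",
    r"%LocalAppData%\*.exe",
    r"%LocalAppData%\*\*.exe",
    r"%LocalAppData%\Temp\Rar*\*.exe",
    r"%LocalAppData%\Temp\7z*\*.exe",
    r"%LocalAppData%\Temp\wz*\*.exe",
    r"%LocalAppData%\Temp\*.zip\*.exe",
]

def expand(rule):
    """Substitute the sample profile values for the environment variables."""
    for var, value in ENV.items():
        rule = rule.replace(var, value)
    return rule

def path_matches(path, rule):
    """Segment-wise, case-insensitive wildcard match ('*' stays in one segment)."""
    p_parts = path.lower().split("\\")
    r_parts = expand(rule).lower().split("\\")
    return len(p_parts) == len(r_parts) and all(
        fnmatch.fnmatchcase(p, r) for p, r in zip(p_parts, r_parts)
    )

def matching_rules(path):
    return [rule for rule in RULES if path_matches(path, rule)]
```

For example, a dropper at `%AppData%\evil.exe` is caught by the first rule, while an archive-extracted payload under `Temp\Rar…` is caught by the WinRAR rule.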
After everything is imported, the Software Restriction Policies console will show the full list of rules above.

Update your WebSphere 7 with the latest Fix Pack 43, and apply the latest fix packs for the other releases:
WebSphere Application Server traditional V9.0 Fix Pack
WebSphere Application Server traditional V8.5 Fix Pack
WebSphere Application Server traditional V8 Fix Pack