Historical articles

Collecting Electronic Evidence After a System Compromise

[Historical article: first published on August 2nd, 2001]
Author: Matthew Braid, AUSCERT, 2001

Collecting forensic evidence for the purposes of investigation and/or prosecution is difficult at the best of times, but when that evidence is electronic an investigator faces extra complexities. Generally, electronic evidence has none of the permanence of conventional evidence, and it is more difficult to present in a way that can be readily understood. The purpose of this paper is to highlight these difficulties and to suggest strategies to overcome them. Note that no legal advice is given here – different regions have different legislation. This paper will not address everything you need to know for your particular circumstances – it is a guide only. Always seek further information, including legal advice, for your specific circumstances.

Obstacles

Electronic crime is difficult to investigate and prosecute – investigators often have to build their case purely on any records left after the transactions have been completed. Add to this the fact that electronic records are extremely (and sometimes transparently) malleable, and that electronic transactions currently have fewer limitations than their paper-based counterparts, and you get a collection nightmare. Computer transactions are fast – they can be conducted from anywhere, through anywhere, to anywhere; they can be encrypted or anonymous, and they generally have no intrinsic identifying features such as handwriting or signatures to identify those responsible. Any 'paper trail' of computer records they may leave can be easily modified or destroyed, or may exist only temporarily. Worse still, auditing programs may automatically destroy the records when they are finished with them. Because of this, even if the details of the transactions can be retained or restored, it is very difficult to tie a transaction to a person.
Identifying information such as passwords, PINs, or any other electronic identifier will not prove who did it – it merely shows that the attacker knew, or was able to defeat, those identifiers. Currently there is nothing that can be considered a true electronic signature for the purposes of criminal law in the way that DNA or fingerprints serve other criminal investigations. Even though technology is constantly evolving, investigating electronic crime will always be more difficult because data is easily altered and transactions may occur anonymously or deceptively. The best you can do is follow the rules of evidence collection as assiduously as possible.

Why Collect Electronic Evidence?

Given these obstacles, why bother collecting the evidence in the first place? There are two main reasons – future prevention and responsibility.

Future Prevention

Collecting electronic evidence involves investigating how the attack occurred. Without knowing what happened, an organisation remains vulnerable to this type of attack and has little hope of stopping further attacks (including from the original attacker). It would be analogous to being defrauded of a large sum of money and not bothering to determine how the fraud was perpetrated. Even though the cost of collection can be high, the cost of repeatedly recovering from compromises is much higher, in both monetary and corporate-image terms.

Responsibility

There are two responsible parties after an attack – the attacker and the victim. The attacker is responsible for the damage done, and the only way to bring them to justice, to seek recompense and to deter further attacks is to convict them with adequate evidence of their actions. Victims also have an ethical, if not legal, responsibility to the community. Sites that have been compromised and used to launch attacks against third parties may find that they – not the attacker – are sued for liability for the attack.
The grounds for such a lawsuit might be that, by failing to comply with accepted minimum standards in network security, they acted negligently. Public companies have a particular responsibility to their shareholders to ensure that business continuity and data confidentiality and integrity are not compromised. Victims may also have a legal obligation to analyse the evidence collected, for instance if the attack on their system was part of a larger attack. For ethical reasons, some victims may see merit in sharing information gathered after a compromise with others to prevent further attacks.

Collection Options

Once a compromise has been detected, you have two options – pull the system off the network and begin collecting evidence, or leave it online and attempt to monitor the intruder. Both have their advantages and disadvantages. Monitoring may accidentally alert the intruder and cause them to wipe their tracks, destroying evidence as they go. If you disconnect the system from the network, you may later find that you have insufficient evidence or, worse, that the attacker left a 'dead man switch' that destroys any evidence once the system detects that it is offline. How you respond should be based on the situation. The "Collection and Archiving" section below contains information on what to do in each case.

Types of Evidence

Before you start collecting evidence, it is important to know the different categories of evidence. Without taking these into consideration you may find that the evidence you've spent several weeks and quite a bit of money collecting is useless.

Real Evidence

Real evidence is any evidence that speaks for itself without relying on anything else. In electronic terms, this can be a log produced by an audit function, provided that the log can be shown to be free from contamination.

Testimonial Evidence

Testimonial evidence is any evidence supplied by a witness.
This type of evidence is subject to the perceived reliability of the witness, but as long as a witness is considered reliable, testimonial evidence can be almost as powerful as real evidence. Written statements by a witness can be considered testimonial as long as the author is willing to state that they wrote them.

Hearsay

Hearsay is any evidence presented by a person who was not a direct witness. Written statements by someone without direct knowledge of the incident are hearsay. Hearsay is generally inadmissible in court and should be avoided.

The Five Rules of Evidence

For evidence to be considered useful, it must have the following properties:

1. Admissible
This is the most basic rule – the evidence must be usable in court or elsewhere. Failure to comply with this rule is equivalent to not collecting the evidence in the first place, except that the cost is higher.

2. Authentic
If you can't tie the evidence positively to the incident, you can't use it to prove anything. You must be able to show that the evidence relates to the incident in a relevant way.

3. Complete
It is not enough to collect evidence that shows just one perspective of the incident. Not only should you collect evidence that can help prove the attacker's actions, but for completeness you must also consider and evaluate all evidence available to the investigators and retain anything that may contradict or otherwise diminish the reliability of other potentially incriminating evidence held about the suspect. Similarly, it is vital to collect evidence that eliminates alternative suspects. For instance, if you can show the attacker was logged in at the time of the incident, you also need to show who else was logged in and demonstrate why you think they didn't do it. This is called exculpatory evidence and is an important part of proving a case.

4. Reliable
Your evidence collection and analysis procedures must not cast doubt on the evidence's authenticity and veracity.

5.
Believable
The evidence you present should be clear, easy to understand and believable to a jury. There is no point presenting a binary dump of process memory if the jury has no idea what it means. Similarly, if you present a formatted version that a jury can readily understand, you must be able to show its relationship to the original binary; otherwise there is no way for the jury to know whether you faked it.

Using these five rules, we can derive some basic dos and don'ts:

1. Minimise Handling/Corruption of Original Data
Once you've created a master copy of the original data, don't touch it or the original itself – always handle secondary copies. Any changes made to the originals will cast doubt on any analysis later done on the copies. Make sure you don't run programs that modify the access times of files (such as tar and xcopy), remove any external avenues for change, and in general analyse the evidence only after it has been collected.

2. Account for Any Changes and Keep Detailed Logs of Your Actions
Sometimes evidence alteration is unavoidable. In these cases it is absolutely essential that the nature, extent and reasons for the changes be documented. Any changes at all should be accounted for – not just data alteration, but physical alteration of the originals (for instance, the removal of hardware components) as well.

3. Comply with the Five Rules of Evidence
The five rules are there for a reason. If you don't follow them you are probably wasting your time and money. Following these rules is essential to successful evidence collection.

4. Do Not Exceed Your Knowledge
If you don't fully understand what you are doing, it will be more difficult to account for any changes you make and you may not be able to describe exactly what you did. If you find yourself out of your depth, learn more before continuing (if time is available) or find someone who knows the territory.
Never soldier on regardless – you will just damage your case.

5. Follow Your Local Security Policy and Obtain Written Permission
During the course of your investigation you may be required to access and copy sensitive data or obtain statements from system users, in which case there will be staff management issues to consider. Before commencing your investigation, ensure you have obtained written and signed permission to proceed and have clear instructions as to the scope of your investigation. Without clear authority to proceed, your actions may be, or be perceived to be, in breach of your company's security policy, and you may find yourself personally accountable as a result. If in doubt, talk to those who know, including obtaining the necessary legal advice. It is also recommended that your organisation develop appropriate policies and procedures for collecting electronic evidence so that they are in place before an incident occurs. This will significantly streamline the process and save valuable time before evidence is lost.

6. Capture as Accurate an Image of the System as Possible
This is related to point 1 – differences between the original system and the master copy count as a change to the data. You must be able to account for the differences.

7. Be Prepared to Testify
If you're not willing to testify about the evidence you have collected, you might as well stop before you start. Without the collector of the evidence there to validate the documents created during the evidence collection process, they become hearsay and inadmissible. Remember that you may need to testify at a later time.

8. Ensure Your Actions are Repeatable
No one is going to believe you if they can't replicate your actions and reach the same results. This also means that your plan of action shouldn't be based on trial and error.

9. Work Fast
The faster you work, the less likely the data is to change.
Volatile evidence (see below) may vanish entirely if you don't collect it in time. This is not to say you should rush – you must still collect accurate data and keep a record of your actions as you go. If multiple systems are involved, work on them in parallel (a team of investigators would be handy here), but each individual system should still be worked on methodically. Automating certain tasks makes collection proceed even faster.

10. Proceed From Volatile to Persistent Evidence
Some electronic evidence is more volatile than other evidence. Because of this, you should always try to collect the most volatile evidence first.

11. Don't Shut Down Before Collecting Evidence
You should never shut down a system before you collect the evidence. Not only will you lose volatile evidence, but the attacker may have trojaned the startup and shutdown scripts, Plug-and-Play devices may alter the system configuration, and temporary file systems may be wiped. Rebooting is even worse because it may result in further loss of evidence and should be avoided at all costs. As a general rule, until the compromised disk is finished with and restored, it should never be used as a boot disk.

12. Don't Run Any Programs on the Affected System
Since the attacker may have left trojaned programs and libraries on the system, you may inadvertently trigger something that could change or destroy the evidence you're looking for. Any programs you use should be on read-only media (such as a CD-ROM or a write-protected floppy disk) and should be statically linked.

Volatile Evidence

Not all the evidence on a system will last for extended periods of time. Some evidence resides in storage (i.e. volatile memory) only while there is a consistent power supply; other stored evidence is continuously changing. When collecting evidence, always try to proceed from most volatile to least volatile, and from the most critical to the least critical machines/systems.
For example, don't waste time extracting information from an unimportant machine's main memory when an important machine's secondary memory hasn't been examined. To determine what evidence to collect first, draw up an Order of Volatility – a list of evidence sources ordered by relative volatility. An example Order of Volatility would be:

1. Registers and cache
2. Routing tables
3. ARP cache
4. Process table
5. Kernel statistics and modules
6. Main memory
7. Temporary file systems
8. Secondary memory
9. Router configuration
10. Network topology

Once you have collected the raw data from volatile sources, you may be able to shut down the system.

General Procedure

When collecting and analysing evidence, there is a four-step procedure you should follow. Note that this is a very generic outline – it may be necessary to customise the procedures to suit your situation.

Identification of Evidence
You must be able to distinguish between evidence and junk data. For this purpose you should know what the data is, where it is and how it is stored. Once this is done, you will be able to determine the best way to retrieve and store any evidence found.

Preservation of Evidence
The evidence found must be preserved as close as possible to its original state. Any changes made during this phase must be documented and justified.

Analysis of Evidence
The stored evidence must then be analysed to extract the relevant information and to recreate the chain of events. Always be sure that the people analysing the evidence are fully qualified to do so.

Presentation of Evidence
Communicating the meaning of your evidence is vitally important – otherwise you can't do anything with it. The presentation should be technically correct, credible, and easily understood by persons with a non-technical background. A good presenter can help in this respect.
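As an illustration, an Order of Volatility can be turned into a mechanical collection plan. The sketch below is only illustrative – the source names and rankings mirror the example list above and would need to be adapted to your own systems.

```python
# Sketch: encode an example Order of Volatility as a sortable collection
# plan.  The names and rankings below follow the example list in the text
# and are illustrative only -- adjust them for your own environment.

VOLATILITY_ORDER = [
    "registers and cache",
    "routing tables",
    "arp cache",
    "process table",
    "kernel statistics and modules",
    "main memory",
    "temporary file systems",
    "secondary memory",
    "router configuration",
    "network topology",
]

def collection_plan(sources):
    """Return the given evidence sources ordered most volatile first.

    Sources not in the table sort last, so they are still collected
    rather than silently dropped.
    """
    rank = {name: i for i, name in enumerate(VOLATILITY_ORDER)}
    return sorted(sources, key=lambda s: rank.get(s, len(VOLATILITY_ORDER)))

plan = collection_plan(["secondary memory", "process table", "main memory"])
```

A checklist ordered this way helps a team work through each machine methodically while still honouring the volatile-first rule.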
Collection and Archiving

Once you've developed a plan of attack and identified the evidence that needs to be collected, it's time to start capturing the data. Storage of that data is also important, as it can affect how the data is perceived.

Logs and Logging

You should be running some kind of system logging function. It is important to keep these logs secure and to back them up periodically. Since logs are usually automatically timestamped, a simple copy should suffice, although you should digitally sign and encrypt logs that are important, to protect them from contamination. Remember that if the logs are kept locally on the compromised machine, they are susceptible to alteration or deletion by an attacker. Having a remote syslog server and storing the logs in a 'sticky' directory can reduce this risk, although it is still possible for an attacker to add decoy or junk entries to the logs. Regular auditing and accounting of your system is useful not only for detecting intruders but also as a form of evidence. Messages and logs from programs such as Tripwire can be used to show what an attacker did. Of course, you need a clean snapshot for these to work, so there's no use trying it after the compromise.

Monitoring

Monitoring network traffic can be useful for many reasons – you can gather statistics, watch for irregular activity (and possibly stop an intrusion before it happens), and trace where an attacker enters and what they do. Monitoring logs as they are created may show important information that might subsequently be deleted by the attacker. This doesn't mean that reviewing the logs later is not worthwhile – it may be what is missing from the logs that is suspicious. Information gathered while monitoring network traffic can be compiled into statistics that define normal behaviour for your system. These statistics can be used as an early warning of an attacker's presence and actions. You can also monitor the actions of your users.
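The advice above about digitally signing important logs can be sketched with a keyed digest, so later tampering is detectable. This is a minimal illustration only – the hard-coded key is an assumption for the example; in practice the key would come from a proper key-management scheme, and you may also want to encrypt the logs themselves.

```python
# Sketch: make archived log files tamper-evident with an HMAC-SHA256
# digest recorded at collection time.  The hard-coded key below is
# purely illustrative -- use real key management in practice.

import hashlib
import hmac

SIGNING_KEY = b"replace-with-a-properly-managed-secret"  # illustrative

def log_digest(log_bytes):
    """Return a hex HMAC-SHA256 digest over the log contents."""
    return hmac.new(SIGNING_KEY, log_bytes, hashlib.sha256).hexdigest()

def verify_log(log_bytes, recorded_digest):
    """True if the log still matches the digest recorded at collection."""
    return hmac.compare_digest(log_digest(log_bytes), recorded_digest)

# Hypothetical log line for illustration:
original = b"Aug  2 10:15:01 host su: FAILED su for root by mallory\n"
digest = log_digest(original)
```

Storing the digest separately from the log (for example, with the evidence documentation) means a single altered byte in the archived copy will fail verification.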
Monitoring user activity can, once again, act as an early warning system – unusual activity (such as unsuccessful attempts to su to root) or the sudden appearance of unknown users warrants closer inspection. No matter what type of monitoring is done, you should be very careful – there are plenty of laws you could inadvertently break. In general you should limit your monitoring to traffic or user information and leave the content unmonitored unless the situation necessitates it. You should also display a disclaimer stating what monitoring is done when users log on. The content of this should be worked out in conjunction with your lawyer.

Methods of Collection

There are two basic forms of collection – 'freezing the scene' and 'honeypotting'. The two aren't mutually exclusive – you can collect frozen information after or during any honeypotting.

Freezing the scene involves taking a snapshot of the system in its compromised state. The necessary authorities should be notified (for instance, the police and your incident response and legal teams), but you shouldn't go out and tell the world just yet. You should then start to collect whatever data is important onto removable non-volatile media in a standard format, and make sure that the programs and utilities used to collect the data are also collected onto the same media as the data. All data collected should have a cryptographic message digest created, and those digests should be compared to the originals for verification.

Honeypotting is the process of creating a replica system and luring the attacker into it for further monitoring. A related method – sandboxing – involves limiting what the attacker can do while still on the compromised system, so they can be monitored without much further damage. The placement of misleading information and the attacker's response to it is a good method for determining the attacker's motives.
You must make sure that any data on the system that refers to the attacker's detection and actions is either removed or encrypted; otherwise the attacker can cover their tracks by destroying it. Honeypotting and sandboxing are extremely resource intensive, so they may be infeasible to perform. There are also some legal issues to consider, most importantly entrapment. As before – obtain legal advice.

Artefacts

Whenever a system is compromised, there is almost always something left behind by the attacker – be it code fragments, trojaned programs, running processes or sniffer log files. These are known as artefacts. They are one of the important things you should be collecting, but you must be careful. You should never attempt to analyse an artefact on the compromised system. Artefacts could do anything, and you want to make sure their effects are controlled.

Artefacts may be difficult to find. Trojaned programs may be identical in all obvious ways to the originals (file size, MAC times, etc.). Cryptographic checksums may be necessary to determine whether files have been modified, so you may need to know the original file's checksum. If you are performing regular File Integrity Assessments, this shouldn't be a problem. Analysis of artefacts can be useful in finding other systems the attacker (or their tools) has broken into.

Collection Steps

We now have enough information to build a step-by-step guide for the collection of the evidence. Once again, this is only a guide – you should customise it to your specific situation.

1. Find the Evidence
Determine where the evidence you are looking for is stored. Use a checklist – not only does it help you to collect the evidence, but it can be used to double-check that everything you are looking for is there.

2. Find the Relevant Data
Once you've found the evidence, you must identify what is relevant to the case. In general you should err on the side of over-collection, but you must remember that you have to work fast.

3.
Create an Order of Volatility
Now that you know exactly what to gather, work out the best order in which to gather it. Following the Order of Volatility for your system ensures that you minimise the loss of uncorrupted evidence.

4. Remove External Avenues of Change
It is essential that you avoid alterations to the original data. Preventing tampering with the evidence helps you to create as exact an image as possible, although you have to be careful: if you disconnect the system from the network, the attacker may have left a dead man switch. Weigh the risks and remove as many avenues of change as you reasonably can.

5. Collect the Evidence
You can now start to collect the evidence using the appropriate tools for the job. As you go, re-evaluate the evidence you've already collected. You may find that you missed something important; now is the time to make sure you get it.

6. Document Everything
Your collection procedures may be questioned later, so it is important that you document everything you do. Timestamps, digital signatures and signed statements are all important – don't leave anything out!

Controlling Contamination – The Chain of Custody

Once the data has been collected, it must be protected from contamination. Originals should never be used in forensic examination – verified duplicates should be used instead. This not only ensures that the original data remains clean, but also enables examiners to try more 'dangerous', potentially data-corrupting tests. Of course, any tests should be done on a clean, isolated host machine – you don't want to make the problem worse by letting the attacker's programs get access to a network.

A good way of ensuring data remains uncorrupted is to keep a Chain of Custody. This is a detailed list of what was done with the original copies once they were collected. Remember that this will be questioned later on, so document everything. Record who found the data, when and where it was transported (and how), who had access to it, and what they did with it.
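A Chain of Custody record like the one described can be kept as an append-only log in which each entry carries the hash of the previous entry, so insertions or reordering become detectable. This is only a sketch – the field names (who, action, location) are assumptions for illustration, and it complements rather than replaces the signed written record.

```python
# Sketch: a minimal append-only chain-of-custody record.  Each entry
# notes who handled the evidence, what they did, and when (in UTC), and
# carries the hash of the previous entry so out-of-order edits are
# detectable.  Field names here are illustrative assumptions.

import hashlib
import json
from datetime import datetime, timezone

def add_custody_entry(chain, who, action, location):
    """Append a hash-linked custody entry to the chain and return it."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    entry = {
        "who": who,
        "action": action,
        "location": location,
        "when_utc": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(entry)
    return entry

def chain_intact(chain):
    """True if every entry still points at the hash of its predecessor."""
    expected = "0" * 64
    for entry in chain:
        if entry["prev_hash"] != expected:
            return False
        expected = entry["entry_hash"]
    return True

# Hypothetical handlers and actions, for illustration:
custody = []
add_custody_entry(custody, "J. Smith", "imaged disk sda", "server room")
add_custody_entry(custody, "J. Smith", "moved image to safe", "evidence safe")
```

Because each entry embeds its predecessor's hash, removing or reordering entries breaks the chain check, which supports the requirement that the custody record itself withstand questioning.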
You may find that your documentation ends up greater in volume than the data you collected, but it is necessary to prove your case.

Analysis

Once the data has been successfully collected, it must be analysed to extract the evidence you wish to present and to rebuild what actually happened. As with the other procedures, make sure you fully document everything you do – your work will be questioned and you must be able to show that your results are consistently obtainable from the procedures you performed.

Time

To reconstruct the events that led to your system being compromised, you must be able to create a timeline. This can be particularly difficult when it comes to computers – clock drift, delayed reporting and differing time zones can create confusion in abundance. One thing to remember is to never change the clock on an affected system. Record any clock drift and the time zone in use, as you will need these later; changing the clock just adds an extra level of complexity that is best avoided. Log files usually use timestamps to indicate when an entry was added, and these must be synchronised to make sense. You should also use timestamps yourself – you are not just reconstructing events, you are contributing to the chain of events that must be accounted for as well. It is best to use the GMT (UTC) time zone when creating your timestamps – the incident may involve time zones other than your own, so using a common reference point will make things much easier.

Forensic Analysis of Back-Ups

When analysing backups, it is best to have a dedicated host for the job. This examination host should be secure, clean (a fresh, hardened install of the operating system is a good idea) and isolated from any network – you don't want it tampered with while you work, and you don't want to accidentally contaminate other systems. Once this host is available, you can commence analysis of the backups. Making mistakes at this point shouldn't be a problem – simply restore the backups again if required.
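The timeline advice above – record each machine's clock drift and time zone, and keep your own timestamps in GMT (UTC) – can be sketched as a small normalisation step. The offset and drift figures below are purely illustrative; record the real values for each affected system.

```python
# Sketch: normalise log timestamps from a drifting, local-time system
# clock to UTC for timeline reconstruction.  The zone offset and drift
# values are illustrative -- use the figures recorded for each machine.

from datetime import datetime, timedelta, timezone

def to_utc(local_ts, utc_offset_hours, drift_seconds):
    """Convert a naive local timestamp to an aware UTC timestamp.

    drift_seconds is how far the machine's clock runs *ahead* of true
    time, so it is subtracted before converting zones.
    """
    zone = timezone(timedelta(hours=utc_offset_hours))
    corrected = local_ts - timedelta(seconds=drift_seconds)
    return corrected.replace(tzinfo=zone).astimezone(timezone.utc)

# Example: a machine in UTC+10 whose clock runs 90 seconds fast.
event = datetime(2001, 8, 2, 10, 1, 30)
utc_event = to_utc(event, utc_offset_hours=10, drift_seconds=90)
```

Normalising every source to the same reference point before correlation avoids the confusion of mixing zones and drifting clocks in one timeline.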
Remember the mantra – document everything you do. Ensure that what you do is not only repeatable, but that you always get the same results.

Reconstructing the Attack

Now that you have collected the data, you can attempt to reconstruct the chain of events leading to and following the attacker's break-in. You must correlate all of the evidence gathered (which is why accurate timestamps are critical), so it is probably best to use graphical tools, diagrams and spreadsheets. Include all of the evidence you've found when reconstructing the attack – no matter how small it is. You may miss something if you leave a piece of evidence out.

As you can see, collecting electronic evidence is no trivial matter. There are many complexities to consider, and you must always be able to justify your actions. It is far from impossible, though – the right tools and knowledge of how everything works are all you need to gather the evidence required.



Forming an Incident Response Team

[Historical article: first published on January 1st, 1995]

Forming an Incident Response Team (IRT) in the 1990s can be a daunting task. Many people forming an IRT have no prior experience of doing so. This paper examines the role an IRT may play in the community, and the issues that should be addressed both during formation and after commencement of operations. It may also benefit existing IRTs by raising awareness of issues not previously addressed.

1. Introduction

On 8 March 1993, the Security Emergency Response Team (SERT) commenced incident response operations in Australia. Prior to this, discussions had been held with other Incident Response Teams (IRTs) about the establishment of the team and what would be required. A significant amount of work was performed just before, and immediately after, commencement to establish operations and tools. Further communication with other IRTs assisted SERT to establish policies and helped SERT to grow in its own constituency and in the computer security community at large. Since that time, SERT has undergone many changes, and these transitions could not have been effected as smoothly as they were without the work that had been achieved earlier.

This paper looks at what it takes to form an IRT. It examines the issues that need to be addressed and resolved prior to, and after, forming an IRT. It looks at the constituency, policies, relationships, information, equipment, tools, and interaction with the wider community. Much of the information in this paper is not new. It has been steadily collected from a number of sources over time, and various amounts of it have been applied by SERT with varying success. The overwhelming message throughout this paper is: "You are not alone!"

2. How did SERT start?

Dateline 1992: the Australian Academic and Research Network (AARNet) has been running for two years.
It connects all the academic and research institutions in Australia, which is now the third largest country on the Internet in terms of connected hosts. During this year, many institutions started experiencing a dramatic increase in the number of computer security intrusions, particularly network-based attacks. This was a new problem for Australia to face. In the past, most attacks had originated locally and were dealt with under local institution statutes. At first the attacks had nuisance value, but they soon started to reach plague proportions. Australia was being used as a launch pad for attacks on overseas sites. One particular group of individuals concentrated on the South-East Queensland corner, and used three Universities in particular. From there, they launched attacks on overseas institutions, which ultimately threatened a large amount of research funding coming into the country. Cooperation between these three Universities was always extremely good, and only a coordinated response to this problem resulted in the apprehension of the intruders.

It was during these attacks that it was decided that Australia was large enough that it must fend for itself in the international arena: an Incident Response Team was required. Much talk was generated on the topic, but no progress was made. The three Brisbane-based Universities, Queensland University of Technology, Griffith University, and The University of Queensland, combined their efforts and applied to the Federal government for funds to establish a response team. Late in 1992, this application was rejected by the government. The Universities then made a crucial decision: the IRT was essential, so they decided to start it anyway and fund it themselves. During February, a number of staff members worked hard to bring a team to operational readiness as quickly as possible. This included developing crude tools for incident tracking and establishing secure premises to operate from.
On 8 March 1993, the SERT team was announced to its constituency. It was also during this time that SERT communicated heavily with the CERT Coordination Centre in Pittsburgh. SERT outlined its intention to commence operations, and received an enormous amount of assistance from CERT. Much electronic mail was exchanged, culminating in a conference telephone call between the two teams. During this call, the two teams exchanged ideas on the issues that SERT would need to address to become operational. Subsequent communications between CERT and SERT clarified and defined how the two teams would interoperate. Many issues relating to the international nature of the interaction required resolution, even little things such as date format: is 1/8/94 the 1st of August or the 8th of January? It was during this time that CERT was also forming a new relationship with the DFNCERT team in Germany.

Subsequent incidents highlighted shortcomings in the operation, which were addressed and rectified as time went on. Many of SERT's problems stemmed from the way it was formed: it had no authority to act, it just existed. Convincing the community that the SERT team was essential was a hard and long task. This was achieved through steady and constant communication, dedication to assisting sites with security problems, and as much public exposure as could be achieved without burning out staff or destroying the travel budget. Acceptance was finally realised when most of the Computer Centre Directors in the Australian Universities contributed funds towards SERT's operation.

Since that time, SERT has been transformed into AUSCERT, with a formal contract signed between AUSCERT and AARNet. AUSCERT now acts with the authority of AARNet, and is seeking to extend its constituency to the whole of Australia and beyond. This operation commenced on 1 April 1994.

The rest of this paper looks at the decisions that were made, or that it was advised should be made.
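The date-format ambiguity mentioned above (is 1/8/94 the 1st of August or the 8th of January?) is exactly the kind of detail international teams must agree on. One common resolution, shown here purely as an illustration rather than anything the teams are recorded as adopting, is the unambiguous year-month-day ordering later standardised as ISO 8601:

```python
from datetime import date

# "1/8/94" is ambiguous: August 1st in Australia, January 8th in the US.
d = date(1994, 8, 1)

# Year-month-day ordering is unambiguous across national conventions.
print(d.isoformat())            # 1994-08-01

# Spelling out the month name also avoids the ambiguity.
print(d.strftime("%d %B %Y"))
```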
At times, comparisons are drawn between SERT and CERT to highlight some fundamental differences in the two operations. This comparison highlights advantages and disadvantages of the two types of teams.

3. Pre-establishment Tasks

Having decided (or been directed) to form an Incident Response Team, there are a number of tasks that can be completed before commencing operation. The ultimate success of the ongoing team may be the direct result of how well some of these tasks have been completed. This list of tasks is not exhaustive, and cannot cater for the myriad of local issues. These will need to be addressed by each individual team.

3.1. Reason for existence

Why should there be an Incident Response Team? This question, although obvious, is crucial. There may be many answers, all of which are equally valid. Ultimately, it is the answer to this question that will earn the respect and cooperation of the constituency. Possible answers include:

- a local team that understands local issues;
- a team that operates in the same time zone as the constituency;
- separating security services from the network providers;
- increasing the security of the constituents' computer systems;
- educating system administrators in their roles;
- coordinating incident response at a central point;
- scoping the size of the security problem;
- determining trends in attacks.

The lack of a clear reason for the existence of the IRT will ultimately result in a lack of support, both financial and administrative, which will lead to the demise of the team. If the constituency does not want the team, then its effectiveness will be minimised. This may lead to funding cuts, and eventual closure.

In Australia, it was determined that at the time it was the third largest nation in the world in terms of the number of registered Internet hosts. That fact, coupled with the comparatively low population, makes Australia one of the highest per-capita users of the Internet.
It was determined that Australia should take responsibility for its own security problems rather than relying on the limited resources of the United States. In addition, the timezone difference made cooperation with the United States difficult; a local team that understood local issues was required. This was an important part of the justification for SERT.

3.2. Goals

Forming an Incident Response Team without a goal is like implementing computer security measures without a policy. If what needs to be achieved is unclear, then any efforts by the IRT will always be performed on an ad-hoc basis, without a clear picture in mind. This may cause precious team resources to be fruitlessly expended on ventures that yield limited results. One thing that is consistent across all Incident Response Teams is that they do not have sufficient resources to do their job to the standard they would like. Their working day is a continual compromise of priorities. The lack of clearly defined goals makes priority decisions arbitrary at best, opening the possibility for errors that result in mistrust from the community.

Deciding goals generally follows immediately from answering the question about the reason for the IRT's existence. Once the goals are defined, they should be communicated to the community being served. Many misunderstandings between an IRT and its community have occurred because members of that community misunderstood the role and goals of the IRT. Clear and well-defined communication of the goals of the IRT is essential if the community is to work with the IRT, not against it. The expression of the goals may be made in the form of a mission statement to the constituency. The day-to-day operation of the IRT is then measured against the question: "Does this situation and action fit within the mission statement of the team?"
The success of the team's operations may be gauged by empirically measuring how well these goals are being met. Some examples of goals may include:

- raising the floor of Internet security;
- assisting sites in proactive security ventures;
- increasing the awareness of security incidents;
- determining the scope of the security problem;
- assisting the community in applying the best security practices available.

3.3. Constituency

When a team is formed, it must have a clearly defined scope of operations. The people it serves must know that they have an Incident Response Team, and the team must know who is, and who is not, in the constituency. The scope of the constituency is usually defined by the community that is funding the IRT (either directly or indirectly). This may be based on the network provider, geographical considerations, or organisational considerations. When decisions on the boundaries of the constituency are made, they should be communicated not only to those members that form the constituency, but also to any members that do not fall within the boundaries. This might be done through other IRTs. Other IRTs also need to know where the boundaries of the constituency are defined so that they can direct appropriate queries to the correct team.

At times, it is possible that a site that is not within the defined constituency will request assistance. If that site falls under the defined constituency of another IRT, it is in the best interests of the IRTs and the site in question to have them contact their local IRT for assistance. If the site does not wish to do this, then it is polite to request permission to advise the local IRT that the incident will be dealt with internally, at the request of the site. If permission is not given, then assistance should still be given to the site, with an attempt to resolve the issue of constituency as soon as possible. It is the experience of SERT that it almost never gets to this stage.

3.3.1. Defining the Constituency

Defining a constituency is not as trivial a task as it first seems. Constituencies may be defined by a number of constraints:

- geographical boundaries;
- network provider;
- organisational dependencies.

Existing IRTs are defined by a selection of all the above. In some cases, a site may be contained within the constituency of two or more IRTs. In many cases, there are sites that do not have an IRT. By default, the CERT Coordination Centre will always provide assistance to those sites, on an incident-priority basis. Some IRTs are defined by their network providers, which may or may not cover the entire country. If a site is in one country, but connected by the network provider of another country (which may or may not have an IRT), there is potential for much confusion. The United States has a large number of IRTs covering a range of constituencies, with each team established to meet the specific needs of its constituency.

3.3.2. Advertising

Having established the boundaries of the constituency, it is essential to advertise the existence of the newly formed IRT. This can be done in many ways:

- mailouts;
- electronic mail to network and site contacts;
- Usenet news;
- conferences;
- press releases.

It is important that the constituency learns about the existence of the IRT, and then establishes communication with that team to learn about its goals, mission, and policies. The mechanisms above are useful for advising the wider community of the existence of the team. A mechanism for communicating the goals, mission, and policies could be a "registration" procedure. By asking each site to register a 24-hour contact point with the IRT so that the site can be contacted after hours, a database of constituent sites can be established. At the same time, communication lines are opened with that site to provide information on the goals and policies of the IRT.
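The registration procedure just described amounts to maintaining a verified database of site contacts. A minimal sketch (the field names and the verification flag are assumptions about what such a register might hold, not a description of any team's actual system):

```python
# Hypothetical constituent-site register; fields are illustrative assumptions.
registry = {}

def register_site(site, contact, phone_24h):
    registry[site] = {
        "contact": contact,
        "phone_24h": phone_24h,   # the after-hours contact point
        "verified": False,        # independently verify before trusting (see 3.3.3)
    }

def lookup(site):
    # None means the site is outside the constituency, so the query
    # should be redirected to the appropriate IRT.
    return registry.get(site)

register_site("uni-x.edu.au", "J. Citizen", "+61 7 0000 0000")
```

The explicit "verified" flag reflects the point made below: solicited contact information should not be trusted until it has been independently confirmed.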
If possible, establish a mechanism for rapidly contacting all members within the constituency (such as an electronic mailing list). In Australia, this initially met with limited success. After a few incidents that resulted in some sites being uncontactable, the number of registered sites has risen steadily. This is a fundamental difference between the SERT and CERT operations. CERT, by the nature of its constituency, can never establish a one-to-one relationship with all of its constituents; there are too many and they are too diverse. SERT has a well-defined constituent list, and has worked to establish the ability to rapidly contact any constituent site on a 24-hour basis.

3.3.3. Identifying Trusted Contacts

If the IRT is to communicate security information with a site, then it needs to know whom that information is going to. If the constituency is relatively small and well defined, it is possible to establish a database of "registered site security contacts" in advance, rather than establishing a security contact for each incident as it occurs. This register should be independently verified. An obvious first approach is to solicit this contact information by asking each site to nominate its contact, which can easily be done using the electronic mailing lists that already exist for the operation of the network. Any contact information received should be independently verified for correctness. This method, however, registers "contacts" for a site, not the "nominated security contact". These contacts may not be the appointed security personnel of the institution. The appointed security personnel may not be technically minded, but they may have the authority to make decisions and contact the correct staff during an emergency. It is the responsibility of each constituent site to nominate the most appropriate site security contact.
Therefore, if the IRT is intending to form a register of trusted security contacts, it is strongly recommended that these contacts be determined by approaching the Chief Executive of the organisation, and asking that person to indicate who their appointed security contact is. Another concern for some countries might be that the collection and storage of this information may contravene local laws (such as Data Protection and Privacy). This must be addressed on a case-by-case basis.

3.3.4. Information Releases

Incident investigation may require that certain items of information, such as machine names and contacts, be released to other parties. Rather than seeking permission to release this information on a case-by-case basis, it may be easier to seek permission prior to any incident. Many sites do not mind their site name, contact information, or affected machine names being communicated to any necessary parties to assist in the resolution of incidents. Seeking this permission in advance may reduce the time taken to resolve an incident, especially an international one where timezones become an issue and any delay may be crucial.

3.3.5. Trusted Communications Paths

Once the community is identified, the IRT needs to be able to communicate with that community in a secure way. Many people think this simply means that it should be impossible for an intruder to read electronic mail sent between the IRT and its sites. That is only one aspect of a complex topic. Electronic mail is by far the easiest form of communication for an IRT to deal with: automated tools can be used to process the information, reducing the load on IRT staff. However, if a site has been compromised, then it may not be possible for it to send electronic mail (for example, if it has disconnected from the network). Other forms of communication will be required (such as phone, pager, and fax).
Working in the international community and with other IRTs sometimes requires the exchange of sensitive data. "Sensitive" may merely mean a copy of a draft advisory that is still being verified for correctness; early release of such information may result in further damage to the community. Data encryption is one method of exchanging sensitive data securely. It relies on the end points of the communication being secure: if an end point is not secure, then the data should not be stored there in plain text, and the encryption keys should be kept offline. The use of data encryption should be determined by the classification of the data.

The release of any public information from the IRT should be done in such a way that if false information is released by a third party pretending to be the IRT, the fraudulent message will be detected. This may involve the use of digital signatures, certificates, or encryption.

The final topic in this area is the ability to access the secured systems within the IRT from outside the normal base of operations. This may occur, for example, if staff are travelling or are operating after hours. These communication channels should also be secured against network sniffing.

3.4. Scope of Operation

What types of incidents will be handled by the team? What types of incidents will the team not handle? These questions must be answered, and the answers communicated to the community. For instance, the types of incidents that may or may not be handled could include:

- intrusions;
- software vulnerabilities;
- requests for security information;
- requests to speak at conferences;
- requests to perform on-site training;
- requests to perform on-site security audits;
- requests to investigate suspected staff;
- viruses;
- international incidents;
- illegal activities such as software piracy;
- requests to undertake keystroke monitoring.

In addition, a decision must be made on what level of assistance will be provided.
Will the team merely forward notification of security incidents to the affected sites, or will it work closely with a site to determine the extent of an intrusion and help it better secure its systems?

3.5. Identify Savings to the Community

Part of the justification for forming an Incident Response Team is to identify the savings to the community. This is typical of any risk analysis situation, where the costs of reducing the risk should not exceed the costs of the potential loss. Possible savings could include:

- real money costs in staff time handling incidents;
- costs of staff gathering and verifying security information;
- lost opportunity costs;
- loss of reputation (or gaining a reputation!);
- threats to "sensitive" data.

3.6. Scope of Expertise

Small teams in particular cannot have the complete set of skills required by today's complex and diverse array of computer hardware and software. There is no shame in admitting to the constituency that the team does not possess the necessary skills to tackle a certain problem. If the team finds itself in this situation, it can cultivate contacts within the community who do possess the required skills. Develop a level of trust with these contacts over time and call on them when the team's skills are inadequate. Be careful of always using the same people: they may become less willing to help over time (due to other work commitments), and you risk the wrath of their management. In general, people are willing to assist in true emergency situations, but are more reluctant to devote time to mundane situations, or to bolster the ranks of an inadequately staffed IRT for free.

3.7. Staff Size and Makeup

About the only attributes common to existing Incident Response Teams are that they are under-funded, under-staffed, and overworked.
Determining the appropriate number of staff to employ is a fine balance between the expected (and probably as yet unknown) workload and the budget constraints. It is SERT's experience that one full-time technical person can comfortably handle one new incident per day, with 20 incidents still open and being investigated. Anything over this rate leaves no room for any involvement other than incident response, which may have many negative consequences. Besides the technical team, there must be management, administrative, and clerical support. These services may be contracted from other organisations, or people may be employed to fulfil these roles.

The biggest issue facing staffing levels is staff burnout. If staff are continually placed under stress by being on 24-hour callout and working long hours on complex incidents, their mental and physical health may begin to suffer. To operate on a 24-hour callout basis, a minimum of three full-time staff is strongly recommended. Staff should be rotated through the high-stress positions, and when they are rostered off, they should be given the opportunity to pursue less stressful activities such as tool and course development. However, staff should always be available to assist when the emergency load becomes excessive.

The incident rate is not constant. There will be quiet times, and there will be busy times. The success of an IRT is usually measured by how it performs during the busy times, as this is when most members of the constituency are exposed to the IRT. There must be sufficient capacity in the team to deal effectively with large and complex incidents; failure here will result in dissatisfaction from the constituency. There are other duties for team members to perform when the incident load is light: seminar preparation, tool development, policy writing, and, most importantly, looking to the team's own security (which is often forgotten).
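SERT's rule of thumb above (one technical person per new incident per day, with about 20 open incidents per person, and at least three full-time staff for 24-hour callout) can be turned into a rough capacity estimate. A sketch, treating those figures as planning inputs only:

```python
import math

def minimum_staff(new_per_day, open_incidents,
                  new_per_person=1.0, open_per_person=20.0, callout_floor=3):
    """Rough staffing estimate from the rules of thumb in the text."""
    by_new = new_per_day / new_per_person       # staff needed for new incidents
    by_open = open_incidents / open_per_person  # staff needed for the backlog
    # Whichever constraint binds, never drop below the 24-hour callout floor.
    return max(math.ceil(max(by_new, by_open)), callout_floor)

print(minimum_staff(1, 20))    # a one-person load still needs 3 for callout
print(minimum_staff(5, 120))   # a burst: the open-incident backlog dominates
```

The callout floor matters more than the averages: as the next paragraph notes, incidents arrive in bursts, so the steady-state figures understate peak demand.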
It is an unfortunate fact of life that incidents do not occur at a steady rate. What may initially be a quiet moment in the office can be shattered by a single electronic mail message. Incidents can, and do, occur in bursts. This is particularly true immediately after information on how to exploit a vulnerability is made public: the posting of an exploitation script is usually a recipe for long hours within Incident Response Teams. Possible solutions may involve the ability to recall staff at a moment's notice to assist with the higher-than-normal incident rate, though this has negative implications for staff burnout. Another solution is to have trusted staff from other institutions on standby who could lend technical assistance in times of emergency.

Not all incidents are created equal! This paper discusses incident load in terms of numbers of incidents. One incident may involve a single system and be dealt with in five minutes, whilst another may involve a large number of systems over many sites and continents, requiring an enormous amount of coordination and analysis. Long-running incidents are partially covered in the "open and investigating" incident category detailed above, but this does not take into account the amount of effort required to resolve an incident, or its severity and priority.

3.8. Identify Technology Dealt With

Given that it is not possible for an IRT to have all the necessary experience to deal with every platform and system, a decision should be taken as to what technology will be dealt with, and what incidents may need to be referred to other groups or other IRTs for assistance. The choices could include:

- hardware platforms;
- operating systems and revisions;
- vendor packages;
- third-party packages;
- public domain packages;
- viruses;
- worms;
- Trojan horses.

This information should be communicated to the other IRTs.

3.9. Identify Depth of Analysis

When investigating incidents and vulnerabilities, the depth of analysis may vary, depending on the size, experience, and spare capacity of the IRT. In general, the more time spent on analysis, the faster the problem will be resolved. However, some problems take an enormous amount of time to resolve, and may be beyond the experience of the team. A decision should be made as to what level of analysis will be applied to vulnerabilities and incidents. Some IRTs merely act as a clearing house for security information, providing no assistance to the affected site in becoming more secure. Others will examine a vulnerability in depth, and determine not only a workaround and fix, but also an explanation of why the vulnerability occurred, and examine other packages for similar problems. Most teams do not have this level of resource available. Possible actions of the IRT when examining incidents or vulnerabilities may include:

- passing information on, but providing no further assistance;
- assisting sites to resolve the problem;
- assisting sites by examining their security and providing suggestions;
- examining source code to find the vulnerability;
- providing workarounds and example fixes to vulnerabilities;
- assisting vendors in patching vulnerabilities and testing solutions;
- detailed examination of vulnerabilities to determine why they occurred;
- examining other packages for similar vulnerabilities.

3.10. Budget

When submitting a budget for funding, the budget should contain a significant component for staff travel. This travel is used to attend conferences and workshops, meetings with constituent members, meetings with other IRTs, and meetings with the funding providers. Once the IRT starts up, it will be called upon to present papers at a variety of conferences and workshops, and this requires a large amount of travel.

3.11. Authority and Reporting

Each IRT has a management structure controlling its activities and monitoring its progress.
This management requires regular reports, and may also exercise some level of authority over the IRT (such as demanding information like affected site names or vulnerability details). In addition, the constituency often misconstrues the IRT as having some form of "authority" over them, able to direct sites to "get their act together!". In general, this is not true. The IRT usually acts as an advisory service rather than an enforcement agency. Sites are more willing to report failures of security to someone who is in a position to help than to someone who is in a position to discipline. The authority over the constituent members needs to be clearly defined, and communicated regularly to the constituency. Mistrust in the IRT will prevent security incidents from being reported, resulting in incomplete information and an inability to assist sites with security. If the IRT has no authority over the constituency, then the constituency should be left in no doubt about this. The less authority the IRT has over the constituency, the more likely the constituency is to report security incidents and seek assistance.

In addition, any authority that may be exercised by the management over the IRT should be clearly communicated to the constituency. If the management may request access to any information, then the constituency should be aware of this, and accept it. Any reports generated for management should contain only the minimum of detail required for management to perform its duties. This level of reporting should also be communicated to the constituency. In general, the constituency is provided a summary of this reported information as a form of "statistics", or a report on the progress of security within the constituency.

3.12. Policies

There is no point in advising the constituency that they need security policies if the IRT does not have any itself.
It is important to establish policies early, so that all staff members take appropriate action in the majority of situations encountered. Policies should begin with a policy "framework" that shows how the various policies relate to one another. Policy statements contain directives of a general nature that may be implemented using the most appropriate techniques available. For example, the statement "Data will be transmitted using DES in ECB mode." is not a good policy statement, as technology may change. A better wording is: "Data will be transmitted encrypted using the best available technology at the time that ensures message content confidentiality."

Many of the policies of an IRT will need to be communicated to the constituency so that they understand the role, goals, and intentions of the IRT. This helps to build trust in the IRT, as the constituency fully understands what will happen with any information sent to the IRT, and what assistance they can expect from it. Some policies may not be considered public knowledge. In particular, policies relating to the internal workings of the IRT are probably best kept internal, as the constituents do not need this information. Determine which policies are public knowledge, and communicate those to the constituency and to any other persons requesting them.

One of the major policies to develop is how to handle the release of information to the various parts of the community. These policies will need to deal with what information is public, and who is authorised to communicate that information. Example situations include:

Press: The press has a job to do in getting the latest story that makes headlines and sells papers. As such, it is the experience of some people that they are not always accurately or completely reported, with some words being taken out of context.
It is the policy of some IRTs that operational staff will not communicate with the press, but will pass them to a nominated "press officer" who is briefed only with information cleared for public release.

Incoming Calls: When a call is received by the IRT, the way it is handled may depend on the type of request. Determine which information is public, and release only that to unsolicited callers. For example, a caller may indicate they are from site X, and ask for an update on the status of incident Y. This caller may be the intruder attempting to determine what is known about their activities. If in doubt, call the person back using the contact information that has been registered for that site. If the caller is seeking public information, then there is no problem in simply releasing it.

Sites: When communicating with sites, it is important to decide what they should be told in relation to their incident. For example, if other sites have already reported compromises as a result of a vulnerability, should this information be released to the caller? Should the current state of knowledge on the vulnerability be released?

Law Enforcement: In some countries, it is a legal requirement to advise law enforcement agencies of any knowledge of illegal activities. This must be resolved prior to commencing IRT operations. The detail of information passed to law enforcement should be determined and communicated back to the constituency.

Other IRTs: Resolving incidents will most likely involve other IRTs, especially ones located in other countries. The level of information communicated to other IRTs should be determined and the constituency advised, either initially or on a case-by-case basis. In general, it is almost impossible to resolve an incident without revealing the names of the source and target machines involved.

In general, it is important to identify what the IRT will do in terms of its operation.
It is just as important to determine what the IRT won't do. For example, the IRT won't:

- investigate individuals;
- communicate vulnerability information without a fix;
- release site names and contacts without permission;
- advise law enforcement without permission;
- fix a constituent's security problems for them (though it will offer advice).

3.13. Enforcement

Once policies are determined and enacted, there must be a mechanism to verify that they are being adhered to. Failure to adhere to stated policies can lead to a breach of trust in the IRT, ultimately resulting in termination of its services. It is vitally important that all staff members understand the policies and undertake to adhere to them. Policies should not be overbearing. They should be implementable, acceptable, and testable. If the staff do not accept the policies, the policies will ultimately be forgotten. Some metric of compliance may need to be developed, to ensure that any steady relaxation of adherence to policies does not go unnoticed.

3.14. Incident Response

Prior to commencement of operations, the IRT needs to decide how it will deal with incidents as they are reported. In many cases, the IRT will initially lack the experience to know how best to deal with incidents. This experience will come with time. In the meantime, communication with other IRTs about how to handle "fictitious" situations may provide some guidelines on where to start. Make up an incident, have someone communicate it to the IRT, and determine internally how it should be handled. Role playing and scenario analysis will assist the team in making rational decisions under pressure.

It is at this time that contact should be commenced with other IRTs. Trust will take some time to build with these teams, so it is important to be patient. Communicate the team's policies to the other IRTs, and let them respond with their own experiences.
Requesting information about current incidents and vulnerabilities will almost certainly be met with stony silence.

There are a number of other useful groups, beyond IRTs, that can be contacted at this time. These groups may be doing research and development into computer security tools and products, or may be experts in areas where the newly formed IRT has no experience. Security research groups will be able to educate the IRT members on the latest advances in computer security. Contacts with local vendors should also be established so that rapid comment on vulnerabilities can be obtained. One area in which some teams decide not to develop expertise is combating computer viruses. There are many vendors of anti-virus software, and a number of groups doing virus research. Contacts with these groups should be made, to allow for expert opinion when dealing with virus incidents.

Contacts with law enforcement should be established, as many computer security incidents involve a breach of local laws. Whilst it may not be the role of the IRT to investigate criminal activity, it may be required to liaise with law enforcement officers to provide expert assistance. Policies should be developed between these two groups as to how they will operate with each other.

3.15. Legal Issues

Local laws and conventions may affect how the IRT operates. These legal issues will require resolution prior to commencement of operations. In general, different countries will have different laws governing the various aspects outlined below. It is impossible to give a general guideline, and local legal counsel should be sought by the IRT. If an IRT gives advice on security issues and the site is subsequently further compromised, there may be a liability issue. In general, this is not the case, provided the IRT gives the best advice possible based upon the knowledge available at the time. The IRT must undertake to obtain the most up-to-date advice possible at all times.
Staff should be trained in security issues, and that training regularly updated from time to time. This issue may be reflected as a “duty of care” to the constituency. Many countries now have enacted “freedom of information” (FOI) legislation that allows individuals to request access to varying amounts of data, particularly personal data, and have that data corrected if it is in error. If the laws allow individuals to request access to any data, then sensitive vulnerability data may be at risk. The law may require the appointment of an FOI officer. Any information that is stored within the IRT should remain private, unless permission is granted by the constituent site to release it. There may be certain types of information that must be kept confidential according to certain laws. As well, the storing of information that identifies individuals may contravene local laws on the use of computer databases to store personal information. If an IRT is to become involved in investigating computer security incidents, it may require monitoring network communications to determine the actions of intruders. In many countries, monitoring keystrokes may constitute a breach of privacy. For many companies, any data stored or transmitted internally is deemed to belong to the company for its official use, and therefore, is not private data. Any company data may be viewed by designated company officials, under policy guidelines. Many intruders make use of the telephone system and modems for their initial connection into the computer networks. In many (most?) countries, monitoring a telephone line is illegal, and capturing the calling telephone number may also be a breach of privacy. In the cases where the telephone line is used, it is often illegal to tap the telephone line, but not illegal to monitor the connection once the data is within the organisation’s boundaries on their networking equipment. 4. 
Equipment Prior to commencing operations, the Incident Response team will require a number of items of equipment. The choice of equipment will vary, depending on the chosen constituency, the scope of analysis work, the types of incidents being investigated, the size of the team, the physical and geographical location, and approximately two thousand other related issues. 4.1. Phones The IRT will require telephone access for contacting constituent sites, other IRTs, vendors, management, and other external contacts. For convenience, this phone access must be able to perform a number of basic functions. These might include: call pick from any other extension, while still maintaining the security that external personnel cannot pick up the calls; a central phone point that acts as the main contact point for the team. This point should be able to be answered by any other team member at their desk; the ability to switch calls to another party to answer calls when the team is unavailable (perhaps after hours, or during a team meeting); access to long distance and international direct dialling. The majority of the team’s work will be communicating with people who are based some distance away; compatibility with existing infrastructure equipment. The telecommunications equipment will require maintenance by other parties. The IRT may need to be mindful that the phone lines may not possess the desired security. Whilst there are a number of analogue speech scramblers on the market, many of these of not all that secure. The security of the telephone will vary from country to country, according to local laws, equipment, and telecommunication authorities. 4.2. Answering Services (24 hour contact) An unfortunate part of the IRT’s work is that the Internet is a 24-hour operation that spans the globe. To this end, the team must be able to be contacted on a 24 hour basis by constituents and other national and international IRTs. 
This may be done in several ways: “registering” an after hours contact with any person that needs to contact the team on a 24 hour basis. This is usually a team member’s private phone number. This has obvious implications for privacy, and is not very satisfactory as the only other point of contact is when that team member is at home; the use of pagers. This may have negative aspects as an intruder may launch a “denial of service” attack by continually paging the team after hours. There may be a number of techniques to combat this threat, many of which can be implemented by the local PTT. Many alphanumeric pagers have a number of ways of being accessed, including a data dialup service. This opens the way for electronic mail to pager access. This can then be access controlled based upon the address of the sender. call forwarding of the central number to an answering service. This service could ask a few basic questions, and then issue pager or telephone calls to the necessary team members. This option has the highest security if a form of dial-back can be established. An intruder could make a nuisance of themselves by calling the answering service, and supplying random numbers for the call back. This is especially antisocial if it is done out of hours, with calls directed to innocent bystanders. 4.3. Fax Some constituents in certain situations may not wish to send details to the IRT through electronic mail if there is a concern that the network or other central system that controls the mail has been compromised. The facsimile machine is another possibility for data transfer in this situation. The fax machine should be physically secure, and the security of the fax transmission will be as good as that for a normal phone conversation. This adds an extra burden on the IRT as the fax must be associated with a particular incident when tracking that incident. Some suggestions on mechanisms to do this are: retype the fax into the incident tracking database. 
This has implications of typing errors; use a fax modem and software, and store incoming faxes in electronic form (for example, bit mapped Postscript); maintain a paper file of each incident. This will soon mount up to be unmanageable. There is no one correct method. The desired method used to associate incident information that is not received in electronic format will vary, depending on the structure if the incident database, the type of information received, and the mechanism used to send that information. 4.4. Systems and Networks One of the roles of an IRT may be to analyse incidents to determine trends and intelligence of future attacks. To do this, some form of incident analysis and database tools must be used. Since most of the information supplied to the IRT is already in machine readable form, a computer system is the obvious choice of tool. The team must be able to be reached via the Internet so that information can be sent to it, and other forms of information (such as Advisories) can be sent back to the constituency. 4.4.1. IP Address Range Careful planning prior to the commencement of the team will save an amount of restructuring in the future. Since the IRT must be connected to the Internet, it must use a range of IP addresses. These may be “borrowed” from the organisation that provides the team’s infrastructure (for example, being assigned a subnet for use). A better recommendation is to apply to the Network Information Centre for a separate IP address range. This has no immediate benefits, but will have significant benefits should the team be required to relocate its base of operations to some other administrative or geographical location. 4.4.2. Domain Name The team will be required to register a domain name with the Network Information Centre and the network providers. It is important to place the team under the correct higher level domain from the outset. 
Both the CERT Coordination Centre and SERT originally started under one domain, and have subsequently moved to a more appropriate domain. This has implications of having to maintain backward compatibility with old names for many years. Originally, the SERT team was placed under the .edu.au domain (sert.edu.au). This was mainly due to the way that this team was formed. It was quickly pointed out that SERT’s constituency covered more than educational institutions. A number of research, government, and commercial organisations were contained within the constituency definition. Ultimately, this caused confusion and mistrust (some constituents thought that SERT would only operate for educational institutions). The migration to AUSCERT has allowed the new team to move under its correct parent domain as auscert.org.au. Since AUSCERT is a non-profit organisation without direct association with any particular form of organisation, and since it may be contracted by more than one network provider, the logical conclusion for AUSCERT was that it was an “organisation”. The CERT Coordination Centre is also now addressed as cert.org. Careful choice of a domain in the initial stages will remove the drama of changing names at a future point in time, requiring backwards compatibility. 4.4.3. Subnetting It is a good idea to be allocated a complete subnet from a larger network address space or a complete network address space, rather than be allocated a range of addresses within another organisation’s network. This allows the possibility of subnetting the address space further to form a number of different networks. The separate networks can then be protected using different security policies. Example subnetworks may include: public: this network contains public access machines such as anonymous ftp, gopher, and world wide web servers. Information stored on this subnet is deemed to be public release; test: it may be desirable to have a testing subnet. 
This network may or may not be secured, and any testing on this network will minimise the impact on production machines. The nature of testing vulnerabilities often leaves a machine open to attack. It would be desirable to make this network secure from outside connections (although, other IRTs may require access when cooperating on a vulnerability analysis). Should the test machines be compromised, they should not have access to the secure network, and they should not contain any sensitive information; secure: the IRT will require a network that is secure against intrusion. This network will hold sensitive information such as ongoing investigations, site contacts, site names, and vulnerability information; highly secure: the highly secure network may be used to store the most sensitive of information. It should not allow any connections into it, but may allow connections out of it. These outgoing connections must be carefully audited to prevent the accidental “down-classification” of data, by moving it to another network. There may be other requirements for separate networks. Splitting the network into four subnets should provide reasonable flexibility for future plans. For example, a complete Class C address space may be split into four separate subnets, allowing 62 hosts on each (not including the network and broadcast addresses). 4.4.4. Test Equipment If the team is to be involved in vulnerability analysis (proactive operations), then a range of test equipment will be required. This test equipment should be chosen to best serve the needs of the constituency. There will generally be insufficient funds to get one of every platform running all software. It is under these circumstances that other IRTs will be able to contribute test platforms and expertise. The test equipment should not contain any sensitive data, and should not be required in the day to day operations of the team. 
It is possible that testing security vulnerabilities will reduce the security of this system, or even cause it to fail. 4.4.5. Routers/Firewalls Once the availability of a security team is announced, it is likely to become a target for all sorts of reasons. As with plumbers who always have leaky taps at home, carpenters whose kitchens require repair, builders whose doors need adjusting, it is possible that in the rush to assist other sites, the IRT fails to attend to its own security. Nothing makes an intruder look better than to break into a computer run by an Incident Response Team. Nothing destroys the constituency’s trust faster than if the IRT’s machines are compromised. Sensible security starts at home. There must be dedicated hardware and software designed to increase the security of the internal systems. The router and its filters must be under the administrative control of the IRT (or appointed staff), and should be reviewed regularly for effectiveness. Solutions may involve managed bridges, routers, or software firewalls. The decision is based on expertise to establish an effective filtering mechanism, and budget constraints. 4.4.6. Non-replayable Authentication Incident Response Team staff will be required to operate from outside the secure environment from time to time. This may be as a result of visiting another site to assist them, attending a conference or workshop, or operating after hours. If access to the secured network is to be granted to team members, then they must be made aware of the possibility of trojan horses and network sniffers operating in the network. Some form of non-replayable authentication sequence is required. This may take the form of one-time password generators, software systems such as S/Key, or some other locally developed mechanisms. These systems should be secure, such that no matter how many password “tokens” are captured, the next password in the series cannot be guessed or determined. 4.5. 
Shredder This piece of equipment is generally quite cheap, but may be necessary. There has been an amount of literature that discusses intruders “trashing”: searching through waste paper bins for snippets of information. An Incident Response Team will be given many pieces of sensitive information. This information may not necessarily be how to break into a computer, but it might mention a sensitive site by name. The negative press generated by such a leak of information could spell the end for an IRT. Confidentiality also means destroying information when it is no longer required; hard copy included. 4.6. Safe Much attention is given to the logical security of the IRT, but what about the physical security? The budget should contain provisions for a fire-proof safe. This safe could store for example: encryption keys. If all data on the disks is stored encrypted, then theft of the disks will not reveal any sensitive information; backup media. This prevents a thief from stealing the backup media and analysing the information stored on it. It also provides a mechanism for recovering quickly from a disaster such as a fire. Such disasters may be the result of a malicious act directed at the IRT as a result of their activities. 4.7. Backup Media It is important to establish and maintain a system backup strategy. This requirement is not unique to IRTs, but should be practiced by any organisation that cannot afford extended down time or loss of data. Backup media could consist of tapes, shadowed disks, or other removable media. Provisions should be made for the secure storage of this media. If all backup media is stored on-site, then a disaster may result in the total loss of all information. Some form of secure off-site storage is required. This could include: a fire-proof safe on the premises as discussed; a trusted organisation that provides such a service; encrypting all data prior to backup. 
Note that any data sent off-site for storage should be afforded the same level of security as the on-line data. It should be protected from unauthorised disclosure, modification, or loss. 4.8. Information Security The information stored on the secure systems requires extreme protection from unauthorised disclosure and modification. Information may be in transit on a network or phone link, stored on a disk or tape, stored on paper, or in the memories of IRT staff. A number of requirements may be placed upon the IRT for the security of that data. The requirements may stem from classification issues, legal issues, or a need for privacy of affected sites (policies of the IRT). Regardless of the specialist requirements for security, a number of common elements of security are required. The first is physical security. The premises that house the IRT must be physically secured from intrusion by unauthorised personnel. This could include mechanisms like physical and electronic locks, intrusion detectors and alarms, security guards, and security badges. The premises must be able to be accessed by IRT staff on a 24 hour, 7 day a week basis. As indicated previously, some form of network router will be required to connect in to the Internet. In addition, traffic filtering will be essential to prevent sensitive IRT data from being sniffed on other parts of the network. This filtering should ideally be performed The filter should only allow connections into the secured subnet from a small subset of trusted hosts. Provision of a filtering mechanism to prevent unauthorised connections does not diminish the responsibility of the IRT for their own host security. Careful attention must be paid to the security of hosts on the secure subnet in case access is gained to that network by an intruder. This includes items such as integrity checks, log file and system audits, password use, security enhancement and assessment tools, and data encryption. 
The secured systems must be able to exist in an environment where any connections may be made from the Internet to them. It is important to ensure that resources are devoted to this task, and the task of effective system administration. It is very easy to become complacent about security when dealing with it daily. It may be advantageous to “classify” the data stored within the IRT according to some set criteria, and then define how each category of data should be handled. Possible examples are: public may be transmitted in plain text, and released to any person; private: information that is private between a constituent and the IRT. This may include incident data, site contacts, and equipment lists. This data will only be sent to registered security contacts for that organisation. No indication of the presence of this data will be made to other people. Data may be transmitted in plain text if acceptable to the constituent. Encryption may be used otherwise. sensitive: this may include information on how to exploit an old vulnerability. Whilst some of this information is in the public arena, it may not be widely known. Release of this information will result in increased incident loads. This data may be shared with trusted constituents and other IRTs on a needs basis. It must be transmitted encrypted. highly sensitive: this may include information about sensitive constituent sites (such as the military), or exploitation information about current vulnerabilities that have no solution yet. This data must be stored and transmitted encrypted. It may be shared with other IRTs on a needs basis. classified: this may include data that is otherwise classified by other organisations such as the military or law enforcement. Storage and handling of this data will only be performed by security cleared personnel according to the requirements of local laws. 
Mechanisms must be established for communicating with other organisations (such as constituents, law enforcement, or other IRTs) using data encryption. The tools and procedures will vary depending on local conditions at each end of the communication. As a baseline, DES encrypted, uuencoded text is acceptable to most places in the world. The issue of key management should be addressed and resolved satisfactorily for encryption to work successfully. There may be requirements for speech encryption or scramblers with some constituents or law enforcement. These issues are local to the community and should be considered as part of the overall data security policy. 4.8.1. Data Origin Authentication and Integrity An issue that is developing on the network is the ability of anyone to forge news and mail articles. This may throw doubt on the integrity of information released from the IRT. In general, any constituent can verify the origin and content of the information by contacting the IRT through some other mechanism (such as the telephone). This may become unworkable if the number of constituents is large. Mechanisms should be investigated which allows each constituent to verify the content and origin of the information released from the IRT. These techniques may include Certificates and Digital Signatures, PGP, and PEM. Current ITAR regulations on the exportability of encryption software make the interoperability of the United States and Canada with the rest of the world extremely difficult. 4.8.2. Trusted Staff A major issue for all Incident Response Teams is the selection of staff. It may be felt that one of the most important attributes of a staff member is their experience in computer security. However, ultimately the success of the team could be undermined if that team member exhibits behaviours that undermined the trust of the constituency in the team. 
The author’s personal opinions are that the following attributes are required in any team members, and are placed in priority ordering: integrity. A lack of staff integrity will result in the ultimate demise of the IRT through mistrust. This integrity may require a staff member to abide by the policies of the team, even if they do not agree with them; operating system administration experience. The team members must have significant experience in managing computer systems, and preferably ones that are in a large network. This is the type of person that the team is trying to assist, and experience in this area will help the team understand the problems faced by the constituency; programming experience. If the team member cannot program quickly and effectively, and cannot read source code quickly and gain an understanding of a program, then their ability to analyse new security incidents will be limited. In many circumstances, the analysis must be done quickly and effectively. There is little room for learning, and little room for error; communication skills. The team member will be required to present talks and write papers in their role of educating the constituency. If this cannot be achieved effectively, then the security incidents will continue to occur. When dealing with an incident, sometimes all that is required is a “friendly ear” and some offer of advice. Effective verbal communication skills are essential when assisting sites that have experienced a compromise; security experience. Knowledge of computer security is desirable, but can ultimately be learned “on the job”. If the team member is very experienced in this area, but lacks skills in the others, then their usefulness as a team member over time will rapidly diminish. These days of “equal opportunity”, “freedom of information”, and accountability for all actions makes staff selection a complex topic. 
Staff selection criteria, job advertisements, and interview and screening techniques must be carefully addressed before employing new staff. Ultimately, the staff members are accountable for their actions. The IRT may make a requirement of their staff to sign some form of “non-disclosure” agreement that binds the staff member to their responsibilities; even after they have left the team. These responsibilities may include maintaining the confidentiality of vulnerability information, site contacts, incident details, and information relating to the IRT itself. Whilst disciplinary action (such as dismissal) could be taken against staff while they are still employed for breaching this agreement, generally the only possible action that can be taken after the staff member has left the team is some form of legal action. Depending on the constituency of the IRT and the type of information that the IRT will be required to deal with, team members may be required to undergo some form of civil or military security clearance. This opens up a range of problems should the IRT wish to employ foreign nationals, or people with criminal histories. Where possible, the decision to obtain a security clearance should be left to the staff member, with no requirement being placed upon them to do so by the team. This may split the team in two however; equipment and staff that are security cleared, and those that are not. This must be handled on an individual basis. If the team is to store classified data (such as court evidence, military data, incident data) then there may be issues dictated by law or other convention on the storage and access to this data. Such issues may be the use of certain types of safes and locks, right through to the choice of the colour of the folders the data is stored in. These issues should be addressed on an individual basis. 
Other staff issues involve access to the premises by non-IRT staff such as cleaners, security personnel, network and system administrators, electricians, window cleaners, pest control, management, and the general public. Whilst it is usual to deny access to the public, it may be a requirement of the physical location of the team to allow access to a range of other personnel. This may be done during hours or after hours. Good practices by team members (such as locking away any sensitive data each night or when the office is unattended) will reduce the risk of allowing access to other personnel. If the team is holding classified data, there may be a requirement to obtain security clearances for other personnel, or at least have a team member present at all times that other personnel are on the premises. 5. Commencing Operations At some point, the newly formed team will transition from the development stage to the operational stage. Ideally, when this transition arrives, there should be very little for the IRT to do except commence operations. More than likely however, the IRT will still be preparing their database, acquiring staff and equipment, training staff, and establishing telecommunications. Despite the obvious chaos behind the scenes, the IRT must present a professional and educated front to its constituents from the outset. A set time must be determined to commence IRT operations. When this time arrives, an announcement should be made in the form of a press release, and that announcement should be transmitted as widely as possible. If the IRT then waits at this point for the calls to start coming in, then they have immediately failed. The IRT needs to identify its constituency and then go out and “sell” its services. Make the constituency aware of what the IRT intends to do, and why it is doing it. Educate the constituency on the policies, goals, and mission of the IRT. 
Tell them why the IRT was formed, and why it is important that the IRT coordinate incidents for the community. It is at this point that the IRT should solicit trusted contact information from its constituents. This is discussed later. An IRT cannot exist in isolation from the rest of the world. The IRT should at this point establish communications with other existing IRTs. A form of trusted communications should be established between the IRT and its constituents, and the IRT and other IRTs. The IRT should then consider applying to become a member of FIRST – the Forum of Incident Response and Security Teams. This application will take some time to achieve, and the IRT should communicate with other IRTs about the benefits of becoming a member of FIRST, and the procedures required to join. 6. Operations – Learning Commencing the operation of an Incident Response Team is a major task. Initially, the staff that are selected to operate the team may have little experience in incident response, or even security issues in general. A steep learning curve is experienced, with obvious drawbacks along the path. The constituency will have an expectation that they are dealing with “experts” in the field. There are few people in existence who can claim that they know about every operating system, every hardware platform, every piece of third party software, every security device, every book published, and all the security implications that go along with this. It is important to recognise from the outset that the team members cannot be expected to know everything. The constituency should be advised when some particular query or situation falls outside the expertise of the IRT. In this way, there are no false expectations formed by the constituency, which ultimately leads to disappointment and mistrust in the IRT. The solution to the problem is to identify trusted members of the constituency who are experts in one field or another. 
Establish and foster those contacts, and use them when required. Be careful not to use a small set of contacts all the time, or that resource may be withdrawn from the IRT. These trusted contacts need not be given all the information relating to a query (for example, there is no need to disclose the originating site name). They should be supplied with enough information to assist with your query, and should be prepared to accept that they will not always be told the complete story. The amount of information released will depend on the type and sensitivity of the query, and the level of trust with the contact. 6.1. Report incidents Get the sites within the constituency to report all security incidents, and then analyse these incidents. Initially, this is a large task, and requires many hours. As the experience of the team increases, many incidents fall into the same generic “class” of attack or vulnerability, and analysis will proceed more quickly. Reporting all security incidents, no matter how minor, allows a central reporting facility such as the IRT to determine the “bigger picture” of security within the constituency. For example, a site detects four tftp probes that were unsuccessful. In isolation, it is rather an innocuous and poor attempt to exploit a vulnerability. However, if more than 50% of the constituency reports four tftp probes from the same site, then this represents a coordinated and determined attack. It may be that upon further analysis of this incident, other sites are identified in which this very same attack was successful without detection. Determining the scope of the security incidents within the community assists not only the IRT in justifying staff and equipment funding, but assists the constituency in justifying their security staffing levels. In general, only a small number of security incidents are ever reported. Many of the poorer attempts are repelled by a site, and ignored. 
These incidents may have only been repelled due to the ability of experienced staff to configure the systems appropriately. Therefore, the existence of those staff members can be justified since the attack was not successful. 6.2. Contact other IRTs Other IRTs in the world may be able to assist with the learning curve. In general, each IRT will have a collection of security related literature and tools that they are more than willing to disclose to new teams. Additional information that may be of use is an idea of the “state of the world” in security. How many incidents per 1,000 hosts? What platforms, operating systems, versions are experiencing the most incidents? Why? Information like this may help to channel learning efforts into more productive areas initially. 6.3. Documents and Tools The Internet has a wealth of documents and tools relating to computer security. Where does one start? A number of sources could be explored initially: archie searching; USEnet news groups like alt.security, comp.security.misc, and comp.security.unix; mailing lists such as bugtraq and firewalls; information supplied by other IRTs and constituent members; anonymous ftp areas like ftp.cert.org and ftp.auscert.org.au World Wide Web – start at www.first.org and see where that leads! 6.4. Library of Reference Material There are many books written on the subjects of computer, network, and data security. Over time, the team should build up a library of reference material that can be referred to during incident and vulnerability investigation. Some books are better than others. Many books are reviewed and these reviews are published either in USEnet news, or in popular computing magazines and journals. 6.5. Journal Subscriptions There are also a number of journals that are devoted to security, and networking issues in general. A subscription for regular issues will ensure that no important articles are missed. 
It may be that many articles are of no immediate benefit to the team, but the background knowledge gained will assist team members in keeping up to date with current trends and technology in the security world.

6.6. Staff Training

A number of courses in security are run as commercial enterprises. These may or may not address the issues that affect an Incident Response Team. Many of these courses are aimed at commercial and government infrastructures, covering topics such as security policies, viruses, and data encryption. If staff lack the necessary system administration skills, then they should attend a course on that topic. Solid system administration skills, coupled with the latest knowledge in security, are a good recipe for protecting computer systems. If the system administration skills are poor, then the security knowledge is wasted. Ultimately, it is the experience of many IRTs that they end up presenting the courses, rather than attending them!

6.7. Visits to Existing Response Teams

A common belief among new teams is that the other IRTs will simply hand over all of their information. This is simply not true, for many reasons. The major consideration is the set of policies under which that information was obtained in the first place. IRTs have a moral obligation to protect their data, and they cannot release it to any new team that announces itself. In order to build up trust within the IRT community, face-to-face visits are required. Without knowing the members of a team on a personal basis and understanding their policies and level of integrity, it is almost impossible to exchange any information of a sensitive nature. Trust takes time to build, and can be destroyed in seconds. If the new IRT intends to contribute to the cooperation of IRTs around the globe, some effort is required to establish credibility over time.

7.
Operations – Reactive

The highest priority task for an IRT is to respond to incidents as they occur. This may involve working with the affected site to determine the cause of the incident and help them become secure again, or it may involve finding a solution to a vulnerability that is being actively exploited to compromise many sites. Reactive response is always done on a priority basis: where are the team's resources most effectively utilised?

7.1. Routine

The day-to-day operations of an IRT are very difficult to define, as much of the work is event driven. A single telephone call or electronic mail message can change the structure of an entire week! There should always be a number of background activities occurring, and these should be scheduled for attention from time to time by all staff members. This may include reading journals, papers, and books, or auditing the security of the team's own systems and networks.

7.2. Operations Manual

Many of the team's operations should be standardised and documented, so that team members can make informed and appropriate decisions in the majority of cases. All standard operations should be documented, and reviewed from time to time. The operations manual needs to address at least the following issues:

- handling vulnerabilities;
- creating advisories;
- handling difficult contacts;
- handling unauthenticated callers;
- information disclosure;
- coordinating with other IRTs;
- systems management;
- backup strategy;
- disaster recovery;
- off-site operations.

7.3. Administrative

There are many day-to-day administrative issues that must be addressed. These may include periodic reporting to management and the constituency about incident levels and intelligence on attacks, plus some form of measurement of progress and success. A raw count of incidents is not a reliable measure of success; overall, success can be gauged from the severity, number, and type of incidents being observed.
IRT staff must not be left on incident response for long periods, or they will become detached from the changing world of security and system administration. Staff should be given the opportunity from time to time to pursue other interests, allowing a break from incident response as well as providing a growth path for their professional development.

Classified data, and access to it by security-cleared staff, may require periodic activities such as auditing the procedures used to access the data, submitting periodic report forms to an agency, or changing locks and keys. These requirements should be adhered to as instructed by local law and convention. If staff have been required to sign a non-disclosure agreement or acceptable use policy document, then they should be reminded of their responsibilities from time to time. It is the role of management to ensure that all staff abide by these agreements. If a staff member leaves the team, procedures should be in place to terminate their system access, change passwords, change encryption keys, retrieve physical and logical access devices, and debrief the departing staff member on their continued obligation to maintain the confidentiality and privacy of the information.

Policies should be put into place to increase the physical security of the premises. This could include items such as storing encryption keys in a safe, locking all documents away at night or when the office is unattended, locking computer screens to prevent access to systems from accounts left logged in, and locking backup media away. Ideally, encryption keys, physical keys, passwords, safe access, and so on should be granted on a "need to know" basis. This may prevent non-security-cleared staff from being exposed to classified material, and will increase the auditability of all staff actions. Passwords should be changed regularly, and not reused.

If the team is to be available on a 24 hour basis, then there must be some form of roster to rotate the responsibility of answering emergency calls.
If a call is received and responded to after hours, a decision must be made whether there is assistance that can be provided immediately, or whether the caller can wait and contact the IRT during office hours. Often, if a caller has placed an emergency call to the IRT out of business hours, they perceive it to be an emergency. In the majority of these calls, the caller is seeking assistance to give them confidence in their immediate actions. The caller should be reassured as to the best course of action to take, and then left to perform those actions. They can then be followed up during business hours with more detailed information about a solution to their problem. It is rare that an emergency call cannot be dealt with reasonably quickly and deferred to the next day. There is a small subset of emergency calls that require detailed analysis of a problem, development of a solution, and communication of that solution back to the constituency after hours. In these cases, there will always be another IRT somewhere on the globe whose team members are awake, alert, and able to provide assistance.

Many smaller teams may find that their management structure is actually larger than the team itself. When several managers are involved, it is important to have a clear understanding of the chain of command: which manager is responsible for which decisions, and what the correct reporting structure is if a manager is unavailable. "Too many chiefs" may cause the team to become fragmented and confused. Just as the team must present a unified front to its constituency, so must the management present a unified front to the IRT. The management structure should be based on the size of the team and the structure of the constituency it serves. There may be cause to establish a management structure based upon the types of service provided to the constituency.
This may vary depending on the service, so that the management needs of each service are best met.

7.4. Contacts

If the IRT has a reasonably small and well defined constituency, it is highly advantageous to build a database of contacts within each of the constituent sites. These contacts should be verified independently as the database is formed, so that trust can be placed in the integrity of that information. When soliciting this contact information, it may be best to approach the CEO or head of each constituent site and ask them to nominate their appointed security contact. In this way, the IRT will be dealing with contacts who have the knowledge and authority to act on situations. Useful contact information includes:

- names (get more than one in case the first is unavailable);
- addresses;
- organisations;
- main switchboard telephone numbers;
- contact office telephone numbers;
- electronic mail addresses;
- IP address ranges;
- a list of hardware and software in use;
- after hours contact points such as home phones, pagers, and mobiles.

It is best to make the after hours contact information optional, as some sites may choose not to provide it. If the IRT does hold such contact information, it is able to contact sites after hours and warn them of potential vulnerabilities and threats, rather than waiting until the next working day. If the IRT does not have registered contact information for a site, it can use public information to track down a contact. This may include NIC databases, telephone books, electronic mail lists, or word of mouth.

It is also advantageous to seek the necessary permissions for information disclosure prior to any incidents occurring. This can be done as part of the registration process. When investigating incidents, it may be difficult not to reveal the affected system's name to the other party. As well, it may be necessary to work through another IRT to achieve resolution.
In some countries, an individual who has knowledge of a crime being committed may be legally bound to advise the law enforcement authorities. It is advantageous to seek prior approval from constituents to pass only the necessary data on to third parties when assisting with the investigation of incidents. These third parties include other sites, other IRTs, and law enforcement. The information may include hostnames and connection records, site contact information (excluding after hours details), and site names.

7.5. Unsolicited/Unauthenticated Calls

If the IRT receives a call from a person claiming to belong to a site and requesting information about a particular incident, what is the appropriate action to take? The caller may be the intruder, seeking information about how much is known about the incident. It is important to have clearly defined policies and procedures for dealing with unauthenticated calls and messages. Messages are easier to handle, as they need not be responded to immediately, and further confirmation may be sought through some other means. Phone conversations are more difficult to defer. It is possible to authenticate the caller by knowing them personally, by asking them for particular information that only they could know (perhaps part of a previous phone conversation), or by terminating the call and re-establishing it through the registered contact point for that person. Establishing the caller's identity is important if information is not to be leaked to unauthorised people.

Another mechanism for dealing with unauthenticated calls is to determine in advance exactly what information is deemed to be public. In that case, all staff will be able to decide whether the caller's request can be satisfied by releasing public information (such as a request for an advisory, or information about the team). Information that is not public should not be released to any unauthenticated person.
Neither the existence of such non-public information, nor any indication that the team is likely to hold it, should be revealed.

7.6. Point

It is desirable to establish a roster of staff who act as the focus for all information flow into and out of the IRT. This prevents confusion resulting from misunderstandings as to which team member is performing which function. All information coming into the team must be logged and replied to. All calls into the team should be answered by a single person. This way, a professional and unified front is shown to the constituency. Failure to establish this regime may result in messages being lost and going unanswered, calls being taken and not logged, updated information on incidents not being passed back to the constituents, or messages being answered differently by two different team members. Some IRTs call this position "point" or "point duty" (adapted from the original military role of being the first person in a patrol).

Staff who are not rostered on point duty should be shielded from the high interrupt load that this task entails, and left to concentrate on other issues such as education and tool development. This allows staff to develop professionally, and gives them a rest from the pressure of dealing with security incidents. Staff not on point duty may still be called upon in times of emergency to assist with problems if required. The tasks of point duty may include:

- answering and logging all incoming telephone calls and faxes;
- answering and logging all incoming electronic mail;
- reviewing all outstanding incidents for action;
- updating the incident database with new information;
- administrative duties such as backups.

When answering calls or electronic mail, a number of items of information should be sought immediately. This will reduce the overhead of dealing with the incident later.
This information includes:

- primary contact details: name, telephone, fax, electronic mail;
- secondary contacts in case the primary is not available;
- affected machine names and addresses;
- how the incident was discovered;
- the source of the incident, if known;
- action taken to resolve the incident, including other sites contacted;
- action the site wishes the IRT to perform.

Messages should be responded to in real time wherever possible. This may mean sending a short reply thanking the person for the message, and advising that it will be looked at within a set period of time. This gives the person confidence that the IRT will address the issue, and indicates how long they should wait before taking further action.

Communicating expectations to the constituency is extremely important. They must understand exactly what will be done with information, when, and by whom. This removes any misunderstandings, and establishes more trust in the professionalism of the IRT. The expectations that should be made clear are:

- what the IRT will do with this information;
- when it will be done;
- whether it will be forgotten;
- who will follow up on the information;
- whether the information will be passed to other people;
- what the reporting site should do at this point.

Whenever any action is taken on an incident, the reporting site and any other affected sites should be kept informed of any progress or changes in the incident's status. In this way, the sites are reassured that their needs are being addressed by the IRT, and that information reported to the IRT has received attention. Sites are then more likely to continue reporting information, which is important to the IRT's operations.

7.7. Incident Numbers and Database

When incidents are reported, they should be logged into an incident database. This database should be used to collate all information relating to an incident in one place, for all IRT staff to view and act upon.
This information should include all electronic mail, telephone conversations, facsimile transmissions, and IRT staff notes. All staff must be able to update the incident database as new information is received and the status of an incident changes. They must be able to determine the status of an incident with a high degree of confidence, and each member must be able to arrive at the same decision as to the required action to be taken as a result of the stored incident information. The integrity of the database should be protected from multiple, simultaneous updates. The database should be searchable on a number of criteria, including site, dates, vulnerable software, methods of intrusion, geographical location, IP address, incident status, duration of incident, and any other criteria required by the IRT.

To distinguish individual incidents uniquely, some form of incident identification is required. Protecting the privacy of affected sites is paramount, so some form of numbering is appropriate. These incident numbers should not contain any information about the affected sites, nor about the severity of the incident. Some teams use random numbers. The SERT team uses numbers calculated from the date and time the incident was first logged: the incident number is a 10 digit number of the form YYMMDDHHMM. The structure of these numbers allows for automated analysis and summary reporting. It may, however, reveal some information about the incident if it is known that a site experienced an incident at a particular time.

The database should be used to generate statistics on the incidents. These statistics should be periodically reported back to the constituency and the IRT management structure.
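The SERT numbering scheme described above is simple to implement. The sketch below is illustrative (the function name is an assumption); it shows both the convenience of the format and the property the text warns about, namely that the number reveals when the incident was logged.

```python
from datetime import datetime

def incident_number(logged_at):
    """Derive a 10-digit incident number (YYMMDDHHMM) from the date and
    time the incident was first logged, as in the SERT scheme. The number
    carries no site or severity information, but it does reveal the
    logging time, which may hint at which site was affected."""
    return logged_at.strftime("%y%m%d%H%M")

incident_number(datetime(1994, 7, 21, 9, 30))  # '9407210930'
```

Because the number is just a timestamp, summary reports (incidents per month, per year) can be produced by sorting and grouping on prefixes of the number, which is the automated analysis the scheme was designed to allow.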
Reported statistics could include:

- the number of incidents, open and closed;
- the number of calls for help;
- the number of queries received;
- the number of phone calls received;
- the number of electronic mail messages processed;
- the time an incident requires before closure;
- a breakdown of the severity of incidents;
- analysis of incident trends;
- other statistics as required by the constituency or management.

7.8. Hot Lists and Refer Again

As incidents are logged, certain actions are required, such as contacting affected sites, contacting vendors, or seeking further information from the affected site. This information may take some time to retrieve. A useful tool is the ability to place an incident on "hold" for a set period, and have the incident brought back to the attention of IRT staff (the person on point duty) after that period expires. In this way, incidents are not forgotten, and constituent sites are followed up to ensure that they, too, do not forget to perform any actions requested of them.

8. Operations – Proactive

What does taking a "proactive" role mean? An Incident Response Team may find that it does not have sufficient resources to deal with any more activity than reacting to incidents as they occur: all of its time is spent communicating with affected sites, assisting them to recover, and collating the data. What is required is a way of analysing the incidents, identifying patterns and trends, gathering intelligence on the likely next wave of attacks, and working to prevent those attacks before they reach large proportions. Ideally, a security vulnerability is identified and fixed prior to any exploitation of it. This has the enormously positive effect of reducing the incident load and increasing the security of the constituency, and of the Internet as a whole. Vulnerability analysis can be a difficult task, and should not be undertaken lightly.
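The "hold and refer again" mechanism of section 7.8 can be sketched as a small queue keyed by expiry date. This is a minimal illustration, not an actual IRT tool; the class and method names are assumptions.

```python
from datetime import date, timedelta

class ReferAgainList:
    """Minimal sketch of a 'hot list': incidents placed on hold for a
    set period, then brought back to the attention of the person on
    point duty once that period expires."""

    def __init__(self):
        self._held = {}  # incident number -> date it falls due again

    def hold(self, incident, days, today):
        """Place an incident on hold for the given number of days."""
        self._held[incident] = today + timedelta(days=days)

    def due(self, today):
        """Return (and release) incidents whose hold period has expired."""
        expired = [i for i, d in self._held.items() if d <= today]
        for i in expired:
            del self._held[i]
        return expired
```

Run against the point-duty roster each morning, `due()` surfaces every incident whose follow-up date has arrived, so nothing is forgotten even across staff rotations.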
Vulnerability analysis requires the skill to read source code very quickly, to gain an understanding of the problem, and to appreciate the complicated subtleties of possible solutions. Many solutions are less than optimal due to the large number of platforms and operating systems with which they must be compatible. Experience with multiple platforms is a major bonus when examining vulnerabilities that affect more than one type of system.

Some vulnerabilities occur in vendor-controlled modules. Many of these modules are shipped to customers (IRTs included) in binary form only. This may prevent the IRT from examining and testing the vulnerability. In this case, effective communications must be established with the correct team within the vendor to work towards a solution. Vendors have a responsibility to test their solutions and distribute them to all of their customers (not just the ones connected to the Internet). Documentation must be produced, media copied and distributed, and customer support centres advised and trained. This takes time! Some vendors are now releasing their security patches to the Internet as soon as they become available. This has the positive effect that sites connected to the Internet may fetch the patches quickly. Many vendors now make their security-related patches available to anyone for free (without a software maintenance contract).

If the vulnerability affects more than one vendor (since many of the vendors' operating systems have been derived from the same original source code tree), then the problem of coordinating fixes becomes extremely complex. Releasing information about a vulnerability and solution for only a subset of vendors may reveal that the vulnerability exists for other vendors that do not yet have a solution. Withholding information about the vulnerability until all vendors have a solution allows more time for the vulnerability to be exploited.
Ultimately, it may be in the vendors' best interests to make a small subset of their source code available to (at least) the IRTs. This source code should include the modules that run with privileges on their operating systems. This assists the vendors, as they then have access to a large number of security specialists, all working to the vendor's benefit! The intruders often already have source code that they are examining.

8.1. Proactive Roles to Prevent Incidents

Having decided to take a proactive role with vulnerabilities, the IRT must decide which activities to expend its resources on. Vulnerabilities are usually discovered by a constituent site, either while analysing an incident, or by accident. If the vulnerability was discovered through the analysis of an incident, then the intruders must already know the information, and are exploiting it. This reduces the amount of time that the IRTs and vendors can spend working on the problem to determine an optimal solution. If the discovery was by accident, then as long as the IRT can rely on the integrity of the constituent, there may be sufficient time to analyse the problem and ensure a complete solution. However, any vulnerability that can be discovered by accident at one site can easily be discovered by another site.

Solutions created under pressure have an extreme potential to contain other related vulnerabilities, or to cause some other functionality to fail. If a section of software is complex enough that the original programmer made a mistake, then it is still complex enough that a code maintainer will also make a mistake.
When a vulnerability is reported, it is important to determine a number of basic facts:

- Is this vulnerability easy to reproduce?
- Does it affect different versions of this software?
- What previous level of access is required to exploit it?
- Does it grant privileged access?
- Does it affect other vendors' versions?
- Is it being actively exploited?
- Can this (or other) vulnerabilities be further exploited to gain privileged access?
- How many systems within the constituency and the Internet are affected?

This gives some form of metric as to the severity of the vulnerability, and the resources required to deal with it effectively. A number of courses of action are open to the IRT:

- The IRT should report the vulnerability to the vendor or vendors. The responses will require coordination to ensure uniform release of information to the community.
- The IRT may actively examine the source code to assist with understanding and fixing the vulnerability. The IRT members will require exceptional programming skills to perform this task effectively.
- The IRT may become involved with testing patched software to determine that the solution removes the vulnerability, does not introduce new vulnerabilities, and does not cause any functionality to fail.

Many IRTs do not have sufficient resources to pursue this type of activity on a full-time basis. Forming trust relationships with other IRTs and requesting their assistance is one mechanism for combining many skilled personnel onto one problem. Many solutions are not determined within a few hours; some may take several days. Since the Internet is a global network, it may be advantageous to establish relationships with other international IRTs in different timezones. The vulnerability and its current state of analysis can then be passed from one IRT to the next in a global chain, following the daylight and office hours around the world.

8.2.
Education and Training

An extremely important role for the IRT is to educate the community on issues relating to security. This may be done in several ways, depending on the requirements of the constituency.

8.2.1. Advisories

An advisory is generally a document that raises a single issue about computer security. Advisories are usually long-lived documents, and may be referred to from time to time. Examples of content include the announcement of a vulnerability and its solution, a suggestion relating to some administrative matter (such as the use of login banners), or the announcement of tool kits. Advisories that announce a vulnerability should contain information on the scope of the vulnerability (the versions and platforms affected), a description of the severity of the problem (including any known exploitation), and one or more solutions. It is then up to the constituent to decide the most appropriate solution to apply in their situation.

8.2.2. Conference Presentations

Conference presentations are a mechanism for discussing the latest research or the latest trends in computer security. This is a good forum for relating back to the constituency information that affects them directly, such as the number and severity of incidents, and general trends that have been identified.

8.2.3. Workshop Presentations

Workshops may take the form of a conference-style presentation, or may be more hands-on. Hands-on security workshops are an effective teaching aid for training new system administrators in the techniques required to monitor and audit their systems. This requires a lot of preparation time and resources (a laboratory full of systems).

8.2.4. Panel Sessions

This type of session allows several people of differing experience and focus to come together and provide a session of much broader content. It is usually an interactive session, with comments and questions invited from the audience. It places the security professionals within reach of the constituency.
This is important, as the IRT must always maintain contact with its constituency.

8.2.5. Journal Articles

Formal papers may be written and published in journals. Less formal papers may be published in magazines and editorials. These papers should always be made available on the Internet, provided this does not breach copyright.

8.2.6. Exercises

This is a concept that was developed by SERT, but has not been actively employed yet. A "security exercise" was designed to be a short, 10 to 15 minute activity that increases the security of the computer systems by a small amount. It was felt that the constituency contained a wide range of experience and expertise among its system administrators, and that some of the basic skills of system administration and security auditing could be steadily improved. Many system administrators are too busy to attend lengthy courses. The security exercise was, in essence, a correspondence course without assessment and without lengthy study. An example of a security exercise might be to request system administrators to examine one day's system log files: any lines in the log file that are not understood should be investigated and researched. These exercises would be issued regularly.

8.2.7. Book Reviews

Security is a rapidly growing topic. Many books are appearing, some better than others. It is impossible for each member of an IRT to read all the books and understand their content. The IRT must choose a subset of the available literature for its library, and the chosen books must suit the needs of the IRT and the constituency. This can be determined prior to purchase by reading book reviews. If the IRT reads a book that it feels is of benefit to the wider community, then it should make a book review available to the constituency and the Internet.

8.2.8. Courses

Traditional education involves classrooms, lectures, and tutorials. This is still an effective form of educating the constituency.
Courses may be developed and run at regular intervals, either at the home base of the IRT or within the constituent sites. These courses may be presented as a paid service, which covers the cost of preparation, staff time, and travel. As further advances are made in multimedia, it will not be long before courses that are accessed through the Internet start to appear.

8.2.9. Security Audits and On-site Consulting

Many IRTs are requested to provide on-site consulting and security audits. This may range from policy formulation, through examining procedures and suggesting improvements, to acting as a "tiger team" that actively tries to break into the site. Tiger teams are in general not a good idea: there are legal implications in actively trying to break into a computer system, and it may reveal sensitive exploitation details to the general public.

8.2.10. Goals

The goal of the education process is to raise the community's awareness of security. It is the author's experience that the majority of incidents occur due to poor system configuration and poor system management. A competent, educated, and diligent system administrator has a much better chance of defending against intruders, and of detecting an intrusion quickly when it does occur, thereby reducing the severity and scope of the incident. The education role must give sufficient information to all system administrators to raise their awareness of security issues. It may involve discussing new tools and techniques, highlighting when new versions of software fix vulnerabilities, describing methods of attack used by intruders, or assisting in resolving local legal issues.

The community must be made aware that security is a total community responsibility. One vulnerable site may put the entire community at risk. "I don't need a good password because all I ever do is word processing" is an attitude that requires modification: once such an account is compromised, it provides a stepping stone into the community.
Step by step, the intruder may steadily compromise systems. Denying the intruder that initial foothold into the network prevents these attacks.

8.3. Research and Development

The IRT may choose to perform active research and development with the aim of providing tools and techniques that improve security. This is especially true of IRTs that are based at research or educational institutions. Many excellent tools have been developed which are now in common use; without these tools, many more systems would be compromised. Research and development is extremely important, but must be adequately funded to achieve any results.

Many more tools are required that assist users who are not computer literate. As the cost of computing decreases, more inexperienced system administrators connect their own machines to the network. Configuration tools should make decisions on behalf of the system administrator, and set up sensible default configurations that are secure as well as usable. Research may also take the form of analysing coding structures, developing tool kits for programmers to use, writing educational material, or developing new ways for information to be processed, presented, or configured.

9. Operations – Off-site

As indicated before, Incident Response Team staff will be required to operate outside the secure environment from time to time. This may be as a result of visiting another site to assist them, attending a conference or workshop, or operating after hours. If access to the secured network is to be granted to team members, then they must be made aware of the possibility of trojan horses and network sniffers operating in the network. This may result from using equipment administered by people other than the IRT. Some form of non-replayable authentication sequence is required. This may take the form of one-time password generators, software systems such as S/Key, or some other locally developed mechanism.
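The idea behind S/Key-style one-time passwords is a hash chain: the verifier stores only the last link, each login reveals the preceding link, and a captured token cannot be used to derive any earlier one. The sketch below is illustrative only; it uses SHA-256 for clarity (the original S/Key used MD4), and the function names are assumptions.

```python
import hashlib

def hash_chain(secret, n):
    """Build a one-time password chain by repeatedly hashing a secret.
    The verifier stores only the final link, links[-1]; the user spends
    links[-2], links[-3], ... one per login, in reverse order."""
    links = []
    h = secret.encode()
    for _ in range(n):
        h = hashlib.sha256(h).digest()
        links.append(h)
    return links

def verify(presented, stored):
    """A presented token is valid if it hashes to the stored value. On
    success the verifier replaces the stored value with the presented
    token, so each token works exactly once."""
    return hashlib.sha256(presented).digest() == stored
```

Capturing a token on a sniffed line gains the intruder nothing: replaying it fails once the stored value has advanced, and computing the next token in the series would require inverting the hash.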
These systems should be secure in the sense that no matter how many password "tokens" are captured, the next password in the series cannot be guessed or determined. Since computer incidents may occur 24 hours a day, 7 days a week, it is important that team members be able to operate from a number of bases, including their private homes. This reduces the impact of incidents on team members' private lives by not requiring them to be physically located on the premises during an investigation. This may require extra equipment such as secondary telephone lines (allowing data access to the systems simultaneously with voice access), terminal equipment, modems, pagers, mobile phones, and so on. If staff are to be on call 24 hours a day, then they require mechanisms for making long distance telephone calls without incurring a charge to the premises they are calling from. This alleviates the problem of being at a friend's place when required to make several international phone calls. A mobile phone removes this requirement, but may not possess adequate security. All team members should be contactable during emergency situations. This may require home telephone numbers or the use of pagers. If a team member knows they cannot be contacted (for example, on a boat fishing!), the other team members should be made aware of this. On-call staff members must be able to be contacted by the constituency and other IRTs without intruding on the privacy of those members. In addition, it should be possible to rotate the on-call status among staff members without adjusting the way the community contacts the team. This may be achieved through call forwarding, pagers, or staff to answer the central phone 24 hours a day. One issue often overlooked is the ability to travel into the work premises should it be required. It is not possible to ask team members to dedicate their lives to the IRT 24 hours a day.
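The "cannot guess the next password" property described above is the defining feature of hash-chain schemes like S/Key. The following is a minimal illustrative sketch only (function and variable names are the author's of this sketch, not from any S/Key implementation), showing why captured tokens do not reveal future ones: each revealed password is the hash of the next one to be used, so going "backwards" along the chain would require inverting the hash function.

```python
import hashlib

def hash_chain(seed: bytes, n: int) -> list:
    """Build an S/Key-style chain: values[i] = H applied (i+1) times to seed.

    The server stores only the last value; the user reveals passwords in
    reverse order. Capturing a revealed password does not help an attacker,
    because the next password to be used is its hash preimage.
    """
    values = []
    h = seed
    for _ in range(n):
        h = hashlib.sha256(h).digest()
        values.append(h)
    return values

def verify(candidate: bytes, stored: bytes) -> bool:
    """Server check: hashing the offered password once must yield the
    currently stored value (which is then replaced by the candidate)."""
    return hashlib.sha256(candidate).digest() == stored

# Usage: the server initially stores chain[-1]; the first login presents chain[-2].
chain = hash_chain(b"example seed", 100)
assert verify(chain[-2], chain[-1])      # correct next token is accepted
assert not verify(chain[-3], chain[-1])  # an out-of-order token is rejected
```

Real systems (S/Key, OTP per RFC 2289) add details such as word-encoded tokens and per-user salts; the chain structure is the essential idea.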
People may, for example, be attending a celebration at which some amount of alcohol has been consumed. If an incident occurs at this point, the team member may not be able to drive into the office. Mechanisms should be made available to allow for the use of taxis or other arrangements in unusual circumstances. During large conferences (particularly ones hosted by the sponsoring organisation), a significant number of the team may be required to attend the conference. If the team is small, this could easily account for all members. Plans and equipment should be put in place to allow the entire operation to be moved between cities. This may involve telephone access, the ability for the team to be contacted, and access to the secure computer systems. This setup could also be used in emergencies where the office is inaccessible (for example, a bomb threat during an incident). Response time to incidents may be critical. Careful thought given to off-site operations may significantly reduce the response time to an incident, and allow many team members to contribute effort. This is especially important when the incident is large and complex.

10. Working with the Larger Community

Ultimately, the aim of most Incident Response Teams is to reduce the number and severity of incidents. This cannot be effectively achieved by sitting in the office waiting for phone calls advising of a new incident. Only through education and understanding of security issues can a reduction in incidents be achieved. The education role is one of the most important, and can take a significant amount of the IRT's resources. However, successful ventures in this area will ultimately have a positive effect on the rest of the IRT by reducing the number and severity of incidents to respond to. Education may be achieved in several ways.
With each incident, some small amount of extra effort should be spent in increasing the knowledge of the affected system administrator. Introduce them to a new security tool, or work with them until they completely understand why the incident occurred and how to prevent it happening again. This helps one system administrator. Then analyse the incident. Why did it occur? Was it inexperience, or is it a general problem for the constituency? If the wider community may benefit, then spend more effort in designing an "education package" that can be given to the rest of the constituency. This package need not explain who was affected by the vulnerability, nor even how to actively exploit it. If the constituency trusts the IRT, then they will act on the information and seek independent verification later. The package should contain a number of items:

- A description of where the problem lies. This should include the affected versions where possible, as not all versions may be affected.
- A description of the severity of the problem. If the problem can be used to gain privileged access, then it should be acted upon quickly.
- An idea of how widely this information is distributed. If it is well known and currently being actively exploited, then the constituency should act quickly to resolve it.
- A solution to the problem. Sometimes the solution is not optimal because the vulnerability affects more than one platform. Solutions such as "disable the service" may be the only option if no adequate solution can be found quickly. Include a description of the impact of applying each solution. The decision of which solution to adopt should be made by the constituents, not the IRT: each site knows its own risks and will act accordingly. Solutions such as "disconnect from the network" are far more severe since, in general, the final solution will be distributed through the network.
A better solution in these circumstances might be to filter all but trusted sites and the IRT until further notice. Make it a policy of the team never to post information about a problem without also posting a solution: posting a problem alone helps no one. In general, the information package that is released will become public information. It may be challenged in the future, and the team must be able to defend it. Check each statement for truthfulness, and act to the best of the team's ability in the present situation. Many teams already release this type of information in a document called an "advisory". These advisories assist sites to increase their security, thus preventing compromises utilising the same vulnerability. Many sites will wish to know the extent of the security problem so that they can justify the required level of security staffing. Attendance at conferences and presenting papers containing statistics, trends, and future predictions provides good public relations with the constituency, as well as feedback on the situation. It is very easy for a site to become complacent about security if they believe that no incidents are occurring, when in fact many sites around them may be experiencing security incidents continually. Conferences, workshops, "birds-of-a-feather" sessions, rump sessions, panel sessions, and so on are an ideal forum for providing education on security. The number of topics that could be covered is almost limitless, including security policies, secure programming practices, good system administration skills, disaster recovery, and tool analysis. Well-presented papers and sessions will increase the constituency's respect for the professionalism of the IRT. Another forum could be a security training workshop, dedicated solely to security issues. This is a lot of work, and needs to be well organised. If the equipment can be obtained, this is the best place to organise hands-on training in configuring systems and making them more secure.
Many basic system administration skills must be learned on a running system, and production systems are not always the best platform for this. Once the basic system administration skills are covered, security tools could be installed and a demonstration of their effectiveness explored. Hands-on training is more effective than conference proceedings, advisories, or telephone calls. These workshops could be run as a charged service, helping to recover the cost of preparing them and of the equipment used. As operating systems and third-party packages, and the task of configuring them, become more complex, it is becoming increasingly difficult to state with certainty that a system is configured correctly. IRTs are well placed to contribute to the pool of available security tools. In particular, it will become more important to assist novice system administrators with basic system administration skills. Configuration tools, security assessment and enhancement tools, and a number of "wrappers" to make their use easier for a system administrator with little or no knowledge will ensure that these tools are at least applied in some minimal form, thereby increasing the security of those machines. If the tools are too difficult to use, or contain too many options, they will not be used at all. For example, SERT in conjunction with Sun Microsystems developed the Megapatch. One problem with applying security patches to SunOS was that it was difficult to determine which patches should be applied and in which order. The Megapatch is a tool that is applied to a newly installed SunOS system and applies all known security patches in the correct order. In addition, it installs and configures a number of security assessment and enhancement tools such as COPS, Tripwire, and TCP Wrapper, and enables C2 security. These security enhancements are provided with conservative initial configurations that protect the system from unauthorised intrusion.
The tool is designed to be easy for the novice system administrator to apply. Ultimately, it is the role of the IRT to become the trusted source of security information in the community. The constituents should be given the opportunity to learn that the IRT has competent and diligent staff. If the IRT indicates that a security vulnerability exists, then the constituents should be confident that the IRT has either tested the vulnerability and its solutions, or has a high degree of confidence that the information is correct. In addition, the constituents should learn that the IRT has integrity and honours the privacy of each institution.

11. Working with FIRST

The Forum of Incident Response and Security Teams (FIRST) is a collection of IRTs, vendors, and other interested parties that are working together to improve computer security. Since many IRTs are formed to cater to the particular requirements of their constituencies, they cannot effectively deal with other constituencies; this is the reason there are many IRTs. FIRST is designed to improve communication and cooperation between the IRTs and registered vendors. FIRST essentially supplies an umbrella secretariat to assist communication between all of its members. Much of the work within FIRST is done on a volunteer basis, and supplied from within the various FIRST members. FIRST provides a forum for IRTs and other security experts to discuss security vulnerabilities and cooperate to find an acceptable solution. Other information that is shared may include intelligence on methods used by intruders, warnings of security situations to be aware of, and draft advisories for review, as well as ensuring that all members see publicly released information from the wide range of sources. The benefits to be gained from membership in FIRST are directly proportional to the amount of effort that the IRT is willing to supply.

12. Conclusion

Forming an Incident Response Team in the 90s is a difficult task.
Fortunately, there are many willing individuals able to provide guidance that will help the newly formed team avoid many pitfalls. It is possible, and highly desirable, to perform much of the establishment work prior to the commencement of the team. Once operations start, the time available for forming the new team will become limited. Policies, procedures, equipment, premises, contacts, and staff should be established before commencing operations. More likely, however, is that many of these items will be missing or inadequate, and the team must struggle on as best it can while it is forming. Clear communication with the constituency will alleviate the startup problems and any confusion that might be caused by them. Obtaining good advice from other established teams and establishing good practices will make the startup of the new team far less difficult, and will take the team from strength to strength in its operations. Once the incident load increases, there will be little resource available to "redo" some aspect of the operation of the team. Getting it right the first time will remove the need to expend precious resources on fixing a problem, as well as on converting the existing procedures and data to the new operation.

12.1. Acknowledgments

What makes me an expert on this topic? Simple – I had to do it once! I could not have achieved the formation of a security IRT in Australia without the tremendous support from many individuals. Firstly, Tom Longstaff, for assisting with the ideas that are contained within this paper. We met in Pittsburgh in August 1993 and were discussing the various issues that require resolution when forming an IRT; next thing, Tom had captured all of our ideas in a set of notes that formed the basis of this paper. Moira West: if ever there was a heroine in the security field (in my opinion), it is Moira.
She has weathered my abuse, my triumphs, my disappointments, my anger, my frustration, and my humour throughout the time that SERT has been operating. Through all of this, she has provided enormous support and guidance, and for that I will always be grateful. Barbara Fraser: Barbara visited Australia for a conference just prior to our learning that our government funding request was unsuccessful. Barbara paved the way for forming the IRT in Australia. She firmly placed the idea into the minds of the people that had the power to make this happen. She showed Australia what an IRT was all about, and why Australia needed to have its own. Without that visit, Australia may not have an IRT today. IR Group in CERT: These people are modern day heroes. They wear abuse, scorn, derision, lies, and back stabbing, and still keep trying to help the very people that do this to them. Being at the forefront of this technology and procedures means that mistakes are made. The Internet community is not forgiving of mistakes. Keep your chin up guys – there are more people out there that appreciate your efforts than there are who fight you! Klaus-Peter Kossakowski: Just as SERT was commencing, Germany formed its DFNCERT team. I didn’t learn of this until August. This was the start of more work to resolve international cooperation issues. Peter has had to fight just as hard for funding as we have. He has supported the SERT team absolutely, and I look forward to further cementing our relationship with DFNCERT. I asked Peter to contribute to this panel as they have recently also been through the exercise of forming an IRT, so they are also experts! Georgia Killcrece: Although Georgia is part of CERT’s IR group, I have singled her out for agreeing to participate in this panel. Georgia knows what it is like to try and operate in a hostile environment, and her experience has helped us face our constituents with more confidence. 
Sandy Sparks: Sandy also agreed (was coerced) to be on this panel of presenters. SERT has had only minor dealings with CIAC, but has been impressed on all occasions with their integrity and professionalism. This cannot be achieved in a team without strong management, which Sandy will now educate me in! Alan Coulter, Geoffrey Dengate, John Noad: the Directors of the Computer Centres of the three cooperating Brisbane universities. Without their vision and support, the SERT team would still be a part of people's imaginations. Graham Rees: my immediate manager and good friend. He has had to tolerate and soothe my ruffled feathers when the going got tough, the budget was lean, and there was no more resource to apply to the problem – "Just do the best you can!". Graham was always willing to help and support, and that is a great boost when times get tough. Finally, Rob McMillan. Rob and I created the initial stages of SERT. Every IRT could use a person like Rob: intelligent, talented, full of integrity, full of great ideas, trustworthy, and grossly underpaid!

13. Information Sources

This section contains a number of papers, articles, security tools, and general information sources. These are not the sources of information used to create this paper, but are sources of security information that a newly forming IRT may find useful to obtain and peruse. These references have been used at different times by the author in other papers.

13.1. Papers

[Alv90] De Alvare A. M., How Crackers Crack Passwords or What Passwords to Avoid, Proceedings of the UNIX Security Workshop II, Portland, August 1990.
[BB91] den Boer B. and Bosselaers A., An Attack on the Last Two Rounds of MD4, Proceedings of the Crypto'91 conference, Santa Barbara, August 1991.
[BB93] den Boer B. and Bosselaers A., Collisions for the Compression Function of MD5, Pre-proceedings of the EUROCRYPT 93 conference, Lofthus, May 1993.
[Bis87] Bishop M., How to Write a Setuid Program, ;login, Volume 12, Number 1, January/February 1987.
[Bis92a] Bishop M., Proactive Password Checking, Proceedings of the 4th Workshop on Computer Security Incident Handling, Denver, August 1992.
[BKS90] Baran F., Kaye H., and Suarez M., Security Breaches: Five Recent Incidents at Columbia University, Proceedings of the UNIX Security Workshop II, Portland, August 1990.
[BM91] Bellovin S. and Merritt M., Limitations of the Kerberos Authentication System, Proceedings of the Winter USENIX Conference, 1991.
[Bra90] Brand R., Coping with the Threat of Computer Security Incidents: A Primer from Prevention through Recovery, CERT 0.6, June 1990.
[Bro93] Brown L., On Implementing Security Extensions to the TCP Transport Layer, Proceedings of the 16th Australian Computer Science Conference (ACSC-16), Brisbane, February 1993.
[Che92] Cheswick W., An Evening with Berferd in Which a Cracker is Lured, Endured, and Studied, Proceedings of the Winter USENIX Conference, San Francisco, January 1992.
[Cly93] Clyde R., DECnet Security (Not Necessarily an Oxymoron), Computers and Security, March 1993.
[Coh92] Cohen F., A Formal Definition of Computer Worms and Some Related Results, Computers and Security, Volume 11, Number 7, November 1992.
[Cov90] Covert J., Functional Specification for Callouts for LOGINOUT & DECnet Session, Version T1.0.0, Digital Equipment Corporation, July 1990.
[Cur90] Curry D., Improving the Security of your UNIX System, ITSTD-721-FR-90-21, SRI International, April 1990.
[Din90] Dinkel C., Secure Data Network System (SDNS) Network, Transport and Message Security Protocols, NIST, NISTIR-90/4250, March 1990.
[Edw90] Edwards B., How to Survive a Computer Disaster, Proceedings of the DECUS Symposium, August 1990.
[FIP77] Federal Information Processing Standards Publication 46, Data Encryption Standard, National Bureau of Standards, U.S. Department of Commerce, January 1977.
[HY92] Harn L. and Yang S., Group Oriented Undeniable Signature Schemes without the Assistance of a Mutually Trusted Party, Proceedings of AUSCRYPT '92, Gold Coast, December 1992.
[JM91] Janson P. and Molva R., Security in Open Networks and Distributed Systems, Computer Networks and ISDN Systems, Volume 22, Number 5, October 1991.
[KC90] Kaplan R. and Clyde R., Viruses, Worms, and Trojan Horses – Part VI: The War Continues, Proceedings of DECUS Fall 1990, Las Vegas, 1990.
[KCS90] Kohl J., Neuman B., and Steiner J., The Kerberos Network Authentication Service, MIT Project Athena, Version 5 Draft 3, October 1990.
[KK92] Koblas D. and Koblas M., SOCKS, Proceedings of the USENIX Security Symposium, 1992.
[Kle90] Klein D., "Foiling the Cracker": A Survey of, and Improvements to, Password Security, Proceedings of the UNIX Security Workshop II, Portland, August 1990.
[Kur90] Kuras J., An Expert Systems Approach to Security Inspection of UNIX, Proceedings of the UNIX Security Workshop II, Portland, August 1990.
[LAB92] Lampson B., Abadi M., Burrows M., and Wobber E., Authentication in Distributed Systems: Theory and Practice, ACM Transactions on Computer Systems, November 1992.
[Lau92] Laun R., Asymmetric User Authentication, Computers and Security, Volume 11, Number 2, April 1992.
[Law93] Lawrence L., Digital Signatures – Explanation and Usage, Computers and Security, Volume 12, Number 3, May 1993.
[LS93] Longstaff T. and Schultz E., Beyond Preliminary Analysis of the WANK and OILZ Worms: A Case Study of Malicious Code, Computers and Security, Volume 12, Number 1, February 1993.
[Mor90] Moraes M., YP is Not Secure, Security Digest, Volume 3, Issue 12, May 1990.
[RSA78] Rivest R., Shamir A., and Adleman L., A Method for Obtaining Digital Signatures and Public-key Cryptosystems, Communications of the ACM, February 1978.
[Spa88] Spafford E., The Internet Worm Program: An Analysis, Technical Report CSD-TR-823, Department of Computer Science, Purdue University, November 1988.
[Spa92] Spafford E., OPUS: Preventing Weak Password Choices, Computers and Security, May 1992.
[TAP90] Tardo J., Alagappan K., and Pitkin R., Public Key Authentication using Internet Certificates, Proceedings of the UNIX Security Workshop II, Portland, August 1990.

13.2. Books

[Arn93] Arnold N., UNIX Security: A Practical Tutorial, McGraw-Hill Inc., 1993.
[Bha93] Bhaskar K., Computer Security: Threats and Countermeasures, NCC Blackwell, 1993.
[CLS91] Caelli W., Longley D., and Shain M., Information Security Handbook, Stockton Press, 1991.
[DEC88a] Guide to DECnet-VAX Networking Version 5.0, Digital Equipment Corporation, April 1988.
[DEC88b] VMS Access Control List Editor Manual Version 5.0, Digital Equipment Corporation, April 1988.
[DEC89a] Guide to VMS System Security Version 5.2, Digital Equipment Corporation, June 1989.
[DEC89b] VAX C Run-Time Library Reference Manual Version 3.1, Digital Equipment Corporation, December 1989.
[DEC90] VMS Authorize Utility Manual Version 5.4, Digital Equipment Corporation, August 1990.
[Far91b] Farrow R., Unix System Security: How to Protect your Data and Prevent Intruders, Addison-Wesley, April 1991.
[Gro93] Grottola M., The UNIX Audit: Using UNIX to Audit UNIX, McGraw-Hill Inc., 1993.
[GS91] Garfinkel S. and Spafford G., Practical UNIX Security, O'Reilly and Associates, Inc., 1991.
[IBM89] Virtual Machine/Directory Maintenance – Operation and Use, Release 4, International Business Machines, 1989.
[MP92] Mui L. and Pearce E., X Window System Administrator's Guide, O'Reilly and Associates, Inc., 1992.
[OSI92] The OSI Security Package, OSISEC Users Manual V0.2, July 1992.
[SS94] Shaffer S. and Simon A., Network Security, AP Professional, 1994.
[Ste90] Stevens W., UNIX Network Programming, Prentice Hall, 1990.
[Sto89] Stoll C., The Cuckoo's Egg, Doubleday, 1989.
[Sun90a] System and Network Administration, Sun Microsystems, Revision A, March 1990.
[Sun90b] SunOS Reference Manual, Volume 1, Sun Microsystems, Revision A, March 1990.
[Sun90c] SunOS Reference Manual, Volume 2, Sun Microsystems, Revision A, March 1990.
[Tan89] Tanenbaum A., Computer Networks, Prentice-Hall International Inc., 1989.

13.3. Security Tools

[Bis92b] Bishop M., README file for passwd+, anonymous ftp from dartmouth.edu, June 1992.
[Far91a] Farmer D., README.1 file from COPS system, anonymous ftp from cert.org, November 1991.
[Goa92] Goatley H., Supervisor Reference Guide, anonymous ftp from ftp.spc.edu, October 1992.
[Hei90] Heirtzler J., shadow.howto file from shadow system, anonymous ftp from csc2.anu.edu.au, April 1990.
[Hoo90] Hoover C., README file from npasswd system, anonymous ftp from ftp.cc.utexas.edu, March 1990.
[KHW93] Karn P., Haller N., and Walden J., S/Key One Time Password System, anonymous ftp from thumper.bellcore.com, July 1993.
[KS92] Kim G. and Spafford E., README file from Tripwire system, anonymous ftp from cert.org, November 1992.
[LeF92] LeFebvre W., Restricting Network Access to System Daemons under SunOS, securelib system, anonymous ftp from eecs.nwu.edu, 1992.
[MLJ92] McCanne S., Leres C., and Jacobson V., README file from tcpdump system, anonymous ftp from ftp.ee.lbl.gov, May 1992.
[Muf92] Muffett A., "Crack Version 4.1": A Sensible Password Checker for Unix, anonymous ftp from cert.org, March 1992.
[Ney92] Ney S., README file from TAP system, anonymous ftp from ftp.cs.tu-berlin.de, March 1992.
[SSH93] Safford D., Schales D., and Hess D., Texas A&M Network Security Package Overview, anonymous ftp from sc.tamu.edu, July 1993.
[Ven92] Venema W., BLURB file from TCP Wrapper system, anonymous ftp from cert.org, June 1992.
[Zim92] Zimmermann P., README file from PGP system, anonymous ftp from ghost.dsi.unimi.it, November 1992.

13.4. Articles

[CER92] Computer Emergency Response Team, Internet Security for UNIX System Administrators, presented at AARNet Networkshop, December 1992.
[CER93] Computer Emergency Response Team Advisory 93:14, Internet Security Scanner (ISS), September 1993.
[Hey93a] Van Heyningen M., RIPEM Frequently Asked Questions, Usenet newsgroup alt.security.ripem, 31 March 1993.
[Hey93b] Van Heyningen M., RIPEM Frequently Noted Vulnerabilities, Usenet newsgroup alt.security.ripem, 31 March 1993.
[SER93] Security Emergency Response Team Advisory 93.04, Guidelines for Developing a Sensible Password Policy, June 1993.
[TIS93] TIS/PEM FAQ (Frequently Asked Questions), anonymous ftp from ftp.tis.com, June 1993.

13.5. Standards

[ISO92] International Standards Organisation, ISO 9594-8: The Directory: Authentication Framework, 1992 (also known as CCITT Recommendation X.509).
[RFC783] Sollins K., The TFTP Protocol (Revision 2), Network Working Group, RFC 783, June 1981.
[RFC1094] Sun Microsystems Inc., Network File System Protocol Specification, Network Working Group, RFC 1094, March 1989.
[RFC1319] Kaliski B., The MD2 Message-Digest Algorithm, Network Working Group, RFC 1319, April 1992.
[RFC1320] Rivest R., The MD4 Message-Digest Algorithm, Network Working Group, RFC 1320, April 1992.
[RFC1321] Rivest R., The MD5 Message-Digest Algorithm, Network Working Group, RFC 1321, April 1992.
[RFC1421] Linn J., Privacy Enhancement for Internet Electronic Mail: Part I: Message Encryption and Authentication Procedures, Network Working Group, RFC 1421, February 1993.
[RFC1422] Kent S., Privacy Enhancement for Internet Electronic Mail: Part II: Certificate-Based Key Management, Network Working Group, RFC 1422, February 1993.
[RFC1423] Balenson D., Privacy Enhancement for Internet Electronic Mail: Part III: Algorithms, Modes, and Identifiers, Network Working Group, RFC 1423, February 1993.
[RFC1424] Kaliski B., Privacy Enhancement for Internet Electronic Mail: Part IV: Key Certification and Related Services, Network Working Group, RFC 1424, February 1993.


Wannacry ransomware incident

[For a short version of this alert, please read just the THREAT and RECOMMENDED ACTION sections below]

UPDATE 1: Microsoft published a blog post that will serve as its centralised resource for these attacks [10], and has made patches available for previously unsupported systems. There is now no reason not to patch: "we made the decision to make the Security Update for platforms in custom support only, Windows XP, Windows 8, and Windows Server 2003, broadly available for download" [10].

UPDATE 2: See the APPENDIX for scripts to find vulnerable systems in your network and to identify infected systems in your network.

UPDATE 3: See the Introduction for an update on affected organisations and information on the malware's operational aspects. See the Recommended Actions section for additional information on applying IOCs.

UPDATE 4: A WannaCry in-memory key recovery for WinXP document has been released. [17]

INTRODUCTION

An ongoing, widespread ransomware worm attack is affecting organisations in approximately 150 countries. AUSCERT has not received any local reports of such attacks at this time. Confirmed reports of WannaCry infections have been received from countries in the APAC region; Indonesia is the closest such example, with healthcare organisations being targeted. Attacks have been reported against the NHS, the University of Waterloo, Nissan in the UK, the Interior Ministry, banks and railways in Russia, Telefonica users in Spain, German Rail, a mall in Singapore, and ATMs in China, among others. The attacks do not appear to target any particular industry sectors. [1, 14]

The worm component of the malware launches the EternalBlue exploit against Windows hosts vulnerable to CVE-2017-0144, achieving privilege escalation and remote code execution on the target host. The worm then proceeds to download the ransomware component. The DoublePulsar exploit is launched to install a backdoor on infected hosts, thereby gaining persistent access.
Analyses report that encrypted files are given different extensions: files are renamed with the extensions ".wnry", ".wcry", ".wncry" and ".wncrypt", likely reflecting variants of the ransomware. The ransomware targets files with the following extensions:

.123,.3dm,.3ds,.3g2,.3gp,.602,.7z,.ARC,.PAQ,.accdb,.aes,.ai,.asc,.asf,.asm,.asp,.avi,.backup,.bak, .bat,.bmp,.brd,.bz2,.cgm,.class,.cmd,.cpp,.crt,.cs,.csr,.csv,.db,.dbf,.dch,.der,.dif,.dip,.djvu,.doc,.docb, .docm,.docx,.dot,.dotm,.dotx,.dwg,.edb,.eml,.fla,.flv,.frm,.gif,.gpg,.gz,.hwp,.ibd,.iso,.jar,.java,.jpeg, .jpg,.js,.jsp,.key,.lay,.lay6,.ldf,.m3u,.m4u,.max,.mdb,.mdf,.mid,.mkv,.mml,.mov,.mp3,.mp4, .mpeg,.mpg,.msg,.myd,.myi,.nef,.odb,.odg,.odp,.ods,.odt,.onetoc2,.ost,.otg,.otp,.ots,.ott,.p12, .pas,.pdf,.pem,.pfx,.php,.pl,.png,.pot,.potm,.potx,.ppam,.pps,.ppsm,.ppsx,.ppt,.pptm,.pptx,.ps1, .psd,.pst,.rar,.raw,.rb,.rtf,.sch,.sh,.sldm,.sldx,.slk,.sln,.snt,.sql,.sqlite3,.sqlitedb,.stc,.std,.sti,.stw, .suo,.svg,.swf,.sxc,.sxd,.sxi,.sxm,.sxw,.tar,.tbk,.tgz,.tif,.tiff,.txt,.uop,.uot,.vb,.vbs,.vcd,.vdi,.vmdk, .vmx,.vob,.vsd,.vsdx,.wav,.wb2,.wk1,.wks,.wma,.wmv,.xlc,.xlm,.xls,.xlsb,.xlsm,.xlsx,.xlt,.xltm,.xltx,.xlw,.zip

RECOMMENDED ACTIONS:

AlienVault's Open Threat eXchange (OTX) has a number of threat indicators. [2] (A zip file of the threat indicators is available for download at the end of this publication – wannacry_ioc.zip) Members are strongly advised to apply these threat indicators, which include:

1. Domains
In general, these domains should be blocked outbound, as they represent C&C servers to which the ransomware attempts to connect. However, among them are two domains that are kill switches for the ransomware: if infected hosts can resolve these domains, the malware exits and propagation ceases. The domains are iuqerfsodp9ifjaposdfjhgosurijfaewrwergwea.com and ifferfsodp9ifjaposdfjhgosurijfaewrwergwea.com.
It is advisable not to block outbound traffic to these sinkholed domains, because they can help identify infected hosts. Caution: updated malware is likely to omit or amend the kill-switch feature.

2. Remote IPs/ports
Apply blocks/checks in ACLs, IPS/IDS, and network firewalls, both inbound and outbound. The IPs represent C&C servers for the ransomware, additional resource download URLs, and Bitcoin payment sites.

3. Hostnames
Same as above.

4. File paths
Applied to host IDS and/or integrity checkers, these help identify known files dropped by the ransomware.

5. Registry keys
Applied to host IDS and/or integrity checkers, these can help identify the creation or modification of registry keys by the ransomware.

6. Snort
Applied to IDS/IPS, these rules help detect EternalBlue exploit activity.

7. Yara
YARA signature(s) to detect the presence of the ransomware on hosts. [15]

8. BTC
Known Bitcoin wallet addresses used to receive ransom payments. Outbound traffic to the associated URLs could help identify infected hosts attempting payment. The accessed URLs will be of the form https://blockchain.info/address/ followed by the BTC wallet address.

9. File hashes (MD5, SHA1, SHA256)
Network security devices such as IDS/IPS, SIEMs, and firewalls should be tuned to block the indicated domains, IPs, and hostnames, both inbound and outbound. Host IDSs should be tuned to monitor Windows hosts for changes matching the indicated file paths and file hashes.

The malware targets a remote code execution vulnerability in SMB (CVE-2017-0144). This vulnerability was addressed in Microsoft's update MS17-010. [3] All Windows hosts should be patched immediately to address this vulnerability, if they have not been already. (See the AUSCERT security bulletin.) [4] Organisations that are unable to patch certain systems, for example hospitals operating specialised equipment, are advised to consider implementing private VLANs to isolate such systems. This would help prevent lateral movement.
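Applying the file-hash indicators can be done with a short script. The following is a minimal illustrative sketch only (the function names are this sketch's own, and the indicator hashes would be loaded by the caller from a feed such as the OTX pulse referenced above, not hard-coded):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in fixed-size chunks so large binaries do not exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def match_iocs(paths, known_bad_hashes):
    """Return the subset of paths whose SHA-256 appears in the IOC set.

    known_bad_hashes is an iterable of hex digests supplied by the caller
    (for example, parsed out of a downloaded indicator file).
    """
    bad = {h.lower() for h in known_bad_hashes}
    return [p for p in paths if sha256_of(p) in bad]
```

The same chunked-hashing approach extends to MD5 and SHA1 indicators by swapping the hash constructor; matching on multiple digest types reduces the chance of missing a variant listed under only one hash.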
ADDITIONAL RECOMMENDATIONS MS-ISAC issued an advisory addressing the remote code execution vulnerabilities in the SMB server that are currently being used to propagate the WannaCry ransomware. MS-ISAC has provided the following recommendations to mitigate the vulnerabilities: “Apply appropriate patches provided by Microsoft to vulnerable systems immediately after appropriate testing. Disable SMBv1 on all systems and utilize SMBv2 or SMBv3 after appropriate testing. Run all software as a non-privileged user (one without administrative privileges) to diminish the effects of a successful attack. Remind users not to visit un-trusted websites or follow links provided by unknown or un-trusted sources. Inform and educate users regarding the threats posed by hypertext links contained in emails or attachments, especially those from un-trusted sources. Apply the Principle of Least Privilege to all systems and services.” [5] AUSCERT recommends the following measures to mitigate the risk of exposure: Anti-virus signatures should be updated immediately. If patching is not possible, make a business decision to disable SMB. [6] Block SMB traffic from all but necessary and patched systems (firewall ports 445/139). Segment your networks. Disable or restrict Remote Desktop Protocol (RDP) access – see http://support.eset.com/kb3433/#RDP A Snort rule for ETERNALBLUE was released by Cisco as part of the “registered” rules set; check for SID 41978. [7] Emerging Threats has an IDS rule that catches the ransomware activity (ID: 2024218). [8] AUSCERT has compiled a list of indicators of compromise based on analyses conducted by external parties. [11-13] AUSCERT will continue to issue additional alerts as and when new information becomes available. POST-INFECTION For ransomware, prevention is the best possible outcome. However, if a ransomware infection has occurred, consider the following measures: 1. Immediately isolate the infected host from the network to prevent lateral movement 2.
Submit samples of infected files to Crypto Sheriff. This might help identify a decryptor to recover encrypted files. [16] REFERENCES:
[1] http://www.telegraph.co.uk/news/2017/05/12/nhs-hit-major-cyber-attack-hackers-demanding-ransom/
[2] https://otx.alienvault.com/pulse/5915db384da2585b4feaf2f6/
[3] https://technet.microsoft.com/en-us/library/security/ms17-010.aspx
[4] https://portal.auscert.org.au/bulletins/45238
[5] https://msisac.cisecurity.org/advisories/2017/2017-024.cfm
[6] https://support.microsoft.com/en-us/help/2696547/how-to-enable-and-disable-smbv1,-smbv2,-and-smbv3-in-windows-vista,-windows-server-2008,-windows-7,-windows-server-2008-r2,-windows-8,-and-windows-server-2012
[7] https://isc.sans.edu/forums/diary/ETERNALBLUE+Windows+SMBv1+Exploit+Patched/22304/
[8] https://isc.sans.edu/forums/diary/Massive+wave+of+ransomware+ongoing/22412/
[9] https://blog.malwarebytes.com/threat-analysis/2017/05/the-worm-that-spreads-wanacrypt0r/
[10] https://blogs.technet.microsoft.com/msrc/2017/05/12/customer-guidance-for-wannacrypt-attacks/
[11] https://www.symantec.com/connect/blogs/what-you-need-know-about-wannacry-ransomware
[12] https://securingtomorrow.mcafee.com/executive-perspectives/analysis-wannacry-ransomware-outbreak/
[13] https://www.troyhunt.com/everything-you-need-to-know-about-the-wannacrypt-ransomware/
[14] https://gist.github.com/rain-1/989428fa5504f378b993ee6efbc0b168
[15] https://blog.malwarebytes.com/threat-analysis/2013/10/using-yara-to-attribute-malware/
[16] https://www.nomoreransom.org/crypto-sheriff.php
[17] https://github.com/aguinet/wannakey
APPENDIX Please read the DISCLAIMER [17] before using these scripts.
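As a complement to the detection scripts below, the SMB port-blocking advice above can be spot-checked with a short sketch (ours, not part of the advisory) that tests whether the SMB/NetBIOS ports accept TCP connections on a given host. Only run it against hosts you are authorised to test:

```python
import socket

# SMB/NetBIOS TCP ports abused by the worm for lateral movement.
SMB_PORTS = (445, 139)

def open_ports(host, ports=SMB_PORTS, timeout=2.0):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    reachable = []
    for port in ports:
        try:
            # create_connection raises OSError if refused, filtered or timed out.
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass
    return reachable

if __name__ == "__main__":
    host = "192.0.2.10"  # placeholder address; substitute a host you may scan
    print(f"{host}: open SMB ports -> {open_ports(host)}")
```

An empty result means the host is not exposing SMB to the scanning machine; it does not by itself prove the host is patched, so combine this with the MS17-010 scanner below.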
IDENTIFICATION OF VULNERABLE SYSTEMS To detect systems on a network (x.x.x.x/xx) that are vulnerable (i.e. not patched to mitigate MS17-010), a Python script is available: https://github.com/RiskSense-Ops/MS17-010 This is a standalone version of a corresponding Metasploit detection module – https://www.rapid7.com/db/modules/auxiliary/scanner/smb/smb_ms17_010 UBUNTU Installation/Usage
$ sudo apt-get install prips
$ wget https://github.com/RiskSense-Ops/MS17-010/raw/master/scanners/smb_ms17_010.py
$ prips x.x.x.x/xx | xargs -l1 python ./smb_ms17_010.py
# If the above script is too slow, you can first identify just the Windows servers in your network with the nbtscan tool, then pass them to smb_ms17_010.py <ip>.
$ sudo apt install nbtscan
$ nbtscan x.x.x.x/xx
IDENTIFICATION OF INFECTED SYSTEMS To detect systems on a network (x.x.x.x/xx) that are already infected (by virtue of the DOUBLEPULSAR malware also being installed as part of the worm), another detection script is available: UBUNTU Installation/Usage
$ pip install netaddr --user
$ git clone git@github.com:countercept/doublepulsar-detection-script.git
$ cd doublepulsar-detection-script/
$ python detect_doublepulsar_smb.py --net x.x.x.x/xx
REVISION HISTORY (Version – Published – Changes):
1.0 – 13th May 2017 – Original version published
2.0 – 13th May 2017 – Update 1: Microsoft issues out-of-band patches
3.0 – 14th May 2017 – Update 2: Appendix added
4.0 – 15th May 2017 – Update 3: Additional campaign-related information, indicators of compromise and reference resources; post-infection section added
5.0 – 17th May 2017 – Update 4: WannaCry in-memory key recovery for WinXP released
AUSCERT Team [17] DISCLAIMER AUSCERT has made every effort to ensure that the information provided is accurate and the advice is appropriate based on the information we have received.
However, the decision to use or rely upon the information or advice is the responsibility of each organisation and should be considered in accordance with your organisation's site policies and procedures. AUSCERT takes no responsibility for adverse consequences which may arise from following or acting on the information or advice provided. Attached Documents wannacry_ioc.zip
AUSCERT Bulletin Formats AUSCERT publishes two security bulletin formats: External Security Bulletin (ESB) – vendor-produced bulletins that are summarised and re-released by AUSCERT in a consistent format. AUSCERT Security Bulletin (ASB) – produced by AUSCERT with Overview, Impact and Mitigation information. ASBs typically describe critical vulnerabilities and emerging threats. They are collated from a variety of resources including vendors, security researchers and incident response teams around the world. Every AUSCERT bulletin contains a Bulletin Summary which highlights the essential information to assist in the vulnerability management process. The Bulletin Summary consists of the following categories (where relevant): Product, Publisher, Operating System, Resolution, CVE Names, Original Bulletin URL, Comment, CVSS (Max), EPSS (Max), and CISA KEV (if applicable). These categories are described in further detail below. ESB Structure Bulletin Titles and Email Subject Lines Bulletin titles and bulletin email subject lines display information in a concise format. The title includes the bulletin ID (eg ESB-2024.1234), a revision number if applicable (eg ESB-2024.1234.2), and may include an ‘ALERT’ flag if the contents of the bulletin are time critical or reference a serious, actively exploited vulnerability. The title also lists the operating systems or hardware types that the vulnerability affects, and the product or product family. Example of a bulletin title: ESB-2024.1234 libarchive Example of an email subject line: ESB-2024.1234 [SUSE] libarchive: CVSS (Max): 7.3 Bulletin Header The bulletin header consists of the ESB (or ASB) ID, a short summary of the purpose of the bulletin, and the date. Bulletin Summary The bulletin summary is an overview of the essential information in the bulletin, typically used in the vulnerability management process.
Both ESBs and ASBs contain a summary with individual fields as shown in this example: Product The Product field displays the affected product name and version numbers (if any). Both ESBs and ASBs will have a Product field. Publisher Only present in an ESB, the Publisher field gives the name of the original source of the bulletin. This is often a vendor such as SUSE or Red Hat, but it may also be another security team or research group. Operating System This field gives a list of operating systems or operating system families that are affected by the vulnerability. Resolution The Resolution field gives a quick indication of how to protect against the vulnerability. The values are: None: no resolution is currently available. Patch/Upgrade: a patch or a new, unaffected version of the product is available. Note that only official vendor patches are accepted as a patch – third-party patches would be considered a mitigation. Mitigation: there are mitigation steps available that may be used; however, there is no specific fix for the vulnerability. Alternate Program: another program with similar functionality, which is not vulnerable, is available. CVE Names This field lists any CVE identifiers that relate to this vulnerability. CVEs are effective for tracking vulnerabilities that affect multiple products. Original Bulletin URL This field lists the URL of the original bulletin source. The original bulletin will often have additional links for further information. Comment This field contains any additional information that AUSCERT believes should be highlighted, including: CVSS (Max), EPSS (Max) and CISA KEV (if applicable). These categories are described in further detail below. CVSS (Max) The Common Vulnerability Scoring System (CVSS) score is included in all AUSCERT ASBs and ESBs in the Comment field. CVSS is a published standard for assessing security vulnerabilities which classifies and scores vulnerabilities based on their severity.
Scores are calculated based on a formula that depends on several metrics, including required access, impact and authentication. Scores range from 0 to 10, with 10 being the most severe. This field consists of the CVSS score, CVE ID and CVSS description of the CVE with the highest score. If no CVSS (Max) score is available at the time of publishing, the Comment field will show “CVSS (Max): None”. For further information about how the CVSS (Max) is calculated and used, please see https://auscert.org.au/blogs/bulletin-impact-access-to-cvss-migration. EPSS (Max) Where an Exploit Prediction Scoring System (EPSS) score is available, it will also be included in the Comment field of a bulletin as “EPSS (Max)”. EPSS employs advanced algorithms to forecast the likelihood of vulnerabilities being exploited in real-world scenarios. A higher EPSS score indicates a higher risk of exploitation, which may provide input into the vulnerability management process. The syntax of the EPSS (Max) score is: EPSS (Max): (*Probability) (**Percentile) (CVE Number) (Date EPSS calculated). Probability: the likelihood of exploitation of the given CVE within the next 30 days. Percentile: the vulnerability's relative severity compared to others, ranking it within a distribution of similar security issues based on their assessed risks and potential impacts. AUSCERT advises members to research EPSS thoroughly before considering its application in vulnerability management. Understanding EPSS can require effort, and its suitability can vary depending on the environment.
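As an illustration of the EPSS (Max) field syntax above, the following sketch parses a hypothetical rendered line. Note that the sample line and the exact numeric/date layout are assumptions for illustration; the rendering in a real bulletin may differ:

```python
import re

# Assumed rendering of the "EPSS (Max): (Probability) (Percentile) (CVE) (Date)"
# syntax described in the text; the sample values below are invented.
PATTERN = re.compile(
    r"EPSS \(Max\):\s+"
    r"(?P<probability>[\d.]+)\s+"       # likelihood of exploitation within 30 days
    r"\((?P<percentile>[\d.]+)\)\s+"    # rank relative to other scored CVEs
    r"(?P<cve>CVE-\d{4}-\d{4,})\s+"
    r"\((?P<date>\d{4}-\d{2}-\d{2})\)"  # date the EPSS score was calculated
)

def parse_epss_max(line):
    """Parse an 'EPSS (Max)' comment line into a dict, or None if it doesn't match."""
    m = PATTERN.search(line)
    if m is None:
        return None
    fields = m.groupdict()
    fields["probability"] = float(fields["probability"])
    fields["percentile"] = float(fields["percentile"])
    return fields
```

A bulletin with no EPSS data ("EPSS (Max): None") simply fails to match and yields None, which a downstream vulnerability-management pipeline can treat as "no prediction available".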
See the articles below for further details on use and interpretation:
https://www.first.org/epss
https://www.first.org/epss/user-guide
https://www.first.org/epss/faq
https://vulners.com/blog/epss-exploit-prediction-scoring-system/
https://blog.stackaware.com/p/deep-dive-into-the-epss
https://asimily.com/blog/epss-and-its-role-in-vulnerability-management/
https://security.cms.gov/posts/assessing-vulnerability-risks-exploit-prediction-scoring-system-epss
CISA KEV A CISA Known Exploited Vulnerability (KEV) is also present in the Comment field if applicable. The KEV catalogue is a CISA-maintained authoritative source of vulnerabilities that have been exploited in the wild. It is recommended that all members review and monitor the KEV catalogue and prioritise remediation efforts for the listed vulnerabilities to reduce the likelihood of compromise by known threat actors. The field consists of the CISA KEV CVE(s) and the CISA KEV URL for reference. For further information about CISA KEV, please see https://www.cisa.gov/known-exploited-vulnerabilities. Bulletin Updates and Versioning An ESB or ASB can be updated in the event of crucial new or updated information becoming available since the original date of publication. Updates will have a version number appended to the bulletin ID (eg ESB-2024.1234 will become ESB-2024.1234.2) and the ‘UPDATE’ tag will be added. ASB Structure An ASB contains the same bulletin title, bulletin header, bulletin summary and comment sections as an ESB; however, the main body of an ASB differs from that of an ESB. The main body of an ASB generally consists of four headings: OVERVIEW: This is a summary of the vulnerability being reported and the products that are affected. IMPACT: This section outlines in more detail what the vulnerability allows attackers to perform (eg remote code execution), and the potential outcome of these vulnerabilities (eg significant data breaches, circumvention of firewalls and intrusion detection systems, etc).
MITIGATION: This section outlines steps to mitigate the risk. This can range from applying available patches to address the vulnerability, to restricting or segmenting access to the network, including deploying additional monitoring and alerts against specific criteria. REFERENCES: This is a list of websites that report on the vulnerability. It can be a third-party website or the vendor itself. The websites are referenced within the ASB as the source of the information being reported. Examples: Full example of an ESB. Full example of an ASB.
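For teams that ingest bulletins automatically, the title and subject-line conventions described earlier can be parsed with a small sketch. The pattern below is inferred from the single example subject line given above ("ESB-2024.1234 [SUSE] libarchive: CVSS (Max): 7.3"), so treat it as an assumption rather than a specification:

```python
import re

# Assumed subject-line layout: "<ID> [<OS>] <product>: CVSS (Max): <score>".
SUBJECT_RE = re.compile(
    r"^(?P<id>(?:ESB|ASB)-\d{4}\.\d+(?:\.\d+)?)\s+"  # bulletin ID, optional revision
    r"\[(?P<os>[^\]]+)\]\s+"                          # operating system / platform tag
    r"(?P<product>[^:]+):\s+"                         # product or product family
    r"CVSS \(Max\):\s+(?P<cvss>[\d.]+|None)$"
)

def parse_subject(subject):
    """Split a bulletin email subject into its fields, or return None on mismatch."""
    m = SUBJECT_RE.match(subject)
    return m.groupdict() if m else None
```

Subjects carrying an 'ALERT' flag or other variations would need the pattern extended; a robust ingester should fall back to manual triage whenever the match fails.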
Membership Services and Benefits AUSCERT provides members with proactive and reactive advice and solutions to current threats and vulnerabilities. We'll help you prevent, detect, respond to and mitigate cyber-based attacks. As a not-for-profit security group based at The University of Queensland, AUSCERT provides a range of comprehensive services to strengthen your cyber security strategy. AUSCERT services are split across three capability pillars: Incident Support, Vulnerability Management and Threat Intelligence. These services are all included in AUSCERT membership. Incident Support Incident Support – assists your organisation to detect, interpret and respond to attacks from around the world. Includes access to our highly skilled team of analysts and developers, who are available through email, Slack or a 24/7 hotline. Phishing Takedown – designed to help your organisation with targeted phishing, spear phishing and whaling attacks. Vulnerability Management Security Bulletins – provide information on threats and vulnerabilities affecting a range of platforms, applications and devices. Member Security Incident Notifications – a customised composite security report containing incident notifications relevant to your organisation's domains and IP ranges. Proactively informs you about security incidents affecting your organisation's data, systems or networks. Early Warning SMS – receive SMS notifications for the most critical security threats and vulnerabilities. Threat Intelligence AusMISP – our MISP service provides threat indicators acquired from trusted communities and organisations to enhance your cyber security posture. Malicious URL Feed – AUSCERT provides a list of active phishing, malware, malware logging or mule recruitment web sites which can be added to your firewall blacklist. Sensitive Information Alert – alert notifications for sensitive material and breached credentials found online by our analyst team that specifically target your organisation.
Additional Benefits Member benefits for the annual AUSCERT Cyber Security Conference, Australia’s longest running information security conference. The next conference will be held in May 2025 at The Star Gold Coast. Further details are available here: https://conference.auscert.org.au/ Reduced registration price (available to all members) 50% off one conference registration or 1-day registration (small members) One or more conference registrations (medium members and above). Member pricing for AUSCERT’s range of cyber security training courses. Course information, pricing and calendar are available here: https://auscert.org.au/services/training/ Access to AUSCERT member meetups, workshops and events. Download AUSCERT Membership Services & Benefits (PDF)
A guide to AUSCERT Member Security Incident Notifications: MSIN Introduction As part of its ongoing efforts to enhance member services, AUSCERT has launched its Member Security Incident Notification service. What’s an MSIN? An MSIN is a daily customised composite security report targeted at AUSCERT member organisations. It contains a compilation of “security incident reports” observed by AUSCERT through its threat intelligence platforms. Daily: MSINs are issued daily, but only if at least one incident report specific to the member has been detected within the past 24-hour period. This means that if there are no incidents to report, you will not receive an MSIN. It also follows that the more security incidents spotted for your organisation, the more incident reports will be included, and the larger the MSIN you receive. Customised: MSINs are tailored for each member organisation, based on the IPs and domains provided. To receive accurate and useful MSINs, it’s important you keep this information updated (see FAQ). Severity: individual events in MSINs are categorised into the following severity levels: Critical: highly critical vulnerabilities that are being actively exploited, where failure to remediate poses a very high likelihood of compromise. For example, a pre-auth RCE, or modification or leakage of sensitive data. High: end-of-life systems; authenticated services that are meant to be internal (SMB, RDP) but are exposed; cases where some data can be leaked. Sinkhole events end up in this category. Medium: risk that does not pose an immediate threat to the system but can escalate to a higher severity over time. For example: risk of participating in DDoS; unencrypted services requiring login; vulnerabilities requiring visibility into network traffic to exploit (MITM without being able to manipulate the traffic); or vulnerabilities where an attacker would need to know internal systems/infrastructure in order to exploit them.
Low: deviation from best practice – little to no practical way to exploit, but the setup is not ideal. Also vulnerabilities requiring MITM (including manipulating the traffic) to exploit; for example, SSL POODLE reports may end up in this category. Info: informational only. Typically no concerns; review in accordance with your security policy. These severity levels are based on those used by Shadowserver. Events which have not been assigned a severity will be marked as Unknown. A summary of reports by severity level can be found at the top of your MSIN. For example: Summary of reports based on severity: * Critical: accessible-ssh 3 * High : vulnerable-exchange-server 1 * Medium : accessible-cwmp 1 The MSIN subject will be prefixed with the highest severity level seen in the report. For example: [Severity:CRITICAL] AusCERT Member Security Incident Notification (MSIN) for “Member Name” Composite Each MSIN can consist of multiple incident type reports. For example, it could contain an Infected Hosts report, which highlights hosts belonging to a member organisation that have been spotted attempting to connect to a known botnet C&C server, followed by a DNS Open Resolvers report listing open recursive DNS resolvers that could be used in a DNS amplification DDoS attack.
Each incident type report can also include multiple incident reports. For example, this “infected hosts” report contains two incidents:
Incidents Reported
    Timestamp:                      2015-08-25T00:20:34+00:00
    Drone IP:                       123.456.789.abc
    Drone Port:                     13164
    Drone Hostname:                 abc.xxx.xxx.xxx.au
    Command and Control IP:         aaa.bbb.ccc.ddd
    Command and Control Hostname:   imacnc1.org
    Command and Control Port:       80
    Malware Type:                   redyms

    Timestamp:                      2015-08-25T00:20:34+00:00
    Drone IP:                       321.654.987.cba
    Drone Port:                     2343
    Drone Hostname:                 def.xxx.xxx.xxx.au
    Command and Control IP:         ddd.eee.fff.ggg
    Command and Control Hostname:   imacnc2.org
    Command and Control Port:       123
    Malware Type:                   dyre
All timestamps are in UTC. It is imperative that these incidents be reviewed and handled individually. Structure An MSIN has the following basic structure. ==================HEADING FOR INCIDENT TYPE 1============== Incident Type Name of the incident and any known exploited vulnerabilities and associated CVEs. Incident Description Further information on potential attack vectors and impacts. Incidents Reported List of individual reports sighted by AUSCERT Incident report 1 Incident report 2 … Incident report n AUSCERT recommended mitigations Steps for resolution of incidents or mitigation of vulnerabilities which could be exploited in the future. References Links to resources referenced within the report Additional Resources Links to additional material such as tutorials, guides and whitepapers relevant to the report, aimed at enhancing the recipient’s understanding of the addressed vulnerabilities, potential impacts and mitigation techniques.
=============================END OF REPORT========================= =====================HEADING FOR INCIDENT TYPE 2==================== Incident Type Incident Description Incidents Reported Incident report 1 Incident report 2 … Incident report n AUSCERT recommended mitigations References Additional Resources =============================END OF REPORT========================= … … =====================HEADING FOR INCIDENT TYPE X==================== =============================END OF REPORT========================= Frequently Asked Questions How can I update domain/IP information for my organisation? If you are a Primary AUSCERT contact, simply write to AUSCERT Membership at membership@auscert.org.au and provide the updated information. If you have a privileged account in the Member Portal, you can request changes through the portal. AUSCERT will perform a validation check to ensure the domains are under your organisation’s ownership or control prior to including them in the monitoring list. Where does the information in an MSIN come from? AUSCERT receives information relating to compromised and/or vulnerable resources from several trusted third parties, through secure means. The trust relationship between AUSCERT and these third parties entails conditions which prevent disclosure of the source(s) of information.
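Members who want to feed MSIN incidents into their own tooling can parse the "Field: value" record layout shown in the Composite section above. The following is a sketch only; the sample data reuses the anonymised placeholders from the example, and the field set varies per incident type:

```python
# Sample records copied from the MSIN example above (anonymised placeholders).
SAMPLE = """\
    Timestamp:                      2015-08-25T00:20:34+00:00
    Drone IP:                       123.456.789.abc
    Malware Type:                   redyms

    Timestamp:                      2015-08-25T00:20:34+00:00
    Drone IP:                       321.654.987.cba
    Malware Type:                   dyre
"""

def parse_incidents(text):
    """Split a block of incident records (separated by blank lines) into dicts."""
    incidents = []
    for chunk in text.strip().split("\n\n"):
        record = {}
        for line in chunk.splitlines():
            # Partition on the FIRST ':' only, so timestamp values keep their colons.
            field, _, value = line.partition(":")
            record[field.strip()] = value.strip()
        incidents.append(record)
    return incidents
```

This supports the guide's advice that each incident be reviewed and handled individually: once parsed, each record can be routed to the team responsible for the affected host.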
Cyber Leaders Network: Terms and Conditions Version 1.0 (11 July 2024) At the core of the Cyber Leaders Network is a commitment to fostering a collaborative and respectful environment for all members. These T&Cs outline the principles and expectations that guide our interactions. This is a living document and may be subject to revisions as the initiative evolves. Mission Statement The Cyber Leaders Network is a group of like-minded cybersecurity professionals who regularly meet, under the coordination and organisation of The University of Queensland (UQ) and AUSCERT, to share best practices and exchange ideas on all cybersecurity matters. The Network functions as a trusted, collaborative and multi-disciplinary ecosystem, bringing together cyber professionals with varying degrees of experience in the field to nurture the development of future leaders. Our primary objective is to provide comprehensive, evidence-based resources while fostering the exchange of best practices and innovative ideas. For this purpose, the Network is vendor and technology agnostic. Participants' engagement with, and participation in, the Network stems from the two-fold perspective of seeking personal and professional enrichment and mentoring, and leveraging the experience for the betterment of cybersecurity practices in their workplaces. Code of Conduct Inclusivity: The Cyber Leaders Network is committed to creating an inclusive environment that values and respects the unique perspectives and backgrounds of all participants. Discrimination and exclusion are strictly prohibited. It embraces a culture that encourages active participation from all members regardless of role, experience or any other factor. Everyone's insights contribute to the collective success of the Network. Respectful Communication: Treat all members with courtesy and professionalism, both in person and in digital communications.
Avoid offensive language, personal attacks, or any behaviour that may create a hostile environment. Members are to demonstrate respect by actively listening to other perspectives, even if they differ from their own. Open dialogue is encouraged. Collaboration and Participation: Members of the Network are expected to maximise their attendance at events, to bring value to the initiative. If unable to attend an event, a Network member can delegate attendance to a suitable work colleague. Shared knowledge, insights and resources for the benefit of the entire Cyber Leaders Network can only be pursued through collaboration. Feedback should use constructive and supportive language: critique ideas rather than individuals, and aim to offer solutions and alternatives. Professional Integrity: Act with transparency and honesty in interactions within the Cyber Leaders Network. Disclose any potential conflicts of interest that may impact the integrity of discussions or decisions within the Network. Uphold ethical values by abiding by legal and ethical standards in all activities related to the Network. Avoid engaging in any actions that could compromise the trust and credibility of the Cyber Leaders Network. The Chatham House Rule Applies: When a workshop, coffee catch-up or other related meeting is held under the Chatham House Rule, members are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed. By participating in the Cyber Leaders Network as a member, you accept these terms and conditions. As a member you also commit to upholding all principles and contributing to a positive and inclusive Network focused on advancing cybersecurity leadership. Violations of this Code of Conduct may result in the coordination team reconsidering membership and participation.
Payment AUSCERT/UQ will provide a Tax Invoice to the member upon finalisation of registration and payment; if requested, a tax invoice can be provided prior to payment. Registration and participation in the Network will be confirmed upon receipt of full membership fee payment. Fees, Cancellations & Refunds Membership is an annual fee; the membership period commences from the first workshop, which will be scheduled in Q4 2024. AUSCERT/UQ reserves the right to cancel workshops and other informal catch-ups due to unforeseen circumstances and will provide participants with written notice in such circumstances. However, 4 workshops and 4 ‘coffee catch-ups’ a year are guaranteed to be delivered to the Network. If a workshop or other event offered by the Network cannot be attended, no refund is given. AUSCERT/UQ is not responsible for any expenses that may have been incurred in attending, or related to the attendance of, any Cyber Leaders event. Intellectual Property Rights IP produced by the members of the Network remains the property of the Network itself. Members can use such IP for professional purposes within their organisation (e.g., for training and awareness purposes, or for best-practice sharing), but cannot use the IP for commercial purposes (e.g., re-selling the IP). Further IP arrangements will be discussed during the initial stages of the Network's creation, so that Network members are able to contribute their views. Privacy Any personal information provided to AUSCERT/UQ will be subject to UQ's Privacy Management Policy, which can be viewed here. More information on privacy in relation to AUSCERT/UQ can be obtained from the Right to Information and Privacy Office here. The member consents to AUSCERT taking photographs and videos of the services associated with membership, which may include images of the member, and agrees that AUSCERT can use those images in the ordinary course of its business.
If a participant wishes to opt out of this consent, they can inform the coordination team beforehand.
AUSCERT Education: Terms and Conditions Eligibility Registration to participate in the course is restricted to employees of AUSCERT member organisations. If a non-member is found to have registered, they will be refused entry into the course and refunded. If required, eligibility can be confirmed by AUSCERT prior to registration. Payment The University of Queensland (UQ) will provide a Tax Invoice to the Participant upon finalisation of registration and payment; if requested, UQ can provide a tax invoice prior to payment. Registration and participation in the course will be confirmed upon receipt of full course fee payment. Cancellations All cancellations must be made 2 business days or more before the course delivery date. Should the cancellation be made in this timeframe, participants have ONE of the following options: transfer their booking to an alternate course to be held within 12 months of the original course, OR send a substitute in their place. AUSCERT reserves the right to cancel courses due to unforeseen circumstances and will provide participants with written notice in such circumstances. If the course is cancelled, Participants may choose between transferring their registration to a new date of the same course, or a refund of 100% of the course fee paid. The University of Queensland is not responsible for any expenses that may have been incurred in attending, or related to the attendance of, a course. Intellectual Property Rights AUSCERT owns all Intellectual Property Rights in the Services and Deliverables and in anything (including in electronic form) used or created by AUSCERT or its personnel (including staff, contractors and subcontractors) for or in connection with the supply of the Services. Confidentiality and Privacy The Participant must obtain AUSCERT's written approval before publishing or publicising any information relating to AUSCERT or the Services.
AUSCERT may publish material relating to the conduct and conclusions of the Services, including the Deliverables. Subject to clause below, if any personal information is provided to AUSCERT, that personal information will be subject to UQ’s Privacy Management Policy, which can be viewed here. More information on privacy in relation to AUSCERT and UQ can be obtained from the Right to Information and Privacy Office here. AUSCERT may retain, use and disclose personal information provided by the Participant to: provide the Services and Deliverables; inform the Participant of future events or activities at AUSCERT; undertake statistical analysis of de-identified data; provide to third party contractors that are performing some or all of the Services under this Contract; and assist AUSCERT in relation to exercising or enforcing AUSCERT’s rights. The Participant consents to AUSCERT taking photographs and videos of the Services being provided which may include images of the Participant and agrees that AUSCERT can use those images in the ordinary course of its business.  
Cyber Threat Signal 2021 AUSCERT is proud to have worked and collaborated alongside a number of fellow CERT colleagues from CERT-In, KrCERT/CC and Sri Lanka CERT|CC on this publication. Today (Monday 7 December 2020) we released a joint prediction of the most pertinent cyber threats that 2021 may deliver. Perhaps to no one's surprise, ransomware attacks are expected to dominate the sector in 2021, in both volume and impact. The joint publication comprises a diagram and summary points of observations from 2020, extended into 2021, along with point-form mitigation advice. To read and download a copy of this publication, see the link provided below. Contributors: CERT-In The Indian Computer Emergency Response Team is the National Incident Response Centre for major computer security incidents in its constituency, i.e. the Indian cyber community. KrCERT/CC KrCERT/CC is the National Computer Emergency Response Team in Korea. KrCERT/CC takes the lead in raising technical capability for the protection of Critical Network Infrastructure and Internet communication networks, and for reinforcement of prediction and alarm systems. Sri Lanka CERT|CC Sri Lanka Computer Emergency Readiness Team | Coordination Centre (Sri Lanka CERT) is the single trusted source of advice about the latest threats and vulnerabilities affecting computer systems and networks, and a source of expertise to assist the nation and member organisations in responding to and recovering from cyber-attacks. AUSCERT AUSCERT is a not-for-profit Cyber Emergency Response Team based in Australia. AUSCERT delivers a 24/7 service to its members and helps them prevent, detect, respond to and mitigate cyber-based attacks. Attached Documents cyber_threat_signal_2021-full-report.pdf