OrchardCore: ISO/IEC TS 19249:2017 Compliance

Created on 6 Aug 2020 · 8 comments · Source: OrchardCMS/OrchardCore

Hi there,

I would like to use the CMS, but as a requirement it should comply with ISO/IEC TS 19249:2017.

I found no information about this so far. Could you help me with this?

Thanks for the answers!

All 8 comments

To my knowledge, nobody has checked this before. Could you give some pointers, or ask about concrete areas, so we don't have to buy and read a 26-page standard? :)

Thanks for the quick response.
I got a pretty lengthy list of requirements instead of the ISO cert itself: a 'short' criteria list with ~160 points to evaluate whether Orchard complies for the targeted use. I would say that complying with this list implies complying with the ISO standard, but I don't know the exact content of the ISO cert.

B.2.1 The manufacturer has created a list of all roles that are directly or indirectly involved with IT security.
B.2.2 The manufacturer has provided evidence of the IT-security expertise for each role.
B.2.3 The manufacturer has records (e.g. training documents) that lead to the conclusion that the people in question actually have this expertise.
B.2.4 The (software) development plans define the (additional or deviating) expertise on a product-specific basis.

C.1.a.1 The manufacturer has identified all neighboring systems (medical devices, IT systems) that may be connected to the product.
C.1.a.2 The manufacturer has created a list of roles (people, neighboring systems) that may interact with the product.
C.1.a.3 The manufacturer has identified all markets and all the regulatory requirements that are relevant in these markets.
C.1.a.4 The manufacturer has identified the intended primary and secondary users with their IT expertise.
C.1.a.5 The manufacturer has defined the intended user environment.
C.1.a.6 The manufacturer has analyzed the risks (hazards) that result if the system is used in the specified user environment by someone who is not a specified user.
C.1.a.7 The manufacturer has described in the risk management documentation what the IT security threats are and what the consequences would be for patients, users and third parties.
C.1.a.8 The manufacturer has traceably generated the risk acceptance criteria based on the product’s use and the state-of-the-art.
C.1.a.9 The manufacturer has developed a system it can use to evaluate IT security-related risks.

C.1.b.i.1 The manufacturer has identified all data interfaces.
C.1.b.i.2 The manufacturer has specified the protocols and standards used for each data interface.
C.1.b.i.3 For each data interface, the manufacturer has specified the functions offered via the interface.
C.1.b.i.4 The manufacturer has analyzed each function’s security relevance (in terms of hazards).
C.1.b.i.5 The manufacturer has documented the effects of the safety-relevant (in terms of hazards) functions in the risk management documentation.
C.1.b.i.6 The manufacturer has tested all usage scenarios in which risks are generated due to a display of information that has not been specified (e.g. no display, incorrect display, or a display that is too late).
C.1.b.i.7 For each role and neighboring system, the manufacturer has defined the product functions that they may have access to via the corresponding interface.
C.1.b.i.8 The manufacturer has justified its choice of authentication procedure (user name/password, biometric procedure, token, e.g. card) for all roles and all neighboring systems.
C.1.b.i.9 Where necessary, the manufacturer has requested additional mechanisms to minimize the probability of unauthorized access.
C.1.b.i.10 The manufacturer has analyzed, in the risk management process, the effects on patient safety if a person cannot access patient or device data (e.g. no authorization, they forget their password), and defined appropriate measures.

C.1.b.ii.1 The manufacturer has created a list of all data managed by the system.
C.1.b.ii.2 The manufacturer has assessed how worthy of protection these data are in relation to confidentiality and their impact on patient safety.
C.1.b.ii.3 The manufacturer has evaluated, in the context of the risk management process, the effect if particularly sensitive data is no longer protected.
C.1.b.ii.4 The manufacturer has investigated, in the context of risk management, the consequences of overloading the system with too many requests (e.g. DoS) or requests with volumes that are too large, and has defined actions if necessary.
C.1.b.ii.5 The manufacturer has, in the context of risk management, analyzed the consequences of the network no longer being available or no longer being available in the expected quality.
C.1.b.ii.6 The manufacturer has, in the context of risk management, analyzed the consequences of the loss of data and established actions, such as making a backup, if necessary.
C.1.b.ii.7 The manufacturer has established, in general or for specific products, the criteria for the checking of external data before they are processed further.

C.1.b.iii.1 The manufacturer has a documented plan of how patches are applied and removed again. This plan includes the development, distribution, installation and review of patches.
C.1.b.iii.2 The manufacturer has a list of all SOUP/OTS components.
C.1.b.iii.3 The manufacturer has assessed how often patches are required and how they should be installed.

C.1.b.iv.1 The manufacturer has established how the medical device informs the users in the event that cybersecurity is compromised.
C.1.b.iv.2 The manufacturer has assessed what functionality the medical device must guarantee in the event that cybersecurity is compromised. 

C.1.c.1 The manufacturer has documented all SOUP/OTS components (incl. version, manufacturer, reference to information on updates, release notes).
C.1.c.2 The manufacturer has analyzed the specific risks resulting from the choice of technologies (in particular programming language, SOUP/OTS components).
C.1.c.3 The manufacturer has taken measures to ensure that the tools used (e.g., development environment, compiler) as well as the platforms and SOUP/OTS components are free of malicious code.
C.1.c.4 The manufacturer has created a list of all services that the product offers or uses "externally" (e.g. through its operating system).
C.1.c.5 For each service, the manufacturer has justified why it has to be visible externally (no time limitation).
C.1.c.6 If the product provides an interface, the manufacturer has described how attacks via this interface are controlled in the context of risk management.
C.1.c.7 The manufacturer has identified the process offering/running this service for each externally visible service.
C.1.c.8 For each process, the manufacturer has identified the user (at the operating system level) and, if this user does not run with minimal rights ("worst case" as root), justified this.
C.1.c.9 The manufacturer has systematically identified the risks that would be caused by deficient IT security using threat modeling.
C.1.c.10 The manufacturer has analyzed the risks that result from the (auto-)update of anti-malware software.
C.1.c.11 The manufacturer has established how the product detects compromised IT security, documents (logs) this, and reacts to it quickly.
C.1.c.12 With regard to the audit log, the manufacturer has determined where its data is stored, how it is protected and updated and how this can be automatically analyzed.
C.1.c.13 For all software components, services, processes, and data, the manufacturer has analyzed which risks arise if they do not behave in accordance with the specifications due to a problem with IT security.
C.1.c.14 The manufacturer has taken the software requirements into account in the software architecture.

C.1.d.1 The manufacturer has created coding guidelines that establish specific requirements for IT security.
C.1.d.2 The manufacturer deploys code only where reverse engineering and RAM readout cannot lead to unacceptable risks.
C.1.d.3 The manufacturer either tests the software (source code and binaries) for malicious code before delivery and/or has protected all computers involved in the development and "production" of the software against malware.
C.1.d.4 The manufacturer has defined measures that can find and eliminate buffer overflows.

C.1.e.1 The manufacturer has defined at least one method that is used to check compliance with the coding guidelines.
C.1.e.2 The manufacturer requires code reviews for all components that map (IT) security-relevant functions.
C.1.e.3 The manufacturer has concrete test criteria in its specification documents for the code reviews.
C.1.e.4 The code reviews are carried out according to the four-eye principle and only by people who have the necessary expertise. The manufacturer has documented this expertise.
C.1.e.5 The manufacturer has established which tests (e.g. unit tests) are necessary with which test cases and which degrees of coverage are necessary.
C.1.e.6 The manufacturer has described how all SOUP and OTS components have to be verified.

C.1.f.1 The manufacturer includes port scans at all relevant network interfaces in the test plan and also performs them.
C.1.f.2 The manufacturer includes penetration tests at all relevant data interfaces and/or for all known vulnerabilities of the OTS components used in the test plan and also performs them.
C.1.f.3 The manufacturer includes the use of “vulnerability scanners” in the test plan.
C.1.f.4 The manufacturer includes fuzz tests at all relevant data interfaces with at least one tool in the test plan and also performs them.
C.1.f.5 The manufacturer includes a security check against the usual attack vectors in the test plan.
C.1.f.6 The manufacturer includes the testing of robustness and performance in the test plan.
C.1.f.7 The manufacturer includes the testing of all system/software requirements (see above) in the test plan.
C.1.f.8 The manufacturer also has its software checked by IT security experts with regard to the above measures.
C.1.f.9 The manufacturer includes third-party test reports (e.g. from SOUP manufacturers) in the system test (if available).

C.1.g.1 The manufacturer has addressed the most common errors and the resulting hazards in the risk analysis or can at least explain how these risks are controlled.
C.1.g.2 The manufacturer discusses the risks posed by all relevant attack vectors (see above) in the risk analysis and shows how these risks are controlled.
C.1.g.3 The manufacturer has checked the effectiveness of all risk-control measures.
C.1.g.4 The manufacturer has created a traceability matrix it uses to document that there are measures that control all risks related to IT security.
C.1.g.5 The manufacturer has prepared the risk management report and the IT security report.
C.1.g.6 The manufacturer has drawn up the necessary plans for the post-development phase (e.g. post-market surveillance and incident response plan).
C.1.g.7 The manufacturer has tested the completeness of the tests using a traceability matrix that links the tests to the requirements.

C.2.a.1 The manufacturer has described how it ensures that only the exact intended artifacts (files) in exactly the intended version are delivered in the product or as a product.
C.2.a.2 The manufacturer has described how the people responsible for the installation know which is the latest version and how confusion during installation can be prevented.
C.2.a.3 The manufacturer has described how it ensures during the installation that the requirements specified in the support materials (see above) are actually met.
C.2.a.4 The manufacturer has established procedures that ensure that it can communicate quickly with operators and users of its products.

C.2.b.1 The manufacturer has created a post-market surveillance plan.
C.2.b.2 The manufacturer has described which information is collected from the downstream phase.
C.2.b.3 The manufacturer has described how and through which channels information is collected from the downstream phase.
C.2.b.4 The manufacturer has described what information is analyzed and evaluated from the downstream phase.
C.2.b.5 The manufacturer has described the resulting measures.
C.2.b.6 For each OTS component, the manufacturer has defined at least one source through which it is informed of IT security problems, how often that source is monitored, and which role performs this analysis with which tools.
C.2.b.7 The manufacturer has described how it monitors that the technologies and procedures used (e.g. cryptology) are still secure.

C.2.c.1 The manufacturer has created an incident response plan.
C.2.c.2 The incident response plan governs the criteria the manufacturer uses to evaluate information from the market and when it implements the emergency plan, as well as:
C.2.c.3 Who develops and releases the patches and how and within what deadlines.
C.2.c.4 How the customer obtains the patches.
C.2.c.5 How the manufacturer ensures that the patches are also installed.
C.2.c.6 Who informs the customers, how and within what deadlines.
C.2.c.7 In which cases decommissioning or other product recalls are ordered and how.

D.2.a.1 The product only allows users to use it if they have authenticated themselves to the product.
D.2.a.2 The product allows the neighboring systems (e.g. other medical devices, IT systems) connected at each data interface to exchange data only if they have been authenticated by the product.
D.2.a.3 The product allows password authentication only if the password has a defined minimum length, contains at least one non-alphanumeric character, and contains at least one uppercase and one lowercase character.
D.2.a.4 The product does not have a default password or requires that a password be changed during the first use.
D.2.a.5 The product blocks users and neighboring systems for m minutes after n attempts, with the manufacturer able to define the n and m values or their lower limits. The manufacturer has analyzed the "safety-related" risks resulting from such a blocking and, if necessary, has implemented measures to minimize these risks.
D.2.a.6 In the event of an unsuccessful login, the product only displays information that does not reveal the exact cause of the failure (e.g. whether the user name or the password was incorrect).
D.2.a.7 The product terminates user and neighboring system sessions after n minutes of inactivity, with the manufacturer setting the value for n or its upper limit.
D.2.a.8 The product assigns a role to each user and each neighboring system for authentication.
D.2.a.9 The product allows each role to access only the functions it is authorized for. This applies in particular for product updates/upgrades.
D.2.a.10 The product allows authorized users to block other users and neighboring systems.
D.2.a.11 The product allows authorized users to reset the authentication of any required elements (passwords, cryptographic keys, certificates) of other users and neighboring systems.
D.2.a.12 The product allows authorized users to delete other users and neighboring systems.
D.2.a.13 The product does not allow users to change their own permissions.
D.2.a.14 The product allows permissions to be canceled (“breaking the glass”), and identifies/documents the person and the reasons.
D.2.a.15 In a client-server architecture, all cybersecurity measures are determined and checked on the server side.
D.2.a.16 In a client-server architecture, all client inputs are checked on the server side.
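Requirements like D.2.a.5 and D.2.a.6 (lock out after n failed attempts for m minutes, and never reveal whether the user name or the password was wrong) are easy to prototype. Below is a minimal, purely illustrative Python sketch with a hypothetical `LoginGuard` class; it is not OrchardCore code (OrchardCore handles this through ASP.NET Core Identity's user management):

```python
import time

class LoginGuard:
    """Illustrative sketch of D.2.a.5/D.2.a.6: lock an account for m minutes
    after n failed attempts, and keep the error message generic."""

    def __init__(self, max_attempts=5, lockout_minutes=15):
        self.max_attempts = max_attempts
        self.lockout_seconds = lockout_minutes * 60
        self.failures = {}  # user -> (failure_count, last_failure_timestamp)

    def attempt(self, user, password_ok, now=None):
        now = now if now is not None else time.time()
        count, last = self.failures.get(user, (0, 0.0))
        # D.2.a.5: reject everything while the lockout window is active,
        # even a correct password.
        if count >= self.max_attempts and now - last < self.lockout_seconds:
            return "Login failed."
        if password_ok:
            self.failures.pop(user, None)  # reset the counter on success
            return "OK"
        self.failures[user] = (count + 1, now)
        return "Login failed."  # D.2.a.6: no hint about the exact cause
```

Note that the same generic message is returned for a wrong password, an unknown user, and a locked account, so an attacker cannot distinguish the cases.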

D.2.b.1 The product allows users to permanently delete all patient-specific data. The product allows you to restrict permissions to do this (e.g. to roles).
D.2.b.2 The product protects data from accidental deletion.
D.2.b.3 The product only transmits data (or at least security-related data) via its data interfaces in encrypted form. This also applies to storage on external data carriers.
D.2.b.4 The product protects the integrity of the data against unwanted modification, e.g. through cryptographic procedures.
D.2.b.5 By default, the product rejects all incoming connections (e.g. USB, TCP, Bluetooth).
D.2.b.6 The product checks all user inputs and all incoming data on the basis of verification criteria defined by the manufacturer (see above) before further processing.
D.2.b.7 The product does not use wireless transmission for the transmission of time-critical data relevant to patient safety.
D.2.b.8 The product stores passwords as “salted hash” only.
D.2.b.9 The product stores characteristics that could be used to identify a person in encrypted form only.
D.2.b.10 The product protects critical data against accidental change and loss.
D.2.b.11 Every time the program is restarted, it checks whether the mechanisms used to protect the data against loss and modification are still intact.
D.2.b.12 The product allows users to deactivate data interfaces (e.g. USB, remote access).
D.2.b.13 The product checks the integrity of the program code every time it is restarted.
D.2.b.14 In the event that security is compromised, the product provides an emergency mode for functions that have an effect on patient safety.
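D.2.b.8 ("salted hash" password storage) is one of the items most directly verifiable in code. As a hedged illustration (hypothetical helper functions, not OrchardCore's implementation), the standard approach is a random per-user salt plus a deliberately slow key-derivation function, compared in constant time:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """D.2.b.8 sketch: store only (salt, slow hash), never the password."""
    salt = salt if salt is not None else os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    _, candidate = hash_password(password, salt)
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(candidate, digest)
```

The salt defeats precomputed (rainbow-table) attacks, and the iteration count slows down brute force; real deployments would tune the count to current hardware.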

D.2.c.1 The product allows patches (own code, SOUP/OTS components) to be applied.
D.2.c.2 The product allows you to remove defective patches again (“roll-back”).
D.2.c.3 The product limits the ability to apply or remove patches to users with the corresponding permissions (authenticated and authorized).
D.2.c.4 The product checks changed program code (patches) for integrity before first use and when restarted.
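D.2.c.4 (checking patch integrity before first use and on restart) typically means comparing a cryptographic checksum of the delivered artifact against a reference value published by the manufacturer. A minimal sketch, assuming the expected SHA-256 digest is distributed out of band (hypothetical function name, not an OrchardCore API):

```python
import hashlib

def patch_is_intact(patch_bytes, expected_sha256_hex):
    """D.2.c.4 sketch: refuse to apply a patch whose SHA-256 digest
    does not match the value published by the manufacturer."""
    return hashlib.sha256(patch_bytes).hexdigest() == expected_sha256_hex
```

Production systems would usually go further and verify a digital signature over the patch, so that the reference value itself cannot be tampered with in transit.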

D.2.d.1 The product logs all essential actions on/in the system in an audit log, including date and time and the actor (user, system).
D.2.d.2 The product ensures that it has the correct system time.
D.2.d.3 The product protects the audit log against change.
D.2.d.4 The product implements mechanisms that can detect penetration or an attack and react to them.
D.2.d.5 The product allows the exchange of certificates.
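D.2.d.1 and D.2.d.3 together ask for an audit log that records actor and time and is protected against modification. One common technique is a hash chain, where each entry commits to its predecessor, so any edit breaks verification. The sketch below (hypothetical `AuditLog` class, shown for illustration only) demonstrates the idea; a real system would additionally write the chain to append-only or remote storage:

```python
import hashlib
import json
import time

class AuditLog:
    """Sketch of D.2.d.1/D.2.d.3: append-only log where each entry
    commits to its predecessor via a hash chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev = self.GENESIS

    def append(self, actor, action, timestamp=None):
        record = {
            "ts": timestamp if timestamp is not None else time.time(),
            "actor": actor,
            "action": action,
            "prev": self._prev,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev = record["hash"]
        self.entries.append(record)

    def verify(self):
        """Recompute the chain; any modified entry makes this return False."""
        prev = self.GENESIS
        for record in self.entries:
            if record["prev"] != prev:
                return False
            body = {k: v for k, v in record.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

Because every hash depends on all earlier entries, an attacker who changes one record must recompute the entire tail of the chain, which is detectable if any later hash has been stored elsewhere.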

D.3.1 The software only uses tried and tested libraries/components (no self-implementation) for all cryptographic functions (e.g. encryption, signing). 
D.3.2 The software uses different technologies or keys for different functions (e.g. encryption of communication, encryption of data).
D.3.3 The software is protected against malware (viruses, worms etc.) as far as is technically possible.
D.3.4 The software is based on versions of the SOUP/OTS components that do not contain any security vulnerabilities. Exceptions are justified.

D.4.1 The instructions for use establish the intended IT environment for operation.
D.4.2 The instructions for use specify which activities the operator must perform, as well as how and how often they should be performed.
D.4.3 The installation and service instructions establish which other roles (operator, service technician) are responsible for which activities and how often they have to be performed.
D.4.4 The support materials describe how to deal with lost or stolen authentication elements (e.g. cards, certificates, cryptographic keys) and forgotten passwords.
D.4.5 The support materials describe how users can recognize an IT security problem with the product and what to do in this case.
D.4.6 The support materials describe which anti-malware software has been approved for the product and where (e.g. link) it can be obtained and who is responsible for updating it.
D.4.7 The support materials contain the manufacturer's contact details, which can be used to contact the manufacturer, for example, in the event of problems with IT security.
D.4.8 The support materials also give a technical description of the product.

It seems to me that only rules D.2.a.1 to D.3.4 (excluding some of them) are relevant to Orchard. At a quick glance, these are pretty standard InfoSec best practices, and Orchard does comply with most of them. Do you have a question about a particular rule?

Orchard Core is open-source software, licensed without warranty. I would guess that no open-source project complies with such a list, as they all come without warranty.

When you use Orchard Core and create a derived work, you have to comply with the list yourself.

I do hope this goes without saying (and it is also in the license). But I didn't understand @CorporatePittypang to be asking anybody to take responsibility.

You can, of course, hire someone to audit Orchard against these requirements, and then they can take responsibility.

@Piedone That's what I mean: Orchard Core has a solid community, support materials, and active development.

However, that is just the source part of it, and it comes without warranty.

But when you deploy the derived work, the IT infrastructure part still depends on the users of the software, and the release cycle and when to deploy a new version also depend on the users of the software and the infrastructure they use.

You can do it on your own, or you can hire someone to do it for you.

I wouldn't push people who contribute to an open-source project to take on this kind of responsibility.
I just wanted to know whether the core itself has been audited before.
Thanks for the answers :)

@CorporatePittypang IMHO open-source software doesn't comply with such a standard when you just take the code or the binary.

Let's put it this way -

  • Chromium is open-source software; however, Google provides support for Google Chrome.
  • .NET Core is open source; however, Microsoft provides LTS support for the .NET Core SDK.

Getting open-source software is only one side of the story; getting support and being compliant with standards is a different part of the story.

