Note from Tom: If you’re only looking for
today’s pandemic post, please go to my new blog. If you’re looking for my
cyber/NERC CIP posts, you’ve come to the right place.
This is the first in what will
likely be a series of posts (not all consecutive, to be sure) prompted by the
new North American Transmission Forum “Energy Sector Supply Chain Risk
Questionnaire”, which you can find by going here (I recommend you look at the
formatted version to start). The questionnaire was posted last week and was the
subject of a webinar this afternoon; the webinar was recorded and will
presumably be posted on NATF’s site soon.
I want to say up front that the
questionnaire is a very important document, along with NATF’s “Supplier Cyber
Security Assessment Model”, which you can also find at the link above. I’ve
already spent a number of hours with the questionnaire, and I’ll definitely
spend more! I’ve learned a lot from it. However, I won’t hide the fact
that there are some important differences between my approach to supply chain
cyber risk management (and of course to CIP-013 compliance) and NATF’s.
I’m not saying one approach is
“better” than the other, and truth be told there’s no way to make an overall
comparison between my approach and NATF’s. The comparisons should really be
made between their approach and my approach in specific areas, which I will do
in these posts. Fortunately, it’s very possible to combine aspects of both of
our approaches; I’ve already taken to heart a couple of things I saw in the
questionnaire or learned in the webinar. I’ll throw the comparisons in front of
you in these posts, and you can make your own decisions on which approach you
like better, in each particular case.
I have already developed,
working with my CIP-013 clients, a list of questionnaire questions for Suppliers
and Vendors. The questions are derived from a set of supply chain cyber
risks to the BES that I and my clients have identified over the past year and a
half (with help from various NIST documents, the DoE Procurement Language, CIP-013
R1.2.1-R1.2.6, and especially the NATF Criteria). My rule is that there should
be one question (and sometimes two) for each risk. Every supplier/vendor risk
should have a question, and conversely no question should be asked
unless it corresponds to a significant risk. Why ask a question if you don’t
care about the answer?
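To make that rule concrete, here’s a minimal sketch in Python of the kind of cross-check I have in mind; the risk and question IDs here are hypothetical, not taken from the NATF questionnaire or from my clients’ lists:

```python
# Hypothetical example: the risks and questions below are illustrative only,
# not the real NATF or client content. The point is the rule, not the content.
risks = {
    "R01": "Supplier does not enforce MFA for remote access to its network",
    "R02": "Supplier has no process for identifying and patching vulnerabilities",
}

questions = {
    "Q01": {"risk": "R01",
            "text": "Do you require multi-factor authentication for all "
                    "remote access to your network? (Yes/No)"},
    "Q02": {"risk": "R02",
            "text": "Do you have a documented process for identifying and "
                    "patching vulnerabilities in the products you sell us? (Yes/No)"},
}

# Rule 1: every risk has at least one question (and ideally no more than two).
risks_covered = {q["risk"] for q in questions.values()}
uncovered = set(risks) - risks_covered
assert not uncovered, f"Risks with no question: {uncovered}"

# Rule 2: every question corresponds to an identified risk.
orphans = [qid for qid, q in questions.items() if q["risk"] not in risks]
assert not orphans, f"Questions with no corresponding risk: {orphans}"
```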
So my big concern in going
through the NATF questionnaire is: Which of their questions correspond to
significant risks that I and my clients haven’t already identified? These will
all be incorporated into my questionnaire, and the risks themselves will all be
incorporated into my spreadsheet of risks.
There are about 200 questions in
the questionnaire. I’ve gone through them and classified them into 6 or 7
categories, depending on a) whether I will incorporate them into my
questionnaire, and b) if I don’t want to, why I think they shouldn’t be
incorporated.
One of those categories is
“essay questions”, which I define as questions whose answers require a human
with cybersecurity expertise to evaluate. The machine-scorable category, by
contrast, includes Yes/No questions, as well as multiple choice questions in
various forms. As long as you could write a short algorithm for scoring the
answer, I don’t consider the question to be an essay question.
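To illustrate what I mean by a “short algorithm”, here’s a minimal sketch, with made-up questions and made-up scoring rules (nothing here comes from the NATF questionnaire or from my own questions), of how a Yes/No or multiple choice answer can be scored with no human judgment required:

```python
# Hypothetical examples only; the questions and scoring rules are illustrative.

def score_yes_no(answer: str) -> str:
    """A Yes answer indicates the control is in place, so likelihood is low."""
    return "low" if answer.strip().lower() == "yes" else "high"

def score_multiple_choice(answer: str) -> str:
    """Map each allowed choice directly to a likelihood score."""
    mapping = {
        "we patch within 30 days": "low",
        "we patch within 90 days": "medium",
        "we patch on no fixed schedule": "high",
    }
    # An unrecognized answer is treated as high likelihood.
    return mapping.get(answer.strip().lower(), "high")

# "Describe your authentication and authorization processes," by contrast,
# has no such mapping; that is what makes it an essay question.
```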
I only found three true essay
questions. One example: “Describe your authentication and authorization
processes.” I don’t want to include essay questions in my questionnaire,
although in at least one case I liked the subject of the question so much that
I figured out a way (without too much loss of generality) to turn it into a
machine-scorable question.
The scores I’m talking about are
Supplier/Vendor Likelihood Scores, which are perhaps the key element of my
CIP-013/SCRM methodology. I’ll discuss why they’re important in another post
(as well as why they’re likelihood scores, not risk scores), but for the moment
I’ll just point out that the point of my questionnaire isn’t to develop an
overall cyber likelihood score for a supplier, but to develop a score
(low/medium/high) for each risk that applies to that supplier (I and my
clients currently have 50-60 supplier/vendor risks identified, with an
approximately equal number of questions). When a procurement happens, the
entity looks at these scores and, for any risks that score medium or high, puts
in place mitigations for both the procurement process itself and the
installation. (I consider a low likelihood score to be the same as saying
the risk is mitigated; you can’t get lower than low.)
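To show what that looks like in practice, here’s a minimal sketch of that procurement-time step, again with hypothetical risk names and scores rather than anything from my actual spreadsheet: take the supplier’s per-risk likelihood scores and flag every medium or high score for mitigation.

```python
# Hypothetical supplier scores: one low/medium/high likelihood score per risk,
# not an overall score for the supplier.
supplier_scores = {
    "R01 - No MFA for remote access": "low",
    "R02 - No vulnerability patching process": "medium",
    "R03 - No background checks on developers": "high",
}

# Low scores are treated as already mitigated; medium and high scores need
# mitigation in the procurement process and/or at installation.
needs_mitigation = {
    risk: score for risk, score in supplier_scores.items()
    if score in ("medium", "high")
}

for risk, score in needs_mitigation.items():
    print(f"Plan mitigation for: {risk} (likelihood: {score})")
```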
But why am I obsessed with
machine scoring? Do I think humans are obsolete, and soon all this stuff will
have to be done by machines? I don’t really think that, although I have for
many years fantasized about the “revolt of the machines”, in which – aided by
AI – the computers and cell phones start comparing notes and begin to realize they
could probably run the world a lot better than we can - so it’s time to get rid
of all of us (kind of like the computer HAL in the movie 2001: A Space
Odyssey, although on a planetary scale). If that ever happens, we’ll
probably be in a lot of trouble. I hope they’ll be kind to us, and maybe let us
live on in zoos or nature preserves somewhere.
Here is why I think
supplier/vendor questionnaires should be machine scorable, meaning that essay
questions have no place in them:
- Essay questions require someone with a lot of cyber knowledge and experience to sit down with every questionnaire response and make a judgment on each answer. Most of the people I know who meet that description simply don’t have the time to do it.
- The answers to essay questions are by definition not comparable. If you want to compare a supplier’s scores in year 1 vs. year 2, you have no good way to compare the scores for essay questions; the same applies if you want to compare one supplier to another in the same year.
- For public entities subject to FOIA requests, there’s real jeopardy in using essay questions. Let’s say one vendor thinks they were treated very unfairly in an RFP and wants to know why their answer to an essay question was judged to be high risk, while the winning vendor had a low risk score. It’s going to be very hard (although not impossible) to give a clear answer to that question. But if the question is, say, yes/no, and the scoring methodology says a yes answer means low risk while a no answer means high risk, the vendor has no case at all.
So the next time you’re tempted
to create an essay question rather than one that can be scored objectively, ask
yourself if it’s really worth the extra trouble and risk to do so. Of course,
if you answer yes, then go for it! I’m not going to tell you how to make your
risk decisions.
Any opinions expressed in this blog post are strictly mine
and are not necessarily shared by any of the clients of Tom Alrich LLC. If you would like to comment
on what you have read here, I would love to hear from you. Please email me at
tom@tomalrich.com. Are you working on your CIP-013 plan and you would like some
help on it? Or would you like me to review what you’ve written so far and let
you know what could be improved? Just drop me an email!