I have had so much to say this past week, but neither the time nor the energy to say it. My blog has been lonely.


If you have an observer (o) floating motionless in the water and a dude (d) on a boat going by at 20 kph, then the observer sees the dude moving at 20 kph and the dude sees the observer moving at -20 kph. The movement is relative to the frame of observation. Thus relativity.

If the dude throws a baseball at 80 kph then he sees it moving away from him at 80 kph. The observer (assuming the ball is thrown in the same direction the boat is moving) sees the ball moving at 100 kph (80 + 20). Cool?

But if the dude shines a flashlight, he sees the light moving away from him at the speed of light (1,080,000,000 kph). The observer sees the light moving away at the speed of light also. NOT the speed of light + 20 kph (1,080,000,020 kph). Nope. Weird, right?

This is the theory of special relativity. It is weird and counter-intuitive. For things going very, very fast, physics just doesn’t add up… OK – it does still add up, but you’ve got to be ready to have your mind blown. If both the dude and the observer see light moving at the same speed from different points of reference, that means that something else is changing when the boat is moving 20 kph away from the observer. For the dude, it must mean that either time is getting shorter or distance is getting longer. That is, less time is passing for the dude on the boat than for the observer in the water. Wild!

Now if the boat is moving fast enough, this actually starts to get noticeable. If the dude wears a wrist watch and the boat moves away from the observer at near the speed of light for a while and then comes back at near the speed of light for a while, when the dude and the observer meet up and compare their watches, the dude’s watch will be behind.
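Here’s a quick sketch of the arithmetic, using the rounded light-speed figure from above; the 90%-of-light-speed example speed is my own addition:

```python
import math

C_KPH = 1_080_000_000  # speed of light in kph, rounded as above

def dilated_hours(observer_hours, speed_kph):
    """Hours that pass on the moving dude's watch while observer_hours
    pass for the observer in the water (special-relativistic time dilation)."""
    gamma = 1 / math.sqrt(1 - (speed_kph / C_KPH) ** 2)
    return observer_hours / gamma

# At boat speeds the difference is far too small to read off a wrist watch.
print(dilated_hours(10, 20))                      # effectively 10 hours
# At 90% of light speed the dude's watch falls well behind.
print(round(dilated_hours(10, 0.9 * C_KPH), 2))   # 4.36 hours
```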

Still here?

Now the dude absolutely cannot move faster than light. At the speed of light, he would experience no time passing at all. His watch would show the same time he left when he gets back to the observer. And he really, really absolutely can’t move faster than the speed of light, or when he got back his watch would show a time earlier than when he left, and that is just wack-a-doodle. (Meanwhile, who knows how long our poor observer has been out there treading water! Poor observer – fortunately, I’ve chosen an observer who is a very good swimmer.)

Now we get to this week, when some scientists say they’ve spotted something going faster than the speed of light. They’ve fired this neutrino and measured how long it takes to get from point A to point B and it shows up some billionths of a second before it should be possible.

It is tempting to say that this changes everything! Modern physics – our understanding of electrodynamics, astronomy, creation, quantum mechanics and a bunch of other stuff is based on relativity. It may be counter-intuitive, but experimentally and logically it is the way things work. Zowie and wowsers! The universe is tilting under our feet!

But, but, more likely what will happen is that they’ll find a mistake. The way the measurement is done will have a small error. Or they’ll have forgotten to take some complication into account in their formulas. When it is all figured out, the neutrino will turn out to be going slower than is being measured.

Or, maybe, just maybe, it will turn out to be true. That will be nifty. Not because it invalidates everything that came before, but because now they have to figure out how to explain it without throwing out everything else. When relativity was discovered it “invalidated” Newtonian dynamics. Well, kinda. At normal everyday speeds Newtonian dynamics works just fine. They (the dude, the observer (who is now out of the water with really pruny skin) and some scientists) just had to concoct some alternative formulas to explain the slow-running watches when things are moving really, really fast.

Wouldn’t that be exciting? I think so. Figuring that out would be tres nifty.

I’m hoping for that instead of there being an error in measurement. I’m a romantic.

But it might all just be relative. (Sorry, that was cheesy, but I couldn’t resist.)

Note: changed all speeds to kph because the fact I had used miles was just bugging me.


Ethernet, ATSC and MPEG-2

So last night we switched things up and debated technology stuff instead of ethics, politics and ideological systems.  Oddly, even though this was an area in which the amount of knowledge around the table was significantly greater, we flailed with the same random intensity as in any other debate we have.  Funny.

First we talked about digital TV bandwidth, compression and problems.  This is a subject I’m quite interested in as it forms the basis of my current job, but it is one that I am just learning.  I didn’t provide information as authoritatively as I should have – so I did some research this morning…

(First, protocols and standards – I’m using the words interchangeably although there is some difference.  Both imply a set of rules used to achieve a reproducible set of outcomes.  They are standardized by various bodies like the IEEE, ITU, MPEG and the IETF.  The standards set by the IETF are known as protocols and define most of the Internet standards.  TCP would be an Internet protocol.  But Ethernet is an IEEE standard (802.3).  The main difference is the standards body that ratifies and keeps them.)

DTT – or digital terrestrial television – refers to the signals that go out over the air.  Analog signals are governed by a standard called NTSC in North America (different elsewhere in the world).  That has been replaced with digital signals in the States and will be replaced in major urban centres in Canada this coming August.  The digital standard is ATSC.

Raw video data is not transmitted in ATSC.  It transmits encoded MPEG-2 transport streams.  Raw video would be about 1 Gb/s.  The max bitrate for ATSC-encoded MPEG-2 streams is 19.6 Mb/s.  Producers commonly encode their HD broadcasts between 15 and 18 Mb/s.
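The arithmetic behind those numbers, as a sketch – I’m assuming a 1920×1080 frame at 30 frames per second with 16 bits per pixel (8-bit 4:2:2 sampling); that sampling choice is my assumption, not a figure from the standard:

```python
def raw_bitrate_bps(width, height, bits_per_pixel, fps):
    """Bitrate of uncompressed video, in bits per second."""
    return width * height * bits_per_pixel * fps

# Assumed HD frame: 1920x1080, 16 bits/pixel (8-bit 4:2:2), 30 fps
raw = raw_bitrate_bps(1920, 1080, 16, 30)
atsc_max = 19_600_000                  # the 19.6 Mb/s ATSC ceiling
print(raw)                             # 995328000 -- roughly 1 Gb/s
print(round(raw / atsc_max))           # 51 -- so MPEG-2 compresses ~50:1
```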

MPEG-2 is the same protocol used to encode video on DVDs.  It is also what is used to encode the Shaw cable signals.  A normal SD stream is around 7 Mb/s.  HD is encoded at the same levels over cable as for DTT, so between 15 and 18 Mb/s.  Some cable companies (Shaw among them) further compress the MPEG-2 stream received from the broadcasters to save bandwidth.  This can mean an HD stream is carried in 12–13 Mb/s.  (QAM or PSK is the equivalent of ATSC, dictating how the encoded stream moves through the medium: ATSC = air, QAM/PSK = cable.)

(Blu-ray disks don’t use MPEG-2 encoding, but MPEG-4, which offers greater compression.  IPTV systems might carry MPEG-2 streams for their SD channels and MPEG-4 streams for their HD channels – these are passed over the Internet using the RTP protocol.)

Many factors can cause artifacts in the video when it is watched.  The quality of a picture depends on many more factors than the final bitrate of the encoded stream.  Poor encoders, lossy media and decoder errors can all produce artifacting that presents as macroblocking, pixelization and other problems.

Obviously, encoding an already encoded stream a second time (once by the broadcaster and again by the cable company) is more likely to introduce such artifacts.

The second question was “What is Ethernet?”  We had a hard time giving this answer as we know quite a bit about the general subject of networking, but none of us are really Ethernet experts.  We got bogged down in semantic arguments about the meaning of standards vs. protocols, the OSI model (OSI is defined by yet another standards body, the ISO) and others.  I think in the end we provided several good answers.  I’ll try and summarize –

Ethernet (IEEE 802.3) is a set of standards used in a LAN.  It is the most common standard defining how Internet traffic is passed along the wires in the common home and business network.  It defines how signals should be transmitted and received along the wires connecting your computer to your local switch, router or modem.  It can run over many wires including twisted pair, coax and fiber optics, but the most common is what people call CAT-5 cable with RJ-45 connectors (often just called Ethernet cable).

So the standard talks about what cables and connectors are allowed, how to pull signal on and off the wire, how to deal with noise and signal loss, etc.  This all composes what is called layer 1, the physical layer, of the OSI model.  Ethernet also talks about addressing for devices on your local network – so that your switch has a different address than your computer.  These are known as physical addresses or MAC addresses (MAC = Media Access Control).  This is activity at layer 2, the data link layer, of the OSI model.

Now Ethernet is just one of many protocols.  Another LAN protocol might be Token Ring.  Equivalent protocols for sending wireless signals would be IEEE 802.11n (802.11a, 802.11b and others too).  WAN protocols for crossing larger distances are varied and could include frame relay, ISDN (both mostly obsolete) or ADSL.  (I don’t know much about WAN protocols.)  Protocols for sending data to storage arrays (Dave’s specialty) would include ESCON, FICON and Fibre Channel.  (I know nothing about these.)

As the traffic moves from one wire to another it is bridged from one of these standards to another.  The neat thing about the OSI model is that higher levels of the network stack are masked from the changes of wire beneath them.  The level above 2 is 3 (surprise), the network layer.  This is where the IETF protocol IP (Internet Protocol) operates.  So the same IP packet moves across the network from my PC to the server, which might be miles and miles away, across several different underlying wire technologies (or wifi or wireless or satellite, etc.).  Above IP are four more layers in the model (which is not strictly adhered to by any real-life network stack, but is very useful for reference).  Layer 4 would be Transport and have TCP as an example.
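As a cheat sheet of what lives where, here’s a sketch of the layers with the example protocols mentioned (layers 5 and 6 didn’t come up in the discussion, so they’re left empty):

```python
# OSI layer number -> (layer name, example protocols named above)
OSI_LAYERS = {
    1: ("Physical", ["802.3 signalling", "ADSL"]),
    2: ("Data Link", ["Ethernet MAC", "Token Ring"]),
    3: ("Network", ["IP", "ICMP", "IGMP"]),
    4: ("Transport", ["TCP", "UDP"]),
    5: ("Session", []),
    6: ("Presentation", []),
    7: ("Application", ["HTTP", "DNS", "SMTP", "SSH"]),
}

def layer_of(protocol):
    """Return (number, name) of the layer listing this protocol, else None."""
    for number, (name, examples) in OSI_LAYERS.items():
        if protocol in examples:
            return number, name
    return None

print(layer_of("IP"))   # (3, 'Network')
print(layer_of("UDP"))  # (4, 'Transport')
```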

Both Ethernet and IP define addresses for network objects: MAC addresses for Ethernet and IP addresses for IP.  A MAC address looks like 00:1f:5b:d8:c1:0b and is used to tell one Ethernet host apart from another on the same wire.  An IP address is used to tell one IP host apart from another across the Internet.  Ethernet only cares about getting you from one end of a wire to the other, so its addressing is specific to that purpose.  IP cares about sending packets across the Internet, so it needs a higher-level address for that purpose.

(Here’s a place where the OSI model fails a little – there is a protocol called ARP (Address Resolution Protocol).  It is used to convert IP addresses to MAC addresses.  It sort of exists between Ethernet and IP.)

Now to join the two discussions – say you were watching an HD IPTV show on your XBOX.  The full protocol stack as it crosses the wire between your Xbox and switch could be:

Ethernet -> multicast IP -> UDP (an equivalent to TCP) -> RTP (real-time protocol used for streaming video/audio over TCP/IP networks) -> Encryption -> MPEG-4.

Once it gets to your ADSL modem the next hop from the modem to your local DSLAM would be:

ADSL -> multicast IP -> UDP (an equivalent to TCP) -> RTP (real-time protocol used for streaming video/audio over TCP/IP networks) -> Encryption -> MPEG-4.
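The point of those two stacks is that only the bottom layer changes from hop to hop; a sketch:

```python
# Everything above the link layer rides along unchanged at each hop.
UPPER_STACK = ["multicast IP", "UDP", "RTP", "Encryption", "MPEG-4"]

def stack_for_hop(link_layer):
    return [link_layer] + UPPER_STACK

xbox_to_switch = stack_for_hop("Ethernet")
modem_to_dslam = stack_for_hop("ADSL")
print(xbox_to_switch[1:] == modem_to_dslam[1:])  # True: only layers 1/2 differ
```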

Any better?

Other protocols at the IP level would include IGMP, ICMP and IPv6.  TCP and UDP are the most common protocols at layer 4.  HTTP for web traffic is the most well-known protocol at layer 7.  Others could include DHCP (dynamic IP acquisition), DNS (IP address to host name translation), SMTP (Internet mail) and SSH (secure shell – used to make an interactive session or tunnel with a remote host).  Layer 7 is my bailiwick although I’m not bad at layer 4 and I know layer 3 too.

IAM – Object Management – Part 6

Part the last.  It has been well over a month since part 5.  I wanted to finish this overview before I do any talking about specific solutions.

I’ve read several papers lately that provide an overview similar to what I’ve attempted.  Better than this.  Most focus on planning an IAM project.  They generally identify three goals: cost reduction, security and compliance.  Hopefully there has been information on all three of those goals peppered into my blogs until now.

Today is specifically about auditing and compliance.  Through every part of this I’ve mentioned keeping a log, but mostly focused on a change log.  The full auditing that is required for your IAM solution will likely be greater than that.

When you look at this there are four areas of logging to consider:

1) User activity – both normal and abnormal.  Normal user activity would be authenticating, authorizing, password changes and self-managed data changes.  Abnormal activity is attempts to access unauthorized data or resources, authentication attempts outside of allowed windows, multiple logon failures, logons from unauthorized terminals, etc.  Normal activity should have a policy for retention.  Additionally you will want to give some thought to how the data will be used.  If you want to use the data to form a trail of the user’s activity, it will need to be mined from the complete logs.  This might require additional tools or functionality in your IAM solution.  Normal data can also be used for capacity management for the system or for the systems that the IAM solution is protecting.  Abnormal data will need to be actively monitored, with thresholds for investigation and alerting.  You might need different policies on alerting and investigation for normal accounts vs. privileged accounts.  Or for normal accounts attempting access on privileged data or resources.

2) Object Administration – This will fall into two pieces.  Change logs from the change management system and audit logs of changes within the IAM solution itself.  In general this is simply another form of normal activity for the product so all the guidelines I just mentioned apply.  Think about what reports will be needed and what will need active alerting.

3) Configuration Changes – Once again there will be two pieces: the change logs and audit logs for the product.  Some configuration might be done outside the product as well – OS, database, configuration files, etc.  Another item to consider, for both this and object administration, is whether you want to have configuration management controls.  That is, controls within the product or integrated into the product that prevent changes without approval.  A basic form of this exists in Windows – an object can be marked to prevent deletion.  The prevent-deletion box needs to be unchecked explicitly.  The concept can be expanded to any change within the tool.  The main reason for such a control is to prevent accidents: mass deletions or adding ‘Everyone’ to a privileged access group.

4) Access tracking – none of the activity logs will be able to produce some of the basic reports you will need.  Can Suzy access resource X?  Who can access resource X?  What are all the resources Suzy has access to?  These questions range from simple to difficult.  For instance, in Windows the first can be answered by looking at the ACL and seeing if Suzy is explicitly there or a member of any listed groups.  With group nesting this might take a few minutes, but can be done by hand pretty quickly.  The second question is much trickier and will take a while to do by hand, but is pretty easy to script.  The third is nearly impossible in Windows.  Either a 3rd-party product is needed or a script that can walk all the ACLs looking for anywhere Suzy or her groups’ SIDs are located – this is both hugely time and resource intensive.  In other systems the questions might be much easier or harder to answer.
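The alerting idea in point 1 can be sketched like this – the threshold numbers are invented for illustration, with privileged accounts getting the stricter limit:

```python
from collections import Counter

THRESHOLDS = {"normal": 5, "privileged": 2}  # hypothetical policy values

def accounts_to_investigate(failed_logons, account_types):
    """failed_logons: one account name per logon-failure event.
    account_types: account name -> "normal" or "privileged"."""
    counts = Counter(failed_logons)
    return sorted(
        account for account, failures in counts.items()
        if failures >= THRESHOLDS[account_types.get(account, "normal")]
    )
```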
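The first question in point 4 – “Can Suzy access resource X?” – comes down to walking nested group memberships.  A sketch with invented names (real Windows checks also involve SIDs and deny entries, which I’m ignoring):

```python
def can_access(user, acl_entries, groups):
    """acl_entries: users/groups listed on the resource's ACL.
    groups: group name -> list of members (users or nested groups)."""
    to_check = list(acl_entries)
    seen = set()
    while to_check:
        entry = to_check.pop()
        if entry == user:
            return True
        if entry not in seen:            # guard against nesting loops
            seen.add(entry)
            to_check.extend(groups.get(entry, []))
    return False
```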

Once you have answered what is needed, it is important to also determine what isn’t.  The volume of auditing data can potentially be huge.  Turning on all auditing could be a resource hog and produce so much data as to be practically useless.  If there are legislative or other compliance regulations or security policies, look to them.  Determine what will be needed to troubleshoot normal problems.  Determine what will be needed to demonstrate ongoing efficient and secure operation of the system.  Resist the urge to turn on logging beyond this.

If additional logging will be needed during change implementation or troubleshooting, determine what the impact of enabling it will be.

Also remember that the auditing data is part of a person’s personal information and will need to comply with the privacy regulations you are under.  In particular that means: don’t collect what you don’t need, don’t disclose it, and delete it when it is no longer required.

That is all I have to say about that.  If I do any additional IAM blogs they will be about specific issues and/or products.


IT Services, utility services and shared services

So I want to do a post about outsourcing, but I need some groundwork before I can do that.


What is a service?  This isn’t an IT definition particularly.  You have a business or wish to accomplish a goal.  You are responsible for the end product.  To do that you take a variety of inputs from service providers.  They enable you to accomplish your goal.  The other distinction is that a service isn’t a good – something you can own.

Let’s take making a meal.  There is a giant service industry around that.  If you want to prepare and cook your own meal you buy your goods at a grocery or from a farmer/producer directly.  If you are lazy, you start to pile on services.  Grocery delivery would be a service.  Or lazier – a meal prep service that delivers the portions.  Or lazier – a Meals on Wheels deal that delivers hot food to your door.  Or lazier – pizza delivery (which I am waiting for now).

Information Technology (IT) may or may not be a service.  In most cases it is.  Your business is not selling IT.  In some cases it isn’t.  Just as you might cook your own meal you might do part of the IT yourself and let service providers provide other parts of IT.  Like my meal example you end up with a variety of potential scenarios.

  1. You do everything yourself.  You buy some hardware and some software.  Some hardware and software you might even produce yourself.  You run all of those applications.  Maybe you are too small to afford a service.  Maybe you are too unique to find a service that fits you.  Maybe you just want that level of control.  Maybe you are a service provider and are reselling IT as a service.
  2. You run some IT yourself.  IT that directly supports your business – unique applications you run yourself.  The other IT services are provided by a service provider.
  3. You have a service provider provide all your IT.

There aren’t just 3 options.  There is a continuum along the whole spectrum.

In option 2, the usual splits between what you might run and what you ask someone else to provide are Line of Business applications (LOB), Commercial Off-the-Shelf apps (COTS), Infrastructure IT (like network and storage) and IT processes (like a Service Desk or Desktop Support).

The flaw in the whole analogy is that IT is not quite as mature as the food industry.  Rather than providing a generic service like food delivery, an IT service provider must know your business in depth.  Even Infrastructure IT like storage must be tailored to your specific needs.  This isn’t a fatal flaw, but it means that a business can’t treat IT just like catering.  The service provider and receiver relationship is tighter.

Maybe in 20 years you will be able to pull your IT service off the shelf like at the grocery and just get it delivered to you.  Cloud Computing is an example where the industry is moving rapidly in that direction.  However, it certainly isn’t there for all IT yet in my opinion.

One final option is delivering an IT service internally to your business.  They are a separate group with their own business goals.  This can be layered onto any of the options.  We can call it an insourced IT model, vs. an external IT service provider being an outsourced IT model.

Utility Service

So a utility – power, water, garbage collection.  There is a current trend in IT to sell and buy IT as a utility service.

With a utility service there are many constraints.  You choose your service off a small menu.  The service provider does everything else.  When you want the service you get it.  You pay according to your use.

I see the benefits of this model.  You offload the risk onto the service provider.  You simplify the contracts.  You should have lower costs.  This is a fine goal to be aiming for.  But I see huge problems with it currently because of the immaturity of IT as a service.

You can’t select off a small menu.  You need IT to be catered to your business.  IT is not as reliable as power and water, or if it is, it is with increased costs.  You need IT to change with your business.  It needs much more flexibility than a utility service.  Finally, IT is not just a quantitative service.  IT is not just X watts of power or Y liters of water.  There is also a qualitative aspect to it.

IT will mature and treating it like a utility service will become more and more possible.  But for the moment, I believe that IT needs to be treated as a partnership service except for under specific circumstances.  Buying IT as a utility service now will require accepting very broad constraints on your business.

Shared Services

I am a big fan of shared IT services.  But it too comes with constraints on your business.

Shared service means providing an IT across an organization from a single service provider.

Shared services provide a large number of benefits.  Cost reduction is a huge one.  It is achieved by removing the duplication of services and by achieving economies of scale.  It allows for developing specialist expertise rather than just generalists.  It allows for standardization of service, which should lead to further cost reduction, better completeness and correctness.  It also allows investment in IT to be leveraged across the organization rather than just a single unit, which enables taking higher risk to achieve greater opportunities (if you have that sort of organization).  Some services do not make sense to deploy for 2,000 people, but they might make sense for 10,000.

Most importantly though, it is basically impossible to achieve IT security, compliance and auditing without deploying a shared IT service.  Since these three items are often legislated or required by contract, the existence of IT as a shared service might be necessary.

However implementing a shared IT service doesn’t come without its own drawbacks.

  1. Less agility – because you are implementing a service across a whole organization it will not be as nimble and able to react to change as a dedicated service.
  2. Less flexibility – What might be a good idea for a business unit might be a bad idea for the whole organization.  Because you must keep the needs of the whole organization in mind the service might not be as catered to each business unit.
  3. The business unit will have less control.  No one likes to surrender control.
  4. Finally, that tie between the business and IT will be strained.  I said that IT needs to be a partnership and shared services make that partnership less personal.

These are not reasons not to do shared IT, but they are concerns that should be examined in developing such a portfolio of services.

OK – that is what I had to say about IT services for today.

Chicken Counting – Job Hunt

So throughout this process I’ve maintained a very disciplined don’t count your chickens attitude.  Which is good because so far there would have been far more disappointments than cheering.

So I am not counting my chickens today either following my second interview, but I am starting to look up recipes for chickens if you follow my metaphor.

I am not sure what information I heard and saw today would be considered confidential.  I will give an overview of the exciting bits here, but either at a high level or a low level.

Team and Management

The team seems very good.  I have met David, Karl, Terry, Cliff and Paul plus the manager Michael.  That is about half of a 10-person team.  They seemed confident, casual and knowledgeable.  That is a good combination.  The interview today was mostly for team fit I think.  (Interview is likely a strong term for what occurred.)

I think I did alright.  I just hope I communicated my enthusiasm.  I don’t get gushy in person, but there is a lot about this job to gush about.

Management wise I also saw no red flags.  It looks very demanding, complex and pressure filled, but I prefer that to the alternatives.  I also think the manager keeps most of the BS away from the team and lets them work.  Part of today was also selling me on the company.  It was interesting that Michael didn’t paint a 100% rosy picture.  Of course no environment is perfect.  It is a good bit of honesty to reveal a couple of warts.  None of those warts were worrying.


The technology is awesome.  The scope and scale is impressive.  Furthermore it is leading edge.  That is exciting and worrying.  It seems though that the full backing of the company and their partners is there for this offering.  That is great.

Questions that arose include:

  1. Why is IIS a critical part of the infrastructure?  I cannot see how it fits in.
  2. Why are encoders needed after broadcasts are received?  Don’t they come appropriately encoded?  (My guess is they come as some flavour of NTSC signal (or something more modern) and need conversion to an IP-based encoding.)
  3. The underlying server OS seems to clash with the middleware platform.  I wonder how that works.

What is very exciting is the amount I will need to learn tech-wise.  This is a challenge I am confident of excelling at and learning is always one of the best parts of the job.

Data Centre

The facilities were very nice.  And huge.  They have equipment for the service on three floors.  Each floor is a bit smaller than the NCC Data Centre.  But it looks bigger – partially due to the 18′ ceilings.  They also rack much more compactly than we did so they fit more per square meter.

Especially impressive was how clean and organized it was.  I was quite proud of the cabling at NCC, but it looks like a dog’s breakfast compared to the way they have theirs organized.  There is someone somewhere in the company who takes pride in making it look that good.  I bet everyone is just indoctrinated into that mindset now.  They would be shocked at how some other data centres are run.


OK – I saved the most impressive bit for last.  Go pull out your Watchmen TPBs or think back to the movie if that isn’t too painful.  Ozymandias had a bank of monitors in his Antarctic retreat.  Imagine that, but larger, with current technology and cooler.  That is the monitor room.  I kept expecting Bubastis to wander out from around a corner.

I even got to see it in action as there was a minor problem during my tour.  The monitor person had isolated the problem feed and was looking for a solution.  During the 5 minutes I was in the room it seemed to get solved.  The techs were professional and quick.  There was no panic.  That was very impressive.  (I want to highlight that it was a minor issue as well – let’s not start slagging my potential future employers.  🙂 Entirely within the realm of the reason that IT folk like me need to exist.)

The bad

It isn’t perfect.  That also makes me happy.

The big issue for me is that there is no parking.  It won’t be an issue during the summer, but ice is not my friend.  So I’ll need to work on a solution for that before the fall.

I also got a PFO from AIMCo today just before my visit.  I interviewed with them on May 10th.  I had given up on that position.   But it means that one of my other options is officially gone now.

This job doesn’t have a set start date.  They are trying to determine the date before the end of the month.  They sound confident for August, but I’m willing to let it stretch until September.  Coordination is always tough during the summer holiday season.  But the job is really there.  Provided I didn’t burn any bridges today I should be able to get it.

Finally I booked the parkade for two hours, but I was there for 2.75 hours.  That is good because I think it means that they didn’t rush me through anything.  But bad because I racked up a $35 ticket.  Nuts I say.

To sum up

Yay!  (Cautiously.)

IAM – Object Management – Part 4

Everything that came before brought us here.  Pithy.  Your IAM system is not doing you much good unless you can control access to your applications.

A lot of what you manage for access control is similar to other objects and properties.  Enrollment resembles registration and provisioning.  Removing access can resemble deprovisioning.  Many of the same controls apply – methods of request, approvals, logging.

Let’s discuss what is different.

Groups and Roles

There are two primary methods of access control: groups and roles.  A group contains a set of users and is applied to a resource.  Membership in the group grants authenticated users access to the resource.  (Note: discussing how that is done – via claims, tickets, a list of allowed access delivered with authentication – is beyond the scope of this current discussion.)  The actual group setup can be quite complicated.  Your IAM product may have different group types or allow nested groups.  The key to this type of management is that you look at each resource and decide who should be allowed access.

Roles work in the reverse way.  You design your roles around your organization and its capabilities.  Those are mapped onto the resources.  For instance a role might be mapped onto your Sales team.  Roles will then provide access to a group of resources.

The advantage of roles over groups is ongoing administration.  Users do not have to request specific access; they just need to say that Jane has joined the Sales team or Jordan has joined the new app development project.  The disadvantage is the setup time.  When a new resource is made available, analysis of which roles need access to it needs to be completed.

Many modern applications include built-in roles.  This lessens the setup time.
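A sketch of the role model in code – saying “Jane has joined the Sales team” grants everything the role maps to.  All the names here are hypothetical:

```python
# role -> resources it grants (hypothetical mappings)
ROLE_RESOURCES = {
    "sales": {"crm_db", "sales_reports"},
    "exec": {"sales_reports"},
}
USER_ROLES = {}

def join_team(user, role):
    USER_ROLES.setdefault(user, set()).add(role)

def accessible_resources(user):
    resources = set()
    for role in USER_ROLES.get(user, set()):
        resources |= ROLE_RESOURCES.get(role, set())
    return resources

join_team("jane", "sales")
print(sorted(accessible_resources("jane")))  # ['crm_db', 'sales_reports']
```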

User types

Bad news.  I hid this from you until now.  The question here is: is one credential enough?  (Assuming your IAM system can provide authorization for all your apps.  If not, you’ll need multiple credentials in multiple IAM systems – but SSO is a subject for a later post.)

What types of applications and data do you have in your organization?  Are they all created equal?  Are you comfortable with the same credential that a user uses to access their desktop and surf the Internet being the same one they use to manage your critical control systems?

Even on less critical applications there are three kinds of access.  I call them user, administrator and maintainer.

  1. User – The basic user of your application
  2. Administrator – Those with greater access within the application.
  3. Maintainers – Those who keep your application running.

Let’s illustrate with your IAM system itself as our example.  Users are folks who authenticate and authorize against the IAM system.  Your administrators are those who create the objects – users and groups.  They set up your workflows and approval chains.  The maintainers make sure it stays running.  They start and stop services, back up and restore the data, maintain the backend database, etc.

It is entirely possible that a single person can use the application with all kinds of access within their job in different situations.

It is common in setting up your IAM system to create privileged use IDs.  Users must use these IDs instead of their normal IDs when serving as administrators or maintainers or when accessing critical applications or data.  The approvals for getting a privileged use ID are stricter, more approvals are necessary and they are more carefully logged.  In setting up groups or roles, perhaps you will specify that only privileged use IDs will be allowed in a specific group.  Or maybe a certain role will require the creation of a privileged use ID.

Integrate with applications

We have not talked very much about how applications integrate with your IAM system.  In authentication the biggest concern is whether you will be able to integrate the application to use the authentication mechanism of the IAM system.  It isn’t quite that simple with authorization.  Nuts.

To create your groups and roles you need to work with the application owners.  To design your authorization you need to ensure that what you design is applicable for the application it protects.

Fine-grained authorization

Finally, so far I’ve only discussed either granting access to the application or denying it, or dividing access into three broad categories: users, administrators and maintainers.  We can call this coarse-grained authorization.  It should be obvious that reality is more complicated.

Fine-grained authorization is controlling access to the application in a more granular way.  Perhaps you are allowed to create new records in a database, but not new tables.  Perhaps you can create users in your IAM system, but not privileged use IDs.
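The contrast between coarse- and fine-grained checks can be sketched like this.  The permission strings and the per-user table are hypothetical; a real system would derive them from groups and roles rather than hard-coding them:

```python
# Coarse-grained: is the user in the application's "admin" group at all?
ADMIN_GROUP = {"alice", "bob"}

# Fine-grained: which specific actions may each admin perform?
# (Illustrative data: alice can create users but not privileged use IDs.)
FINE_GRAINED = {
    "alice": {"records.create", "users.create"},
    "bob": {"records.create", "tables.create", "privileged_ids.create"},
}

def coarse_authorized(user: str) -> bool:
    """Coarse check: membership in the admin group."""
    return user in ADMIN_GROUP

def fine_authorized(user: str, action: str) -> bool:
    """Fine check: does this user hold this specific permission?"""
    return action in FINE_GRAINED.get(user, set())
```

Both alice and bob pass the coarse check, but only the fine-grained check captures that alice may create records yet not tables.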


Access Management can cause bloat.  Much like deprovisioning, you might not get informed when things change: when users no longer need to be in certain groups, when a role has been re-orged out of existence, or when an entire application no longer exists or an upgrade changes the roles needed to access it.  You can end up with empty groups, or groups that are never used to access anything.  You can end up with hundreds of groups per user.

Periodically you need to review your authorization structure – on an application, role or group basis – and make sure it all still makes sense, simplifying where possible.


What processes do you need to set up?

  1. User Enrollment
  2. Role creation and application integration in roles
  3. Group creation/ adding and removing users from groups
  4. Privileged use ID creation and assignment to roles and groups
  5. Integrating applications to the IAM system
  6. Logging and Reporting
  7. Designing initial roles or group structure
  8. Review

Copyright Op-Ed by Loreena McKennitt

So this link (here) is to an Op-Ed piece by Loreena McKennitt.  Once again I think it is a nice contrast to my own point of view.  Another interesting note is that Russell McOrmand, who has posted here in my comments, is the first poster in the Op-Ed comments as well.  He must have his C-32 alerts turned up to high or be following ALL the cool twitters.  He points out his copyright FAQ, which I meant to provide feedback on last week, but did not.  (here)

Ms. McKennitt and I certainly agree that the purpose of copyright legislation is to provide fertile ground for artists – to enable those with talent and fans to potentially earn a living through their art.  I am also fine with implying that rampant piracy is detrimental to this goal and that copyright legislation should deter the commercialization of such piracy and make it clear to consumers what is improper behavior.

However, there are two claims I take a bit of issue with.  She draws a line from poor record sales to poor touring income.  I have heard that many performers are trying to recoup lower record sales (due to piracy and economics and other factors) by enhanced touring.  That is a tough market, but it may be an adaptation required to survive.  (Developing online interactions with fans is another alternative, I think.)  But I do not think piracy leads to low concert ticket sales.  I do not see the link.  A concert experience is not the same as a taped experience – even on a bootleg live MP3.  I think that music pirates and legitimate purchasers of music are both just as likely to attend.  While I have no data to back me up, I’d surmise that if touring is tough it is because it has always been tough and because the economy overall has suffered in recent years.

Her second point is that users do not have real rights.  BS!  As I have said before, we want the artists to make a living because it provides a benefit to the consumers.  Not just because it is nice to live a bohemian lifestyle.  The value of art, I think, is underestimated in society (while there is also a ton of garbage).  I think it provides enormous value.  But it is that value which we are safeguarding first and foremost.  Artists get to make a living as an offshoot of the value (real or perceived) of their product.  So the rights must flow first from the consumer/user to the artist.  Those are the primary stakeholders in copyright.

I’m starting to hate the term ‘balance’ because it can be used by any advocacy group to push for their version of balance.  But we are looking to provide benefit to society and culture, and in doing so promote a marketplace where artists, publishers and other offshoot businesses can survive and be encouraged to produce more work of value to society.  I believe any view promoting only a single entity is short-sighted.