Tuesday, September 29, 2009
Library Journal Oct09 "Have We Created a Monster"
We librarians live in a nonprofit world founded on a belief in serving the common good. Trusting by nature, we have yet to learn how to protect ourselves when doing business in the competitive world of information brokering. Businesses supposedly price according to the old adage "Charge what the market will bear." As purchasing librarians, we are the market, yet we have allowed the cost of databases to get completely out of hand. The time has come for the market to correct itself—with a little help from us.
In 1993, the year I started working in libraries, there were reputedly 284 locations on the entire World Wide Web. Within a few years, over 170 million domain names were in use. Between 1998 and 2001, when I was lucky enough to have my first directorship in a small library in Pennsylvania, spending on subscription databases had increased from $17 million to $50 million a year. Fortunately, Pennsylvania had instituted a program called AccessPA Power Library (now facing state cuts, see News, p. 12), which made a host of subscription databases available to libraries free of charge from the State Library. At the time, we were all aware that we needed to market our databases to the public, convinced that they were not being used simply because the public did not know what was available.
Usage-based pricing
Back then, the advantages of simultaneous users, remote access, and freed-up shelf space had us all giddy, and we allowed anticipation of usage to define pricing. We accepted the idea of billing based on population or number of patrons, even though we had no usage statistics to show whether this made sense economically.
Now that we have these statistics, we must ask whether the original pricing model makes sense. Perhaps a pricing mechanism based on actual usage would be better, especially as rumblings are being heard about the need to increase database usage, cut database budgets, or both. By looking at current usage statistics, rather than projected usage, we could see more easily whether we could justify the expense of a particular database.
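To make the argument concrete, here is a back-of-the-envelope cost-per-use comparison of the kind usage statistics would let us run. The subscription names, annual prices, and search counts below are invented for illustration only, not real vendor figures.

```python
# Hypothetical cost-per-use comparison: all figures are invented
# for illustration, not actual vendor prices or usage counts.
subscriptions = {
    # name: (annual_cost_usd, annual_searches)
    "GeneralReference": (12000, 48000),
    "NicheHistory": (8000, 400),
}

for name, (cost, searches) in subscriptions.items():
    cost_per_use = cost / searches
    print(f"{name}: ${cost_per_use:.2f} per search")
```

Even a toy calculation like this makes the case: a quarter per search is defensible at budget time; twenty dollars per search is a conversation with the vendor.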
If we insisted on making usage statistics the focus, so that pricing reflected actual demand rather than what we hoped to achieve, we would be doing due diligence—and reducing upfront costs. Certainly, vendors would then have to take a more active role in promoting their products to the public. Database publishers would have to recognize that while there are costs associated with access and updating, they are saving on printing costs and cannot indulge in price gouging. (Are we subsidizing the costs of print with our online subscriptions?) And both vendors and publishers would have to recognize that we are now beyond the need to price by expectation—the results are already in.
Supply and demand
How closely do we follow web page statistics, much less individual database stats? Probably not as much as we should. In our defense, maybe the technology for measuring these statistics was not there before, but why didn't we ask for it to be created? And why aren't we demanding that it be used now? Of course, vendors might be reluctant to volunteer this level of support; it could mean lower fees for their products. But, as the market for these databases, we would save.
At the moment, there is no meaningful relationship between database supply and demand in the library world. We have created the demand for products that we helped produce. We are effectively testing the products, but our vendors often capture information from our tax-supported programs or projects, use it to create new products, and then turn around and resell these products back to us. No question, there's "learning by doing" here for publisher, vendor, and library, but we aren't being smart about our contribution to the process and the need to fight for what serves us best.
Subscription-based vs. general
Maybe we are trying too hard. Often a patron comes into the library wanting a simple answer for a simple question, and we bombard them with a range of resources. Given a choice between vertical, specialized databases that are subscription- or fee-based and horizontal, general databases or search engines, we opt for the former under the assumption that they alone are authoritative and can be trusted. But as more information becomes available for free, this argument holds less sway.
Right now, librarians are essentially marketing often unknown products to the public, but shouldn't our mission be simply to provide these products, not create a need we have no hope of filling? And shouldn't the cost of these products be based on actual demand? In classic McLuhanese, the quality of the massage/message can only be determined by the receiver. What kind of deals have your sales reps been offering you recently to renew? Do you think that this has been out of generosity and good will? Or perhaps we are seeing a crash of the authoritative database market comparable to the current fiscal crisis, giving us an opportunity to reinvent a world where information just wants to be free.
--------------------------------------------------------------------------------
Friday, September 11, 2009
Weed them and weep...
Today, while undertaking the gargantuan task of getting the shit off the shelves, many thoughts clamored for attention. One of the most electrifying was considering that I am taking an active part in defining "the new world order" (for lack of a better turn of phrase at the moment).
Whereas only a brief time ago books were kept on library shelves for their content, we are instead entering the arena as a browsing venue. I was pulling things off the shelves and throwing them on the floor. (We were closed, so I was alone, listening to the trees falling in the forest.) Books that had been securely assured of a space, in some instances for 20 or 30 years, were now tossed into a heap.
The name of the game is changing beyond limits of my imagination. In the simplest terms, I am getting rid of books that are tired, worn and/or dated in order that our patrons can see the forest.
Our circulation has been sickeningly low. For six years I've poured everything that I am into turning that around. Even with usage doubling and tripling, I'm still embarrassed at the numbers. As I make my way through the 160,000 volumes in our library, I now envision that usage tripling again, and even quadrupling, as our collection becomes smaller and more inviting.
One glimmering thought, and then I'll see what else surfaces from the day's memory: we are feeding more on the image than the word in the 21st-century library. This is a complete turnaround for civilization. With the internet, computers, and the tendency to identify according to an iconic rather than an alphabetic vocabulary, we are participating in "herstory" (read The Alphabet Versus the Goddess by Leonard Shlain).
Viva la revolution!
Monday, August 17, 2009
Library Journal Article on "Self Service Library"
Congratulations, Susan Kantor-Horning, on an article well done. I know this article is a little overdue (pun intended), but that's on par with the whole project. In Yuba County at our Wheatland location, the GoLibrary has yet to offer uninterrupted service for more than three weeks running. The good news is that the problems these days are minor, usually a book/box that is stuck. The bad news is that it means a 30-minute trip for someone to go out to the machine and back in order to un-stick it.
For those interested, I wanted to add a vendor to the list mentioned in your article. mkSorting has designed and created a state-of-the-art book dispenser, and it would seem mkSorting intends to beta test the machine before bringing it to market. We have heard from the vendor that they expect their machine to be ready for release before the end of the year.
Monday, August 10, 2009
ARRA's auras
Figuring out what's up with the fed funds potentially out there for libraries is like pppp'ing in the wind. Funding will go mostly to commercial applications, and we will be tasked with providing the markets, then training the markets, then providing free access to the well-trained markets. We must play the knowledge management card in order to take a stand as the great equalizers in the middle of the great divide.
Monday, February 9, 2009
Blocked Again
http://www.gnu.org/manual/manual.html
I was going to post this on twitter but our IT department blocks it because of "adult language."
I was blocked earlier today because of items that are "proxy" related; specifically, this seems to include anything that has a tiny URL. I have a list of blocked items that I'm continually adding to, in hopes that eventually it will serve as backup support for libraries' increasing need for "social networking" tools and availability.
The above link, which I started with here, was fortunately not blocked. It may be of interest to anyone making lists of free software and free software documentation (manuals).
Check it out!
Wednesday, February 4, 2009
Widgets & Gadgets & Blogs, Oh MY!
The ALA workshop reminded me of how little I know and how far there is to go to be Web 2.0 (L2) savvy. It seems as if all I do these days is create new free accounts with no time to use them, apply what I learned, or often even remember the password to the latest addition to what is cascading into a torrent of "It's all about ME" resources.
Fun as it all is, has been, and will be, the only thing that keeps me taking the learning curve at 90 mph is knowing that somewhere along the way I'll be able to make it more manageable for someone else. The choices are multiplying like bunnies. (Have you seen Bob Stupel et al.'s Everything Web 2.0? I found it on Sacred Cow Dung as "the List," but it's been distributed all over the web. It's so amazing, I just linked to it twice in case you weren't tempted by the first one.) I believe this list has been growing since 2006.
Forget about the 43 things; the list is 43+ printed pages of things for us to learn and play with.
Monday, January 12, 2009
GoLibrary Slowly Gaining Speed
Finally it looks as if we have a product that is working. For months we have been running back and forth to the machine, every day or every other day or sometimes more than once a day, to troubleshoot technical problems. Our patrons in Wheatland have more or less given up on the thing and we are now starting from scratch in terms of generating public interest. But hey, that's why they call it beta testing.
To outline the project briefly, Yuba County Library was invited to participate as the rural beta test in an LSTA project put forward by Contra Costa County Library. The California State Library was aware of Yuba's lack of remote area access for patrons and knew that we were looking at technological solutions in order to address our growing needs for services. In fact, given our ongoing budget constraints, our strategic plan continues to place a heavy emphasis on technology due to the need to keep overhead costs to a minimum. The cybrary concept continues to be the template we use when considering appropriate use for our local impact fees.
With over 600 square miles of service area and a population of just over 70,000, Yuba County Library's one facility in Marysville along with a 32 foot bookmobile are grossly inadequate to meet the needs of our residents. So, when we were asked to be involved in testing a book dispenser that was purported to require a minimum amount of staff time to maintain and very little overhead costs, we enthusiastically agreed.
According to the vendor of the book dispenser, Distec from Sweden, all we needed was a location, a dedicated high-speed internet connection, a SIP2-integrated ILS, and RFID tags on the books to be circulated from the machine. Sounds simple, right? From my recent experience, I might concede that it would sound simple to those who are already knowledgeable about SIP2 and RFID technology, or to those who did not have a clue about either; we fit firmly into the latter category. Eventually, we were to cobble together a crash course for ourselves with CCCL's gracious tech support.
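For readers as new to SIP2 as we were: it is the 3M Standard Interchange Protocol, the plain-text message format the dispenser and the ILS use to talk to each other. As a rough sketch of what one of those messages looks like on the wire, here is a minimal Checkout (message 11) request built in Python, based on the publicly documented 3M SIP2 layout. The patron and item barcodes, institution code, and timestamp are invented, and a real deployment would also negotiate error detection (sequence number and checksum fields), which this sketch omits.

```python
from datetime import datetime

def sip2_checkout(patron_id: str, item_id: str, institution: str = "YUBA") -> str:
    """Build a minimal SIP2 Checkout (message 11) request string.

    Field codes follow the published 3M SIP2 layout: AO = institution id,
    AA = patron id, AB = item id. All values here are illustrative only.
    """
    # SIP2 transaction dates are 18 characters: YYYYMMDD, 4 blanks, HHMMSS.
    now = datetime(2009, 1, 12, 9, 30, 0).strftime("%Y%m%d    %H%M%S")
    # "11" = checkout; "Y" = renewals allowed, "N" = no-block,
    # then the transaction date and the "no block due date".
    fixed = f"11YN{now}{now}"
    variable = f"AO{institution}|AA{patron_id}|AB{item_id}|"
    return fixed + variable

msg = sip2_checkout("21883000123456", "31883000987654")
print(msg)
```

Seeing that the whole exchange is just pipe-delimited text helped demystify what the "SIP2 agreement" between vendors actually had to cover.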
Because of our determination and commitment to the project, we folded in a system migration and upgrade to our already ambitious planning. Actually, this was an essential piece of the puzzle, as our library had been piggybacking on our local community college's outdated ILS since automating in 2000. In order to minimize the costs, we stayed with the college's vendor, SIRSI, and were one of the first libraries in California to take advantage of their new SaaS, being hosted on a SIRSI server and running the enhanced version of SIRSI.net. The enhanced version was essential as without it we would have had neither the visual reinforcement of book covers, nor the added summaries/blurbs for patrons making their selections from the machine's simple touch screen interface.
In our innocence, which is always a good excuse for not knowing, we didn't realize how long it would take to complete the requirements for the SIP2 agreement. Our SIP2 was straightforward and was added onto our migration contract, but because Distec did not yet have a third party SIP2 agreement with SIRSI, or any vendors in the country for that matter, we were placed in the role of facilitating theirs as well as our own. Distec was quite dependent on us for contact information as their experience with U.S. ILS vendors was in its infancy. How much help we were able to offer them was limited, unfortunately, as we were in our own infancy in relation to having our own, independent ILS. This dual novice status was later to cost us, as we were unable to come up with alternatives to a customized report from SIRSI to allow books to be downloaded into the Bokomaten. Our County IT department suspected that we could probably create the report ourselves, but there just wasn't time.
Here I should briefly mention that, going into the project, we had pitched the grant as requiring minimal County resources. What we were faced with in relation to getting the machine to work was quite a different matter. So, anything requiring time from our County IT department was monitored very closely in order to be kept to a minimum. In fact, most of the players agree, the reason CCCL was able to have a functional, hassle-free machine months before us was due to their access to in-house tech support.
One non-technical challenge that came as a bit of a shock to both libraries was the box size limitations. The Bokomaten, having been designed for a European market, was designed for boxes and slots--into which the boxes would fit--according to standard European publishing sizes. For anyone familiar with U.S. publishing, you will know that there is quite a wide range of sizes for books in this country. Doing collection development according to size of book was certainly not something our professors would have recommended or taught us in library school.
After eliminating according to size and thickness, there was an added delimiter regarding inclusion of cover pictures associated with the record. For the remote access patron, title and author would not be enough to compel browsing by GoLibrary users. We determined that having the book cover, or graphic plus a blurb, would be critical for successful presentation of the books available in the machine. Small independent press titles or titles with older publication dates are examples of the kinds of materials that might not have enhanced content available. We learned the hard way to check for the enhanced content before selecting a book to add. We were forced to de-select many items, initially prepared for the machine, because we did not want anything in the book dispenser to be limited to just text. In some instances we resorted to showing a picture of an edition of the book other than the one that was actually in the machine for this reason.
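That screening step amounts to a simple filter over the candidate list: keep only titles that have both a cover image and a blurb, since the touch screen shows little else. The record layout and field names (`cover_url`, `blurb`) below are hypothetical, not our actual catalog schema, and the titles are made up.

```python
# Hypothetical screen for "enhanced content": keep only candidate titles
# that have both a cover image and a summary blurb. Field names and
# records are invented for illustration.
candidates = [
    {"title": "Popular Novel", "cover_url": "http://example.org/c1.jpg", "blurb": "A hit."},
    {"title": "Small Press Title", "cover_url": None, "blurb": None},
]

def has_enhanced_content(record):
    # A record qualifies only if both the cover and the blurb are present.
    return bool(record.get("cover_url")) and bool(record.get("blurb"))

selected = [r["title"] for r in candidates if has_enhanced_content(r)]
print(selected)
```

Had we run a check like this up front, we would have saved ourselves the hard-way lesson of preparing items for the machine and then de-selecting them.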
Communication in a dot-com world was definitely the key to the success of this project. Communication between techies and non-techies has become a pet area of research for me as a result of this project. I found it fascinating that, in many ways, it was no easier communicating with our IT department down the street than with Distec's IT staff on the other side of the world. Though our own IT staff were charmed with the whole idea of an automated book dispenser, and were ready and willing to put it on the fast track for us, time zones and other demands on their time made it difficult to get the two groups of tech specialists to communicate directly. Communication problems on both ends were likely the result of a bottleneck caused by my lack of technical expertise, since I had to play intermediary, sharing information between techies with only a very simplistic understanding of the concepts behind the data I was passing between the two groups.
Since completing the grant project, I've learned that this is a common problem for tech and nontech staff, and techs admit that explaining the whys and wherefores, or teaching someone to use the product they've created, is the least-liked aspect of their jobs. I am learning more every day about the growing need for specially trained individuals to function as go-betweens across the two worlds. Government is reported to lose millions of dollars annually due to this "communication disconnect."
Fortunately, we had good mediators in CALIFA, a membership-based California library service consortium, which was assigned by the State Library to negotiate the contract and deal with shipping the machine from Europe, both huge undertakings in their own right. And, as I've mentioned already, Contra Costa Library's knowledgeable IT staff were very gracious in taking time to explain many of the fundamentals to their less sophisticated grant partner.
Library tech staff, and librarians with strong tech backgrounds, will be the knowledge management experts of the 21st century, and with this expertise will come a universal mechanism for crossing the tech/nontech language barrier. In the meantime, please be sure to add learning tech-speak to your list of things to do in your spare time, if it's not already what you're doing with the major portion of your time.
The project was all engulfing for over a year and timing for everything had to be at warp speed. I've heard that this is known in techie land as "a death march project." If you deal with tech projects often then you're probably familiar with the insane expectations and deadlines that define your mission goals and objectives. I've certainly gained a great deal of respect for those working on the tech side of things and now have an inkling of the pressure they endure as a part of their day-to-day responsibilities. We, as librarians, are in a unique position to appreciate the multiple levels of tech-speak literacy and fulfill our role as one of the more likely mediums between the literate and "illiterate," or "still learning" masses.