Well, although she states it's a USLOC dongle, it actually has a real-world equivalent (including in the scale of the data), and that'd be downloading the Internet Archive's index.
In any case, there are differing time scales of information here, but as of December 2014, the Internet Archive had 50 PB of storage, with 9.6 PB in the Wayback Machine and 9.8 PB in the books/music/video collection. As of July 1, 2015, the Wayback Machine itself was 23 PB of storage (for a total of 32.8 PB), growing by 50-60 TB per week.
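For a sense of scale, that weekly growth figure works out to a few petabytes a year. A quick extrapolation from the July 2015 numbers above (nothing here beyond what the post already states):

```python
# Extrapolating Wayback Machine growth from the July 2015 figures:
# 23 PB base, growing 50-60 TB per week.
base_pb = 23.0
weekly_tb_range = (50, 60)

for tb_per_week in weekly_tb_range:
    yearly_pb = tb_per_week * 52 / 1000  # 52 weeks, 1000 TB per PB (decimal)
    print(f"{tb_per_week} TB/week -> ~{yearly_pb:.1f} PB/year")
```

So the Wayback Machine alone adds roughly 2.6-3.1 PB per year at that rate.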
However, we don't care about the database, we care about the index. As of January 2014, the Wayback Machine had 378,825,513,000 CDX records. Let's say 512 bytes per record - that's a high estimate, but we'll go with it. So, back then, just the index was about 194 TB. It's likely tripled or quadrupled in size since then.
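The 194 TB figure is just the record count times the per-record estimate, both taken from above:

```python
# Back-of-envelope: size of the Wayback Machine's CDX index,
# using the January 2014 record count and the post's (deliberately
# high) 512 bytes/record estimate.
records = 378_825_513_000
bytes_per_record = 512

index_bytes = records * bytes_per_record
index_tb = index_bytes / 1e12  # decimal terabytes

print(f"~{index_tb:.0f} TB")  # ~194 TB
```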
(It's worth noting that the USLOC's digital archive is "only" 5 PB:
http://blogs.loc.gov/loc/2015/08/experts-corner-collection-development-officer-joseph-puccio/)
I'll also assume the archive is at modern levels, despite the setting possibly being 2006 - the presence of AIs would only increase the size of the database.
So, assuming that Pintsize's OS doesn't have proper disk space management, any drive you could possibly put in Pintsize would be wiped. However, a very large server could feasibly handle that index. If we're going with the setting being 2006, you were looking at 750 GB HDDs for nearline storage. Figure 3U of rack space per 16 3.5" drives and 42U per rack, and that's 168 TB per rack. So, four racks full of SAN and you've got enough storage for the entire index and all student data. It'd certainly have to be a school that's serious about keeping a local index of the archive, but...
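The rack math above can be sketched out like so (drive size, shelf density, and a tripled index are the assumptions from the post):

```python
import math

# SAN sizing under the post's 2006-era assumptions:
# 750 GB nearline HDDs, 16 x 3.5" drives per 3U shelf, 42U racks.
drive_tb = 0.75
drives_per_shelf = 16
shelf_u = 3
rack_u = 42

shelves_per_rack = rack_u // shelf_u                           # 14 shelves
tb_per_rack = shelves_per_rack * drives_per_shelf * drive_tb   # 168 TB

# Index size: ~194 TB as of January 2014, assuming it has tripled since.
index_tb = 194 * 3
racks_needed = math.ceil(index_tb / tb_per_rack)

print(f"{tb_per_rack:.0f} TB per rack, {racks_needed} racks for ~{index_tb} TB")
```

That lands on 168 TB per rack and four racks for a tripled index, with headroom left over in the fourth rack for the student data.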
Of course, this is also something that you wouldn't just hand to an intern. Most likely, not even Tai, nor the head librarian, would have it as a physical object like that; it'd be a shell script run by a cron job on their server.