Having to do this in a kernel build is simply annoying
So there are some macros, `__DATE__` and `__TIME__`, that the gcc compiler knows about. And some people inject these into their kernel module builds, because, well, why not. The stated issue is that they can make “reproducible builds” harder. Well, no, they really don’t in my case; that’s a side issue. The real annoyance is that modern kernel builds use `-Wall -Werror`, which converts warnings like `macro "__TIME__" might prevent reproducible builds [-Werror=date-time]` into real honest-to-goodness errors.
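If you just want the build to complete, one approach (a sketch, not a recommendation tied to any particular module) is to demote that one diagnostic back to a warning, or drop it entirely, for the module build:

```
# Sketch: demote only the date-time diagnostic back to a warning for an
# out-of-tree module build, so -Werror stops killing it. KCFLAGS gets
# appended to the kernel's CFLAGS; the M= path here is just a placeholder.
make -C /lib/modules/$(uname -r)/build M=$PWD KCFLAGS="-Wno-error=date-time" modules

# Or silence the warning entirely for that module by adding this line to
# its Kbuild/Makefile:
#   ccflags-y += -Wno-date-time
```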
Talk from #Kxcon2016 on #HPC #Storage for #BigData analytics is up
See here; the talk was largely about how to architect high performance analytics platforms, with a specific shout-out to our Forte NVMe flash unit, which is currently available in volume starting at $1 USD/GB. Some of the more interesting results from our testing:
* 24GB/s bandwidth, largely insensitive to block size.
* 5+ million IOPs random IO (5+ MIOPs), sensitive to block size.
* 4k random reads (100%) were well north of 5M IOPs.
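As a rough consistency check (my arithmetic here, not a number from the talk): 5 million 4KiB IOPs works out to roughly 20GB/s, which sits sensibly just under the ~24GB/s streaming bandwidth figure.

```
# 5M IOPs x 4096 bytes per IO ~= 20 GB/s
echo "5000000 * 4096 / 10^9" | bc
# 20
```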
Going to #KXcon2016 this weekend to talk #NVMe #HPC #Storage for #kdb #iot and #BigData
This should be fun! This is being organized and run by my friend Lara of Xand Marketing. Excellent talks scheduled, fun bits (Raspberry Pi based kdb+!!!). Some similarities with the talk I gave this morning, but more of a focus on specific analytics issues relevant for people with massive time series data sets and a need to analyze them. Looking forward to getting out to Montauk … haven’t been there since I did my undergrad at Stony Brook.
Gave a talk today at #BeeGFS User Meeting 2016 in Germany on #NVMe #HPC #Storage
… through the magic of Google Hangouts. I think they will be posting the talk soon, but you are welcome to view the PDF here.
Success with rambooted Lustre v2.8.53 for #HPC #storage
```
[root@usn-ramboot ~]# uname -r
3.10.0-327.13.1.el7_lustre.x86_64
[root@usn-ramboot ~]# df -h /
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           8.0G  4.3G  3.8G  53% /
[root@usn-ramboot ~]#
[root@usn-ramboot ~]# rpm -qa | grep lustre
kernel-3.10.0-327.13.1.el7_lustre.x86_64
kernel-tools-3.10.0-327.13.1.el7_lustre.x86_64
kernel-devel-3.10.0-327.13.1.el7_lustre.x86_64
lustre-2.8.53_1_g34dada1-3.10.0_327.13.1.el7_lustre.x86_64.x86_64
kernel-tools-libs-devel-3.10.0-327.13.1.el7_lustre.x86_64
lustre-osd-ldiskfs-mount-2.8.53_1_g34dada1-3.10.0_327.13.1.el7_lustre.x86_64.x86_64
kernel-headers-3.10.0-327.13.1.el7_lustre.x86_64
lustre-osd-ldiskfs-2.8.53_1_g34dada1-3.10.0_327.13.1.el7_lustre.x86_64.x86_64
kernel-tools-libs-3.10.0-327.13.1.el7_lustre.x86_64
lustre-modules-2.8.53_1_g34dada1-3.10.0_327.13.1.el7_lustre.x86_64.x86_64
```

This means that we can run Lustre 2.8.x atop Unison. Still pre-alpha, as I have to get an updated kernel into this, as well as update all the drivers.
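For the curious, a quick sanity check one might run on the rambooted node (a sketch; the module names are just what the lustre-modules package ships) to confirm the Lustre stack actually loads against this kernel:

```
# load the Lustre client stack and confirm the modules came up
modprobe lustre
lsmod | egrep 'lustre|lnet|ldiskfs'
```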
It's not perfect, but we have CentOS/RHEL 7.2 and Lustre integrated into SIOS now
Lustre is infamous for its kernel specificity, and it is, sadly, quite problematic to get running on a modern kernel (3.18+). This has implications for quite a large number of things, including whole subsystems that are only partially back-ported to earlier kernels … which quite often misses very critical bits for stability/performance. I am not a fan of back-porting for features; I am a fan of updating kernels for features. But that is another issue that I’ve talked about in the past.
reason #31659275 not to use java
As seen on Hacker News, linking to an Ars Technica article, this little tidbit. This is the money quote:
I know it seems obvious now to Google and to others, but mebbe … mebbe … they should rethink building a platform in a non-open language? I’ve talked about OSS-type systems in terms of business risk for well more than a decade. OSS software intrinsically changes the risk model, so that you do not have a built-in dependency upon another stack that could go away at any moment.
isn't this the definition of a Ponzi scheme?
From this article at the WSJ detailing the deflation of the tech bubble now in progress.
A Ponzi scheme is like this: earlier investors get paid out of the money put in by later investors, rather than out of any real underlying return, and the whole thing holds together only as long as new money keeps flowing in.
Every now and then you get an eye opener
This one came while we were conditioning a Forte NVMe unit and I was running our OS install scripts. Running dstat in a window to watch the overall system …
```
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  2   5  94   0   0   0|   0    22G| 218B  484B|   0     0 | 363k  368k
  1   4  94   0   0   0|   0    22G| 486B  632B|   0     0 | 362k  367k
  1   4  94   0   0   0|   0    22G| 628B  698B|   0     0 | 363k  368k
  2   5  92   1   0   0| 536k  110G| 802B 2024B|   0     0 | 421k  375k
  1   4  93   2   0   0|   0    22G| 360B  876B|   0     0 | 447k  377k
```

Wait … is that 110GB/s (second line from the bottom, in the writ column)?
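Worth remembering that dstat's -dsk/total- column is a sum across all block devices, so a burst like that is the aggregate, not a single device. If you wanted to see how it breaks down per device, something like this running alongside dstat would do it (iostat from sysstat; one-second samples, extended per-device stats in MB):

```
iostat -xm 1
```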
new SIOS feature: compressed ram image for OS
Most people use squashfs, which creates a read-only (immutable) boot environment. Nothing wrong with this, but it forces you to have an overlay file system if you want to write. Which complicates things … not to mention when you overwrite too much and run out of available inodes on the overlayfs. Then your file system becomes “invalid” and Bad-Things-Happen(™). At the day job, we try to run as many of our systems out of ram disks as we can.
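One way to build a compressed, fully writable RAM image along these lines (a sketch under my own assumptions, not necessarily how SIOS does it; paths are hypothetical) is a plain compressed cpio archive that the kernel unpacks directly into a writable rootfs/tmpfs at boot:

```
# Pack a prepared OS tree into a kernel-consumable compressed cpio archive.
# The kernel's in-tree xz decompressor only accepts CRC32 (or no) checksums.
cd /build/os-image-root
find . -print0 | cpio --null -o -H newc | xz -9 --check=crc32 > /build/os-image.cpio.xz
```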