As you may have noticed, Eve-Kill went down last weekend. After a few huddles they managed to get back on track, but some funding is still needed, and Karbowiak mentioned the need to buy new hardware to keep things going.
While I have no stake in Eve-Kill, I do have a ton of hatred for BattleClinic – their killboard is utter shit, loaded with click-pits and a lot of stuff the community doesn’t need. More importantly, they do not share their API with the rest of the community. That’s why you see a niftier integration between EVE-Kill and sites like DOTLAN, doctored BRs at dog-net’s site, and other applications like EveWho. Unlike BattleClinic, I do praise Eve-Kill’s focus on keeping the information they gather open and available to the community, and that is pretty much the reason I am relaying this pledge to keep it going.
Eve-Kill Pledge: Sharing is Caring.
EVE-KILL is arguably the largest killboard in EVE, and we daily serve several thousand killmail-hungry pilots who want to see what their friends and enemies killed while they were away.
We are currently hosting 4,663 active killboards (Pilot: 1,599, Corp: 2,579, Alliance: 485) and 14,515,048 killmails.
Lately, however, this community of murderers has been disturbed, and downright plagued, by terrible luck and some truly shitty hardware.
The reasons behind the downtimes we have had as of late have been multiple. It started out with a hard drive dying and a RAID controller failing to use the mirror drive, which it should have.
Then it became a tale of the world’s slowest hard drive replacement, and finally a success story, after Beansman spent hours figuring out how to turn the working mirror into a single drive that would mount.
All the while this was happening, we were busy smacking ourselves in the head over the fact that our backup scripts had failed about two weeks earlier.
Luckily it all came back up without any data loss, but the downtime had already cost us a weekend.
However, the bad luck didn’t end there – no, not even close. The controller then decided to start a new war, something it likes to call “the timeout war”. Simply put, at a random time it suddenly stops responding to requests from the operating system; a few moments later the operating system dumps the mounted array, and bam, all virtual machines grind to a halt.
At that point we have to rescan the system to get it to remount the drives, so the virtual machines return to a functional state. The problem is that this requires direct intervention and can’t easily be scripted away.
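To give a feel for what that manual intervention looks like, here is a hypothetical dry-run sketch of the recovery step on a typical Linux box: writing “- - -” to a SCSI host’s `scan` file asks the kernel to rescan that host, which can bring a dropped array back, after which the filesystem still has to be remounted by hand. The paths, device name, and mount point below are assumptions for illustration, not Eve-Kill’s actual layout, and the post’s point stands – the messy part is knowing when and where to run this, which is why it resists automation.

```python
# Dry-run sketch of the manual "rescan and remount" recovery described above.
# All paths/devices are illustrative assumptions, not the real server layout.
import glob

def recovery_actions(dry_run=True):
    """List the shell-equivalent actions the manual recovery would perform."""
    actions = []
    # 1. Ask the kernel to rescan every SCSI host ("- - -" = all
    #    channels/targets/LUNs), hopefully rediscovering the dropped array.
    for scan_file in sorted(glob.glob("/sys/class/scsi_host/host*/scan")):
        actions.append(f'echo "- - -" > {scan_file}')
        if not dry_run:
            with open(scan_file, "w") as f:
                f.write("- - -\n")
    # 2. Once the block device reappears, remount it so the VMs can resume.
    #    (Example device and mount point - the real ones depend on the setup.)
    actions.append("mount /dev/sdb1 /var/lib/vz  # example device/mount point")
    return actions

if __name__ == "__main__":
    for cmd in recovery_actions():
        print(cmd)
```

Run as root without `dry_run`, step 1 would actually trigger the rescans; the sketch only prints the equivalent commands so it is safe to run anywhere.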
So this leaves us with three options.
First option is living with it, and staying where we are for another 2-3 months until we scrape together enough money to buy our own hardware.
Second option is switching to a different host; we already have one lined up that has made us a great offer on a server, but we would be bound to them for the next 12 months, which isn’t ideal.
Third option is to do a donation drive, and hope we can get the money that way.
We obviously picked the third option.
That said, we are not gonna put the burden of funding this entirely on you, our users – we are throwing in all the money we have saved, plus money from our own pockets. That nets us just shy of 3,000 USD; sadly, the server cost is upwards of 8,000 USD, depending on the hardware we pick. For now, we have set the donation goal at 4,000 USD – should we blow past that, then we’re golden in terms of all the extras we are considering.
If you wish to contribute, you can do so using the Pledgie linked here.