Phase 10 is complete.


After two months of work, I now consider Phase 10 of the Monster Network complete.
Accomplishments:

– rebuilt both core i3 servers into Xeon E5s to allow more virtual servers to be hosted, with both more horsepower and more maximum memory (256GB+)
– upgraded the Infiniband switch from SDR to DDR (8Gbps to 16Gbps effective) due to issues with the old switch and wanting more speed (and I got a great deal)
– removed the final AMD server due to power usage.  Replaced it with a Xeon E3.
– ended up adding an LSI 9260 RAID controller to increase the IOPS on the SAN; happy so far with the performance increase.
– all servers now have IPMI, allowing easy remote management and full control by SCVMM.
– rebuilt the SAN storage, switching from a Linux based solution (ESOS) to a Windows iSCSI target.  Along with that change I rebuilt the storage and added automatic storage tiering and a 20GB write-back cache using Windows Storage Spaces (rough sketch after this list).
– all power supplies are now 80 Plus Bronze rated or higher.
– with all the upgrades and rebuilds I ended up lowering overall power usage by 20-40 watts, not too bad!
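For anyone curious, the tiering and write-back cache piece boils down to a handful of Storage Spaces cmdlets.  This is a minimal sketch of the idea rather than my exact commands; the pool name, tier sizes and resiliency layout here are placeholders:

```powershell
# Pool every disk that's eligible (the SSDs and HDDs that will back the SAN)
New-StoragePool -FriendlyName "SANPool" `
    -StorageSubSystemFriendlyName "*Storage*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Define the two tiers by media type
$ssdTier = New-StorageTier -StoragePoolFriendlyName "SANPool" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "SANPool" -FriendlyName "HDDTier" -MediaType HDD

# Mirrored, auto-tiered virtual disk with a 20GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName "SANPool" -FriendlyName "TieredVD" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 1TB `
    -ResiliencySettingName Mirror -WriteCacheSize 20GB
```

The default write-back cache is much smaller than that, so -WriteCacheSize is where the 20GB figure above comes in.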

Failures:
– all was not perfect.  The initial plan was to virtualize the iSCSI target server and then cluster it, but the performance hit was sizeable.  I then found out that the only solution to that problem was SR-IOV, which isn’t fully supported by the chipset on the motherboard.  Bummer.  So I’m back to a single point of failure for now (see the quick check below).
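If you want to know whether your own hardware will hit the same wall before you rebuild anything, Hyper-V will tell you.  Something along these lines (it only reads state, so it’s safe to run anywhere):

```powershell
# Does the host (chipset/BIOS) support SR-IOV at all, and if not, why not?
Get-VMHost | Select-Object IovSupport, IovSupportReasons

# Do the physical NICs support SR-IOV?
Get-NetAdapterSriov

# Is IOV actually enabled on any existing virtual switches?
Get-VMSwitch | Select-Object Name, IovEnabled
```

IovSupportReasons is the useful one, since it spells out why SR-IOV isn’t available rather than just saying no.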

Phase eleven planning:

I’ve already started planning for the next stage of design, but due to both an increased workload and the oncoming bicycling season, it’s going to be smaller in scope and will take longer.

– prepare for the removal of sharonapple.  I prefer having the extra host, and I’ve always wanted to keep it, but sharonapple is an oddity.  The motherboard uses different memory than the other Xeon E5 boards I have now, and it’s limited to 32GB of RAM.  Also, the unplanned rebuild of the SAN host into a Xeon made it possible to use that as a Hyper-V host as well.  While far from best practice, in a lab I don’t think it matters.  This would also lower power usage by about 80 watts.

– addition of a standalone pfsense router box.  I’ve had some issues with virtualizing pfsense on Hyper-V, so I may get a cheap Intel Atom board and load it on there.

How to Build a home IT lab (beginner)


So, you’ve gotten your first real job in IT, you’ve gotten your certifications, and you’ve passed the interview process and are now managing some form of network, congratulations!  By now you’ve realized all your studying and reading and certifications are completely useless and you are seeing stuff break in ways you had no idea were possible.  You’ve come to the realization that you really need something to play with, either at home or at work, but you don’t know for sure what you need.  Let me share some of my experience and help you avoid making the mistakes I did.  I will cover this topic in three parts; this first part is for beginner labs on a limited budget.

First, what constitutes a “beginner” lab?  We’re talking about a single computer using some form of virtualization; this will get your feet wet with a minimal investment of time and money.  It will also tell you whether or not you need to progress to something more flexible.

Processor Type:

The first and most important decision you need to make is AMD or Intel.   Why is this so important?   If you want to add more servers to your lab, all the processors should be approximately the same type (FX, Xeon, Core i3/i5/i7, etc.), because you cannot live migrate a virtual machine between different CPU vendors.  You have to shut the VM down, migrate it over, and then turn it back on.  While this is not a major issue, it can be annoying if you decide to build a failover cluster and realize it’s worthless because you have to shut the VMs down before moving them between servers.  The processor families should also match for the same reason: while you can live migrate between a Core i3 and a Xeon, in order to do so you may have to disable advanced CPU features, which will affect performance.   Both AMD and Intel processors will work fine for hosting VMs; AMD motherboards and processors are cheaper, but Intel processors are more power efficient and perform better with fewer cores.
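If you do end up with mismatched processors from the same vendor and still want to live migrate, Hyper-V has a per-VM compatibility switch that masks the newer CPU features, at the performance cost mentioned above.  A rough example (the VM name is just a placeholder):

```powershell
# Must be done while the VM is off; masks newer instruction sets so the VM
# can live migrate between different processor generations (same vendor only).
Stop-VM -Name "LabVM01"
Set-VMProcessor -VMName "LabVM01" -CompatibilityForMigrationEnabled $true
Start-VM -Name "LabVM01"
```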

Virtualization platform:

The next decision you need to make is which platform to use.  Luckily, every platform provides a limited free server version that you can download and try first.  While VMware is the most popular choice, it can be a bit daunting if you have no Linux experience, and finding drivers can be a chore.  My platform of choice is Hyper-V, as you get Windows drivers and the free version is not as limited as ESX.  There are others you can try, like Xen, but ESX and Hyper-V are the most popular.
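To give you an idea of how little is involved, turning on Hyper-V is a single command once Windows is installed; which one depends on whether you’re on a server or desktop SKU:

```powershell
# On Windows Server (the free Hyper-V Server SKU already has the role built in)
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

# On a desktop SKU (Windows 8.1 Pro or better)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```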

Hardware (build your own)

  • Motherboard:  Make sure the motherboard supports your processor of choice.   Pay close attention to the number of memory slots; in order to max out the supported memory, it should have at least 4.  Also make sure it has enough PCI-Express slots to support everything you want to use.  If you can afford it, go for a server motherboard from SuperMicro or ASRock.  Key words to look for are (see the quick check after this list):
    • IPMI (remote management)
    • VT-d (virtualization technology)
    • VT-x (virtualization technology)
    • Multiple NICs (nice to have but not a deal breaker)
    • SR-IOV (a faster way for VMs to access your NICs)
  • Processor:  If you’re using a server board, pick a server CPU, either a Xeon or an Opteron.  You can find Xeon E3 processors for about the same price as a fast i7.  Also check how much memory the CPU will address.  Consumer CPUs like the Core chips or AMD FX chips only support up to 32GB, while a Xeon E5 will support 512GB.  Don’t go crazy trying to purchase the fastest processor you can afford; unless you’re also going to be using this as a workstation to play games on, something with 4-6 cores will be more than enough.
  • Memory:  Make sure the memory you buy is compatible with the motherboard.  Supermicro boards are very picky about the memory they use, while ASRock boards aren’t.  If you’re going with a server chip (Xeon or Opteron), make sure to purchase ECC memory; it will make the server more stable and allows you to add more memory.   Also pay attention to how much memory the board supports.  Most only support 8GB DIMMs or less, while server boards will usually accept up to a 32GB DIMM.
  • Power Supply:  Don’t skimp on this.  Get at least an 80 Plus Bronze power supply, and don’t go crazy with the wattage.  If you’re just using this as a VM host, it doesn’t need a super fancy 1000 watt supply.  Depending on the storage, a 300 to 400 watt power supply is more than enough.  A power supply is most efficient around the middle of its load range, so a 1000 watt supply only drawing 200 watts will run less efficiently than a 400 watt supply carrying the same load, and you’ll have spent more money.  Before you buy, make sure it has all the connections you need.
  • Case:  Any case works.  I prefer cases with hot-swap drive bays, but that’s personal preference.   I went and got a rack and rack mounted all my hardware to keep them together and out of the way, but tower cases would have worked just as well.
  • Storage:   Here is where you need to spend the money.  Virtualization is very storage intensive.   A single drive will only run 3-5 virtual servers before starting to really slow down.  If you want to run more than that, you’ll need at least a single large SSD to hold the VMs.  If you are going to use hard drives, get a nice caching RAID controller; don’t use the RAID built into the motherboard, it’s not worth it.  Purchase either Hitachi hard drives, WD Black or Red, or high-end Seagate drives.  WD Blue or Green drives and Seagate consumer drives will not work reliably in a RAID due to a thing called TLER.
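Before you buy (or to check a machine you already own), Windows can tell you whether the virtualization features listed above are present and turned on in the BIOS.  A quick, read-only check on Windows 8 / Server 2012 or newer:

```powershell
# VT-x/AMD-V, SLAT and VM monitor mode extensions as seen by Windows
Get-CimInstance Win32_Processor |
    Select-Object Name,
                  VirtualizationFirmwareEnabled,
                  SecondLevelAddressTranslationExtensions,
                  VMMonitorModeExtensions

# systeminfo also prints a "Hyper-V Requirements" section at the bottom
systeminfo
```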

Hardware (purchase)

If you don’t want to build one, you can also buy one.   Make sure that the computer you buy will hold as much memory as possible; some only have 2 slots.  Also make sure it has a quality power supply and hardware supported by your virtualization platform.   You can also purchase used off-lease servers at a significant discount.   If you do opt for a used server, make sure to do your research first.  I’ve had older IBM servers pull 350 watts before booting!  Also avoid used HP servers, as HP does not provide support without an active support contract, so if you need a BIOS or driver update, you’re out of luck.  Also realize that parts for a server are going to be more expensive to replace or upgrade.

Now download and install whichever host software you’ve picked and get started!  You will usually also need to install some form of client software on your main machine for management if the platform doesn’t include web management.   Depending on what you’re virtualizing, you’ll probably need licenses.  Microsoft has ended TechNet, but you can still purchase an Action Pack for about $349, which has most of the licensing you would need.   If you’re virtualizing Linux or BSD, then you won’t have to worry.
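On Hyper-V, getting that first VM running from PowerShell looks roughly like this; the switch name, NIC name, paths and sizes are all placeholders:

```powershell
# External switch bound to a physical NIC so the VM can reach your network
New-VMSwitch -Name "LabSwitch" -NetAdapterName "Ethernet"

# New VM with a dynamic VHDX and 2GB of startup memory
New-VM -Name "Lab-DC01" -MemoryStartupBytes 2GB `
       -NewVHDPath "D:\VMs\Lab-DC01.vhdx" -NewVHDSizeBytes 60GB `
       -SwitchName "LabSwitch"

# Attach an install ISO to the default DVD drive and boot it
Set-VMDvdDrive -VMName "Lab-DC01" -Path "D:\ISOs\Server2012R2.iso"
Start-VM -Name "Lab-DC01"
```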

Links:

Cpuboss  (Great for comparing CPU Types)

Superbiiz  (Great site for cheap parts)

newegg.com (another great site for cheap parts)

Monster Network Phase 10 Progress


Not many posts for the last few weeks due to being elbow deep in real life work and upgrading the monster.  But here is what I have accomplished so far:

– Prometheus upgraded to 64GB of RAM

– Daedalus upgraded to a Xeon E5-2620 v2 with 48GB of RAM

– Infiniband Upgraded to DDR speed (20Gbps)

– Removed ESOS, switched to a Hyper-V host with a virtual Windows iSCSI server with 1.2TB of mirrored, auto-tiered storage, along with 8TB of media (rough sketch of the target setup after this list)

– Both Macross and SharonApple now boot Hyper-V server off 32GB flash drives.

– All servers but prometheus now reside on full extension slide rails for maintenance (suh-weet)
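For anyone curious what the Windows iSCSI target side of that looks like, it’s only a few cmdlets once the role is installed.  This is a sketch with made-up paths, sizes and initiator IQNs, not my exact configuration:

```powershell
# Install the iSCSI Target Server role
Install-WindowsFeature -Name FS-iSCSITarget-Server

# A VHDX-backed LUN sitting on the tiered storage
New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\ClusterLUN1.vhdx" -SizeBytes 500GB

# A target that only the Hyper-V hosts' initiators may connect to
New-IscsiServerTarget -TargetName "HyperVHosts" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:host1.example.local",
                  "IQN:iqn.1991-05.com.microsoft:host2.example.local"

# Expose the LUN through that target
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVHosts" `
    -Path "E:\iSCSIVirtualDisks\ClusterLUN1.vhdx"
```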

Assorted Remaining tasks:

– Re-connect all hosts to the new iSCSI system (in progress; see the sketch after this list)

– Cable Management (Next week)

– Replace bad power supply in prometheus (Monday)

– Use network virtualization to allow the VMs to use the Infiniband network to access SAN storage

– Restore 6TB of media onto storage LUN

– Move all VMs to cluster storage and remove unneeded local drives

– Reload Virtual Machine Manager from scratch
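The re-connect work on each host is mostly just pointing the Microsoft initiator at the new target and making the session persistent, roughly like this (the portal address is a placeholder for the target’s IPoIB address):

```powershell
# Make sure the initiator service is running and starts with Windows
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic

# Point the initiator at the new target portal and reconnect at every boot
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.10"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# Bring the new disk online (only do this by hand on a non-clustered host;
# clustered disks get added through Failover Cluster Manager instead)
Get-Disk | Where-Object BusType -eq "iSCSI" | Set-Disk -IsOffline $false
```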

There’s much more, but I gave myself a headache just writing that much down.

Monster Network Phase 10 approaches!


Phase 10 has started as of 2/7/2014. This is expected to take at least 1 week to complete and should be the last major upgrade for the year.
1. Switch from the ESOS (Linux) based SAN target to a clustered virtual Windows Storage Server 2012 R2 solution.
2. Rebuild Valkyrie from a core i3 with 32GB of RAM to a Xeon E5-2620 v2 with 64GB of RAM.
3. Add another 3TB of space to backup.
4. Replace the SDR Infiniband switch with a DDR switch, doubling bandwidth on the SAN fabric to 20Gbps.
5. Add more memory to Prometheus, giving it 48GB.
6. Switch Daedalus and Sharonapple to booting from USB sticks and remove local storage to save power and maintenance.
7. Fully switch over to System Center Virtual Machine Manager 2012 R2 for all VM maintenance tasks.

With this upgrade, I’ll be going from 96GB of memory for virtual machines to 144GB, with plenty of room to grow; with the rebuild, Prometheus can hold 512GB of RAM total and Sharonapple will be able to hold 256GB.  The extra RAM will be used both to give existing servers more memory and to give me room to start installing Microsoft Lync 2013.

Introduction to Infiniband Pt. 2 (Linux and VMware platforms)


In part one I went over some basic Infiniband terms and concepts.   In this part I am going to go over Infiniband basics for VMware and Linux.  I will not go into full detail on either, as I no longer run VMware and only one of my machines runs Linux, but there are things you should know that will save you problems.

Infiniband under Linux:

GOOD NEWS!!  You have picked about the best platform to run Infiniband on.  Linux has very mature Infiniband drivers available for almost every brand and type of Infiniband card out there.  It also has the most expansive library of Infiniband troubleshooting and diagnostic tools available, plus SCST for building targets.    You can either install the software on your favorite distribution or use one that comes with it pre-installed.  I ended up loading Ubuntu on a spare workstation with a spare Infiniband card and used it for troubleshooting while I was setting mine up.  It depends on your distribution, but there is a good tutorial here for Debian and here for Ubuntu.  Both of those articles give very good instructions on basic setup, and until you get more familiar with Infiniband, I would try getting two Linux machines to talk, just to test your knowledge.   ibnetdiag is your friend!  Another advantage is that there is an eIPoIB shim for some virtualization platforms under Linux that allows virtual machines to share your Infiniband fabric.  I’ve never gotten that far, but you see mentions of it in places like here.  If you don’t want to “roll your own”, you can download distributions that are already set up to use Infiniband:

ESOS:  This seems to be the best solution, and it’s the one I use here for the Monster Network.   The software is actively being developed, with new versions almost every week.   It supports automatic storage tiering via Btier, iSCSI, Infiniband, FC, SSD caching, RAID cards, and storage clustering.   The main drawback of ESOS (for me) is that it has no web GUI, so you have to manage the software via SSH or the console with a text-based interface called the TUI; it’s a little intimidating at first, but user friendly.

Openfiler:  I originally planned on using this, but while the software seemed really nice, the free version does not give you a GUI for Infiniband setup (which I wanted) and it does not seem to be as actively developed as ESOS.  But if you want a nice web GUI and don’t mind having to do some command-line work, it seems to work really well.

Infiniband under VMware

You have picked a platform where you have to choose your hardware carefully, but it will work.  Before you purchase any Infiniband HCAs, make SURE that the card is either on the VMware HCL or you can find VMware drivers for it.  I originally tried to use Mellanox Infinihost III cards, but they weren’t supported on VMware 5.1.  I had to upgrade to a ConnectX card, which technically isn’t supported either; I had to use an older driver and downgrade the firmware on the card to get it to work correctly.  I also don’t know if that trick will work with VMware 5.5.  Start here to read up on what I had to go through.  Note that with the ConnectX card you do not get SRP, you have to use IPoIB and iSCSI.   So if you’re using VMware, make sure to purchase only ConnectX-2 and ConnectX-3 cards if you don’t want a headache.

Introduction to Infiniband Pt. 1


As some of you may know, I have a very over-engineered lab, and as part of that I run an Infiniband SAN.   When I went with Infiniband there really wasn’t much information about what it is or how to use it, only that it’s a cheap way to get a crazy fast SAN connection, which isn’t the whole story.  Over the next few weeks I will cover what Infiniband is and how to make an informed decision about whether it’s right for you or not.

I will start by going over some terminology that goes with Infiniband.  This will lay the groundwork for the next few posts.  One of the mistakes I made with Infiniband was not understanding the terminology; I just made the false assumption that it’s just like Ethernet, only faster.

Infiniband Speeds:

  1. SDR:  Single Data Rate, this is referred to as 10Gbps, although with the overhead from encoding you’ll only see about 8Gbps.  This is the cheapest speed to start with for Infiniband; switches run about $200.  These use a CX4 connector.
  2. DDR: Double Data Rate, this is referred to as 20Gbps when it’s really only 16Gbps with the encoding overhead.   DDR switches start around $400-$500.  DDR connections still use a CX4 connector.
  3. QDR: Quad Data Rate, this is referred to as 40Gbps, when it’s really only 32Gbps with the encoding overhead.  QDR switches start around $1000.  QDR and above connections use a QSFP connector.
  4. FDR-10, FDR & EDR:  these run at 40Gbps, 56Gbps and 100Gbps respectively.   The encoding overhead on these is only about 3%.   They are beyond the scope of this article.
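The gap between the marketing number and the usable number in the first three entries comes from 8b/10b encoding: every 8 bits of data are sent as a 10-bit symbol on the wire, so only 80% of the raw rate is payload.  FDR and EDR switch to 64b/66b encoding, which is where the roughly 3% figure comes from:

\[
20\ \mathrm{Gbps} \times \tfrac{8}{10} = 16\ \mathrm{Gbps}
\qquad\qquad
1 - \tfrac{64}{66} \approx 3\%
\]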

HCA:  

Host Channel Adapter; this is what the Infiniband PCI adapter is called.

Infiniband Cable Types: 

Infiniband cables are expensive: SDR / DDR cables run around $30-$50 each, and QSFP cables can go for $70-$100 at least.   You have to be very careful when shopping for cables, as Infiniband CX4 cables look very similar to an SAS SFF-8470 cable.  If the price seems too good to be true, then it’s not an Infiniband cable.    They come in both copper and fiber varieties; the optical cables are lighter and come in longer lengths, but tend to be more expensive.

  1. CX4:  This is the cable type used for both SDR and DDR Infiniband.  The problem is that while DDR CX4 is backward compatible with SDR, the reverse is not true.  Cables for sale on the internet don’t always specify which type of CX4 it is, so if it doesn’t say “DDR” on it, make sure to ask.  These connectors tend to be made of steel and are very heavy duty.  They come in either “pinch” style or “latch” style; either works on adapters and switches.
  2. QSFP:  This is used for QDR and above connections, but it is backward compatible, so you can get a QSFP to CX4 cable adapter.  QSFP cables have a 2-3 inch plug that is inserted into the switch and adapter for a more secure connection than a CX4 connector.  They usually have a pull tab dangling off to remove the cable after it’s inserted.

Signaling Rate:

Each Infiniband speed has a base signaling rate: for SDR it’s 2.5Gbps per lane, DDR is 5Gbps and QDR is 10Gbps.   Sometimes you will see on cables, adapters and switches either “CX1”, “CX4” or “CX12”; this refers to how many lanes of traffic are supported by that link.  Mostly you will see “CX4” connections, which would give you a 10Gbps SDR connection or a 20Gbps DDR connection.  Some of the higher-end switches offer “CX12” connections, which would give you a 30Gbps SDR connection and a 60Gbps DDR connection.  CX12 uses a different type of connector and cabling, so if you buy a switch with CX12 connections, make sure you have CX12 connectors on the cards and CX12 cables to connect them.

Infiniband Switches:

Infiniband, like Ethernet, has switches, and they come in either managed or unmanaged versions.  Unless the switch specifically says “managed”, it’s not.  Infiniband is different in that you can daisy chain the adapters together to avoid using a switch, but you will experience some performance loss.  Obviously, daisy chaining them requires dual-port cards.

Subnet Manager:

For an Infiniband fabric to be fully functional, you must have at least one subnet manager running.   A subnet manager assigns a unique identifier to each adapter and builds a routing table.  You can have multiple subnet managers running for failover, but only one can be active at the same time.  The second one you add will detect the first one running and switch to a passive mode.  Most managed switches will include a subnet manager, but if it’s not managed, you’ll have to run the subnet manager on one of the connected nodes.  OpenSM is included with most drivers.
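On a Windows node, a quick way to see whether your driver package installed a subnet manager is to just look for the service.  The service name “opensm” here is an assumption (it’s what I’d expect from an OpenSM install bundled with the driver), so adjust it to whatever your package actually registers:

```powershell
# Look for an OpenSM service, and if it exists, start it with Windows
$sm = Get-Service -Name "opensm*" -ErrorAction SilentlyContinue
if ($sm) {
    Set-Service -Name $sm.Name -StartupType Automatic
    Start-Service -Name $sm.Name
} else {
    Write-Warning "No OpenSM service found; run the subnet manager on the switch or another node."
}
```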

Infiniband Adapters:

These come in many shapes and sizes.  I try to stay with Mellanox and Voltaire hardware, as they are still in the Infiniband business and have a very active support community.   You will see “Infinihost III” cards for about $30 each, but I would go for at least ConnectX cards, or if you’re running Windows, go for ConnectX-2 cards, as there is better driver support.

IP over Infiniband:

IPoIB is used to run IP over Infiniband, so you can assign IP addresses and ping other machines.  Depending on your OS it’s either installed by default (Windows) or has to be enabled (Linux).  IPoIB lets you use iSCSI and other IP based applications over your fast Infiniband fabric.  IPoIB does add some overhead to the fabric; depending on what you’re running, it’s usually around 25%.  IPoIB is also not bridgeable, so even though Windows sees it as just another network adapter, you will be unable to share your Infiniband network with your virtual machines.  The only way to do that is with network virtualization.
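On the Windows side, because the IPoIB interface really does show up as just another adapter, addressing it is the usual cmdlet.  The interface alias and address here are made up, so check Get-NetAdapter for the real name on your host:

```powershell
# Find the IPoIB adapter, then give it a static address on the SAN subnet
Get-NetAdapter | Select-Object Name, InterfaceDescription, LinkSpeed

New-NetIPAddress -InterfaceAlias "Ethernet (IPoIB)" `
    -IPAddress 10.10.10.11 -PrefixLength 24
```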

SRP:

SCSI RDMA Protocol (sometimes called SCSI Remote Protocol), this is the alternative to using IPoIB and iSCSI.  SRP is basically running SCSI commands directly over the Infiniband fabric, giving you a very low latency and high speed connection without the overhead from IPoIB.  This is only available on Infiniband and certain 10Gb Ethernet adapters, and for SRP to be available, the drivers must support it.

SCST:

Generic SCSI target subsystem.  This is software used on Linux to give you Infiniband or FC targets.  There are different ways to get SCST: either add it to an existing Linux install or have it come with your distribution.  ESOS and Openfiler come with it.   SCST is not currently supported under Windows, so if you want a lower-priced or free Infiniband target server, you will use SCST on Linux; there is no way to cheaply or easily run an Infiniband target server under Windows unless you use IPoIB and iSCSI.

This is the first part of my Infiniband Introduction, next I will explain some setup issues and what makes it good and bad.

Links for further Research:

http://www.openfabrics.org Openfabrics is the home of open-source driver software, mostly for Mellanox adapters.  While the site is still running, the forums are dead and the website hasn’t shown activity in a while.  The driver still does not support Server 2012 or 2012 R2.

http://community.mellanox.com/welcome Active support site for Mellanox Infiniband products.  Help there is generally useful and you’ll usually get a reply to help requests.

http://www.servethehome.com/ A website with good information and many people using Infiniband that are active on their forums.

The Monster Network is earning its name early this year!


I haven’t been keeping the blog up to date with the changes, but this year has been crazy so far.

1.  I finally got fed up with the poor SAN performance and replaced the final two 4+ year old drives in the RAID-50 array.  They were WD Green drives, and I think I was being bitten by one or both of them going bad, along with the TLER issue with those types of drives in a RAID.

2. I rebuilt one of the servers into a new case, which exposed some strange issues on my core-i3 hosts when pfsense is hosted on them, things like poor bandwidth and DNS issues, even when both the host and the guest have plenty of RAM and CPU cycles.

3.  I had a heat sink break off one of my ConnectX Infiniband Cards, causing the card to cook itself, and the only reason I found that out was my SAN fabric was dropping out at random times, except when that host was turned off.  Opened it up as part of the rebuild and saw the heatsink hanging off the card.

4. I apparently also have a bad Infiniband cable, which I’m trying to track down.

5. My Infiniband switch is managed (Voltaire ISR9024-M), but the serial port uses a mini-USB connector for some stupid reason, so I’m unable to research the above Infiniband issues; trying to track down a cable for it has proved difficult at best.

6. My pfsense VM corrupted itself, causing me to have to reload pfsense from scratch and from memory as far as VLAN assignments go…joy.

7. Trying to get a host to simply boot off a USB drive proved much more difficult than it should have been.  You have to flip the “removable” bit in the firmware, but the tools to do that are very hard to find and use.  This also caused me to ruin 2 flash drives, at which point I gave up and bought an SSD replacement.

8. The power supply went out on a host and had to be replaced.

9. The host I rebuilt now has an issue with the NICs where not all of them are passing VLAN tags from the trunk ports, so I’m unable to team them until I diagnose which ports are the issue and why it’s happening.

10. I used one of the WD Green drives I pulled from the SAN box for local VM storage, and when the server was rebooted, the drive letter was lost.  I posted my frustration on Facebook and it was suggested that I had a corrupt partition table and should completely wipe the drive and start from scratch, which fixed it (rough sketch below).
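For the record, the “wipe it and start from scratch” fix is quick from PowerShell.  It is destructive, so double-check the disk number first; the number 2 here is just an example:

```powershell
# THIS DESTROYS EVERYTHING ON THE DISK -- verify the disk number first
Get-Disk

Clear-Disk -Number 2 -RemoveData -Confirm:$false
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "LocalVMs"
```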

Now that most of the above are fixed, I’m tackling the final items in my current lab rebuild phase:

1. Configure Btier on my ESOS SAN target (automatic storage tiering)

2. Move remaining VMs to clustered storage

3. Rebuild last core-i3 server into a Xeon server

Free Tools worth their weight in gold Part 1: ESOS


Here in the Monster Network, I try to do things as cheaply and efficiently as possible.  When the network was first coming together, I decided I wanted a SAN for clustering.  Originally it was a Windows Storage Server device running iSCSI target software, with the cluster members connected via MPIO dual gigabit connections.  This worked well for quite a while, but I never saw throughput go much over 1 gigabit and it never seemed to run as fast as I thought it would.  So I finally sat down, sketched out what I had and what I wanted to do, and started doing research.  This led me to either Fibre Channel or Infiniband.  My requirements led me to picking Infiniband, for reasons I’ll cover in a later post.  As part of that decision I could no longer use Windows, as the drivers do not support an Infiniband target.  My research led me to a driver package called SCST, which exists solely on Linux.

As I said in the first sentence, I also try to be efficient, and having to re-learn Linux, install packages, and deal with all that junk just didn’t work for me.  After further research, I located a software package called “ESOS” (Enterprise Storage Operating System), which is actually a copy of Linux with all the needed packages pre-installed and configured.  You simply copy it to a flash drive and boot off it, and it gives you a nice curses-based text console for configuration called the TUI (Text-based User Interface).  This software allows me to use Infiniband for my SAN traffic.  Now, as I will cover in a later post (possibly to be called “Infiniband: It’s great for Linux, but sucks for Windows”), part of what makes Infiniband so awesome is RDMA, which offloads a lot of the processing to the Infiniband card, making storage incredibly fast, as long as the drivers support it.  I use Mellanox ConnectX cards for my Infiniband traffic, and the drivers for these cards do not support RDMA under Server 2012.  The cards are no longer supported by Mellanox, and while there are open-source drivers for them, those drivers only work on 2008 R2.  So that left me back at using iSCSI for traffic, but this time over an 8Gbps connection instead of 1Gbps.  Luckily ESOS supports both RDMA and iSCSI, and you can mix and match.

I will write more on this later, but ESOS has been for the most part rock solid here in the Monster Network.  The main issues I have had with it so far have been caused by my inexperience with Linux or by defective cables or hardware.   While ESOS doesn’t support most of the cool things that Windows has, like storage tiering or storage pools, it does have software RAID support, hardware RAID support, 3 types of SSD storage acceleration, tape drive emulation, and monitoring.  It will also dump performance data to a MySQL server, along with sending e-mail alerts.   Also, since ESOS boots from a flash drive, it’s not installed on a platter, so it won’t be taken out by a bad hard drive.

ESOS

Lesson in K-I-S-S troubleshooting


This is what I have to deal with at the house.  I had a massive loss of data a few months ago (about 4TB).  Nothing irreplaceable, but it took me 2-3 months to re-create and re-download the data, and some of it is lost forever; luckily no family pictures were affected.  So after that I went through and started using System Center Data Protection Manager for backups.  I am using two external 3TB USB 3.0 drives and one external 1.5TB USB 2.0 drive.  The first issue I encountered was that DPM was failing to run the first backup.  That turned out to be because the new drives were 4K sector drives and the 1.5TB wasn’t, so I ended up putting .VHDXs on the drives and backing up to those.  Problem solved, right?  Wrong.

Randomly, one of the USB drives would drop off the machine and would not re-appear until the DPM machine was rebooted.  This was after I loaded a program to keep the USB drives alive by writing a file to each of them every 7 seconds.   That continued for over a month until I swapped out the actual drive with a brand new one, and the problem went away for a week and then came back.  So I’m pulling my hair out (what little is left); I even checked the power settings and turned them all off, thinking the drive was going into sleep mode in the middle of the night (during the backup).  Finally, I received a reply to an old post about previous issues on a support board that recommended I check the cable for problems.  How in the world would a USB cable ever be bad?  Inconceivable!!  But I needed longer cables anyway, so I ordered 2 new USB 3.0 cables from Amazon.  And guess what?  Problem resolved.  The cable was defective, and when I swapped out the drive, I never changed the cable out, because how could a brand new cable be bad?

So let that be a lesson in troubleshooting, children.  No matter how complicated the problem, check the simple things first; you’ll save yourself a headache.  This is also why IT support people always ask “is the computer plugged in” or “have you rebooted”: they’ve all been burned by assuming the basics, only to find out “hey, this black thingie isn’t plugged into the wall anymore, is that important?”