A compilation of configuration options and lessons for iPXE booting various tools across a network.

Friday, 22 April 2016
HP EliteDesk 800 G1 - DisplayPort to HDMI Audio
I've been fighting for several days to get audio working from an HP EliteDesk 800 G1 SFF into my Panasonic TV using a DisplayPort to HDMI cable. The PC is running Windows 10 build 10586. I tried lots of things, including BIOS patches and various driver builds from MS and Intel, but all to no avail.
However, I have managed to get it working by changing the Default Format for output down from DVD or CD Quality to FM Radio Quality in the Advanced TV Properties. I'm not sure whether this is a TV or PC/driver problem, but I suspect it's the latter: I had the same problem with another TV, yet the cable works fine through a Surface Pro 3 Mini-DP to DP to HDMI adapter, so I think this is PC-related.
Hopefully this helps someone else out.
Sunday, 13 September 2015
Saving Images to AWS S3 Scriptomagically
Whilst I've been messing around creating boot images, I've hit the problem of needing to archive off some large images for later use. Now that I've finally got access to a high-bandwidth internet link, I can back stuff up to Amazon's AWS S3 cloud in a reasonably timely fashion.
s3cmd does a great job of interfacing with AWS from a Linux CLI, but it is designed to deal with pre-created files, not data that is generated dynamically. When you're talking about multi-gigabyte files, it isn't always an option to make a local archive file before pushing it to the remote storage location.
I'm used to using pigz, dd and ssh to copy files like this, and wanted to achieve something similar with S3, but there don't seem to be many guides to doing so. I have, however, made it work on my Debian-based distro relatively easily.
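For context, this is the sort of ssh-based pipeline I was used to - a minimal sketch, with illustrative device, host and path names:
# stream a disk over ssh, compressing in flight with pigz
dd if=/dev/sda bs=1M | pigz | ssh user@backuphost 'cat > /backup/sda.img.gz'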
Tooling
This is the tooling I combined:
s3cmd
You need a recent version of s3cmd to make this work - v1.5.5 or above is apparently what adds the stdin/stdout support you'll need. As of writing, this can be obtained from the s3tools git repository @ https://github.com/s3tools/s3cmd
You'll need git and some Python bits and pieces, but building was straightforward in my case.
Before you start, make sure you set up s3cmd using the command s3cmd --configure
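As a rough sketch, grabbing it and running it straight from the checkout looks something like this (the Python dependency packages are an assumption and vary by distro):
sudo apt-get install git python-setuptools python-dateutil
git clone https://github.com/s3tools/s3cmd.git
cd s3cmd
./s3cmd --version
./s3cmd --configure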
pigz
I use pigz, although you can use gzip to achieve the same thing. For those that don't know, pigz is a multi-threaded implementation of gzip - it offers much better performance than gzip on modern multi-core systems.
tar
tar is on pretty much every Linux system, and deals with folder contents in a way that gzip/pigz alone can't.
Usage
The command I built is as follows:
tar cvf - --use-compress-prog=pigz /tmp/source/directory/* --recursion --exclude='*.vswp' --exclude='*.log' --exclude='*.hlog' | /path/to/s3cmd put - s3://bucket.location/BackupFolder/BackupFile.tar.gz --verbose -rr
I think it's pretty self-explanatory, but I'll run through the command anyway...
tar cvf = create a tar archive, verbosely, with the next argument naming the output file
- = stands for stdout in tar parlance
--use-compress-prog=pigz = compress through pigz; you can probably swap this for any compression app which supports writing to stdout.
/tmp/source/directory/* = the directory or mount point where your source files are coming from
--recursion = recurse through the directories to pick up all the files
--exclude='*.vswp' --exclude='*.log' --exclude='*.hlog' = exclude various file types (in this instance, I was backing up a broken VMFS storage array)
| = pipes the output to the next app
/path/to/s3cmd = the directory where s3cmd resides - in my instance, I'd installed the git repository version
put = send to s3; put works with a single file name.
- = use stdin as the source
s3://bucket.location/BackupFolder/BackupFile.tar.gz = the s3 bucket and path where you want the output stored
--verbose = verbose debugging output and status tracking
-rr = reduced redundancy storage - less expensive than full redundancy; include/exclude this based on your needs.
The biggest problem with this is you don't really get an idea of how long a backup will take. s3cmd splits the upload into chunks, but you don't know how many chunks there will be until the process has completed. I average around 6 MB/s, but a multi-gigabyte file can still take several hours to upload. Whilst I didn't time it exactly, a 70GB file, compressed to 10GB, took around 90 minutes to send to s3.
You may want to leave your backup running in a screen session.
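For example, with a hypothetical session name (detach with Ctrl-A then d, and reattach later to check on progress):
screen -S s3backup
tar cvf - --use-compress-prog=pigz /tmp/source/directory/* | /path/to/s3cmd put - s3://bucket.location/BackupFolder/BackupFile.tar.gz --verbose -rr
screen -r s3backup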
Sunday, 21 September 2014
Azure Self-Signed Cert
I've been messing around with some of the Azure services, and am about to try some of the desktop utilities, such as the Hyper-V converter, to have a go at publishing services. One thing you'll come across if you're trying to do this is the need to create a self-signed management certificate to allow these apps to authenticate, and all the TechNet articles mention the makecert.exe tool.
The problem is that makecert is bundled into the Windows 8.1 SDK and Visual Studio 2013 Express downloads, both of which are several hundred megabytes - overkill for what we need.
Well, there is an easy way to get makecert using only about 9 MB of storage space - download the 8.1 SDK installer from http://go.microsoft.com/fwlink/p/?linkid=84091 and run it. When prompted to select the tools, you only need to install the MSI Tools. This gives you makecert (as well as a few other bits and bobs) in a much more compact form than installing all of the developer tools.
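Once you have makecert, creating the self-signed management certificate follows the usual TechNet recipe - something along these lines, where the certificate name is just an example:
makecert -sky exchange -r -n "CN=AzureMgmtCert" -pe -a sha1 -len 2048 -ss My "AzureMgmtCert.cer"
The resulting .cer file is what you upload to the Azure management portal; the private key stays in your personal certificate store.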
Friday, 22 August 2014
Weird goings-on with a Live CD and libdvdcss.so.2
I've been messing around trying to make a live CD with some transcoding/ripping utilities built in, to make use of some of the spare hardware I've got lying around. More on this later, but I've been reworking the guide @ http://willhaley.com/blog/create-a-custom-debian-live-environment/ with my own utilities and tools.
One problem I've been challenged with over the last couple of days is HandBrakeCLI bombing out with the message:
root@av3-build2108141:/mnt/Videos/Movies/dvdrip/91# HandBrakeCLI -i BHD.iso -o BHD.mkv --preset="High Profile"
[20:41:41] hb_init: starting libhb thread
HandBrake 0.9.9 (2014070200) - Linux x86_64 - http://handbrake.fr
4 CPUs detected
Opening BHD.iso...
[20:41:41] hb_scan: path=BHD.iso, title_index=1
index_parse.c:191: indx_parse(): error opening BHD.iso/BDMV/index.bdmv
index_parse.c:191: indx_parse(): error opening BHD.iso/BDMV/BACKUP/index.bdmv
bluray.c:2341: nav_get_title_list(BHD.iso) failed
[20:41:42] bd: not a bd - trying as a stream/file instead
libdvdnav: Using dvdnav version 4.1.3
libdvdread: Missing symbols in libdvdcss.so.2, this shouldn't happen !
libdvdread: Using libdvdcss version for DVD access
Segmentation fault
This had been bugging me, as it worked before I converted the image to a live CD. I wondered if it was some kind of problem with the lack of 'real' disk space, or a lack of memory, or something like that, but nothing I could find would identify it.
Finally, I started looking into libdvdcss rather than HandBrake itself. I think what confused me is that the symbols error looks like a warning, especially given the follow-on message which makes it look like libdvdcss is continuing. Anyway, eventually, I ran an md5sum on the libdvdcss.so.2 file to see if it matched the one on a non-live machine (a virtually identical build).
root@av3-build2108141:/# md5sum /usr/lib/x86_64-linux-gnu/libdvdcss.so.2
4702028ab20843fd5cb1e9ca4b720a72 /usr/lib/x86_64-linux-gnu/libdvdcss.so.2
N.B. libdvdcss.so.2 is symlinked to libdvdcss.so.2.1.0 in my current Debian sid-based build.
On the donor machine:
root@mediasvr:/usr/lib# md5sum x86_64-linux-gnu/libdvdcss.so.2
c9b314d9ed2688223c427bc5e5a39e6f x86_64-linux-gnu/libdvdcss.so.2
So I SCPd the source file onto the live machine, checked that the md5sum matched the donor machine (it did), and repeated the HandBrake job. Lo and behold, it worked! I then restreamed the two files into the live filesystem and, success, it just works.
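The fix boils down to something like this, assuming the same symlink layout as above (my hostnames and paths; adjust for your machines):
# pull the known-good library from the donor machine, then verify it
scp root@mediasvr:/usr/lib/x86_64-linux-gnu/libdvdcss.so.2.1.0 /usr/lib/x86_64-linux-gnu/
md5sum /usr/lib/x86_64-linux-gnu/libdvdcss.so.2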
So I don't know if something funky happens when the image is created via the symlink, but it's quite easy to fix once you understand the problem.
Hope this helps someone, and I'll be back soon with more details about building a live image, then booting it using iPXE.
Saturday, 26 October 2013
iPXE Booting OpenElec
Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes.
This is a great live image for getting up and running with XBMC, or for testing it before committing to installing to a hard disk. I've set it up today to boot from the network to see how well it works on a machine I'm thinking about using for a media centre. It was a bit of a pain to get working, but now that it is, it works fine.
First of all, download a copy of OpenElec from http://www.openelec.tv/get-openelec/download - I got the tarballed version entitled OpenELEC-Generic.x86_64-devel-20131026131436-r16293 from the developer sources, but I think the stable versions will work equally well.
This was copied to my NAS server and untarred using the command:
tar -xvf OpenELEC-Generic.x86_64-devel-20131026131436-r16293.tar
This then spat out what I presume to be an OpenElec live CD image or some such (but who cares - we don't do CDs, do we? :) ). Within the created folder there is a 'target' folder, which contains the images you need to boot from.
Make sure the target folder is in a location where it is accessible over both HTTP and NFS. Note, I've not been able to make this boot using HTTP alone, and I'm not sure it's possible, because it seems to use NFS as a persistent storage location for your configuration.
Next, create a folder for storing your persistent information (I created a folder called persistent within my target folder).
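For reference, a minimal sketch of the NFS exports on the server (10.222.222.50 in my example) might look like this - the paths are illustrative and the options will depend on your NAS:
# /etc/exports
/boot.server/openelec                *(ro,no_subtree_check)
/boot.server/openelec/persistent     *(rw,no_root_squash,no_subtree_check)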
Now update your iPXE menu.
:OpenElec
echo Booting OpenElec Media Centre
echo HTTP and NAS Method
kernel http://boot.server/openelec/OpenELEC-Generic.x86_64-devel-20131026131436-r16293/target/KERNEL boot=NFS=10.222.222.50:/boot.server/openelec/OpenELEC-Generic.x86_64-devel-20131026131436-r16293/target/ disk=NFS=10.222.222.50:/boot.server/openelec/persistent/ netboot=nfs ssh ip=dhcp
boot
So this loads the kernel over HTTP from the server, and passes the NFS boot location and the persistent NFS location. Note, neither of the latter two defines a file, just the folder paths; the kernel knows what it's looking for when it boots.
The final variables tell the kernel that it is being booted with NFS required, to enable ssh (if you want it) and to get the IP using DHCP. There are a number of other modes for debugging, text-only mode, that sort of thing, but they are not discussed here.
Anyway, other than configuring the iPXE menu to call :OpenElec, that's all there is to it.
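If your menu uses item entries like the other posts on this blog, the hook-up can be as simple as this (the label text is just an example):
item OpenElec OpenElec Media Centre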
Sunday, 13 January 2013
iPXE CloneZilla
CloneZilla is a Linux toolset that allows you to clone either a partition or a whole disk to another location: either a connected storage device, or remotely over the network. This is a great tool for imaging systems before you work on them, and lets you keep a copy in case the worst should happen. It bundles a variety of tools in order to read most of the popular filesystems in use, falling back to dd to copy each disk sector if you're using some obscure or proprietary filesystem. This is the FOSS alternative to Norton Ghost!
The great thing about CloneZilla is that it's quick and easy to get it booting via iPXE, so it is worth investing a small amount of time in setting it up so that you have it ready to go should you need it.
These instructions are based on release clonezilla-live-20121217-quantal.iso which seems to be versioned 2.0.1-15.
Download the ISO from the CloneZilla site. Use 7zip or your favourite image-opening tool to open the ISO. You need to extract the following files:
- vmlinuz
- initrd.img
- filesystem.squashfs
and put them onto your boot webserver. In this example, I have created a folder called CloneZilla.
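If you prefer the command line, 7z can pull the files straight out of the ISO - a sketch, assuming the live/ layout this Debian-based release uses (check your ISO if the paths differ):
7z e clonezilla-live-20121217-quantal.iso live/vmlinuz live/initrd.img live/filesystem.squashfs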
############ CloneZilla ############
:Clonezilla
echo Starting CloneZilla with default options
kernel http://boot.server/CloneZilla/vmlinuz
initrd http://boot.server/CloneZilla/initrd.img
imgargs vmlinuz boot=live config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" ocs_daemonon="ssh" usercrypted=Kb/VNchPYhuf6 ocs_lang="" vga=788 nosplash noprompt fetch=http://boot.server/CloneZilla/filesystem.squashfs
boot || goto failed
goto start
And that is really about it! You'll notice we pass a few arguments which set various options. The most important is 'fetch=', which tells the image where to download the main filesystem from. The other option I set was 'usercrypted=', which takes a hash generated with the Linux mkpasswd command and uses it as the root password on boot - in this example, iloveclonezilla.
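To generate your own hash for usercrypted=, something along these lines should work (mkpasswd comes from the whois package on Debian; the default legacy crypt produces a 13-character hash like the one above):
mkpasswd iloveclonezilla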
A really easy one this week, but one worth trying. I'm still fighting to get Backtrack5 booting over iPXE without using the ISO method, but this is proving troublesome. I think the image simply isn't able to cope with being booted from an HTTP network source.
Thursday, 3 January 2013
SysRescueCD v3.1.2
So to kick off, we'll start with booting SysRescueCD from http://www.sysresccd.org/.
From Wikipedia:
SystemRescueCd is an operating system for the x86 computer platform, though the primary purpose of SystemRescueCD is to repair unbootable or otherwise damaged computer systems after a system crash. SystemRescueCD is not intended to be used as a permanent operating system. It runs from a Live CD or a USB flash drive. It was designed by a team led by François Dupoux, and is based on the Gentoo Linux distribution.
For this activity, I used the download versioned v3.1.2 which I got from http://goo.gl/F36zV
My issue was that the machine I was trying to boot didn't seem to have enough memory to use the memdisk/ISO boot option common for most installs, which meant I had to try to boot the ISOLINUX image directly.
The Software:
Open the ISO in your favorite ISO opening tool (I use 7zip).
Extract the following files onto your web boot server (I used a sub-directory called SysRescueCD):
- sysrcd.dat
- sysrcd.md5
- ISOLINUX/rescue32 (or 64)
- ISOLINUX/initram.igz
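As in the CloneZilla post above, 7z can do this from the command line - a sketch, with the ISO filename assumed (adjust to whatever you downloaded):
7z e systemrescuecd-x86-3.1.2.iso sysrcd.dat sysrcd.md5 ISOLINUX/rescue32 ISOLINUX/initram.igz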
The Webserver Config:
This is the menu display section of the config:
item SysRescueCD32 SysRescueCD - 32bit
And this is the execution section required to boot it.
############ SYSRESCUECD ############
:SysRescueCD32
echo Starting Sys RescueCD (32bit) with default options
initrd http://boot.server/SysRescueCD/initram.igz
chain http://boot.server/SysRescueCD/rescue32 cdroot docache dodhcp setkmap=uk netboot=http://boot.server/SysRescueCD/sysrcd.dat
boot || goto failed
goto start
Note, you can change setkmap= to your preferred keyboard mapping; I'm in the UK, so that is the one I use. If you leave this option unset, it will prompt you during boot.
If you change rescue32 to rescue64 or one of the alternate kernel images, the same commands seem to work. There doesn't seem to be any difference in using netboot= or boothttp= to locate the main disk image.
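For completeness, an untested 64-bit entry along those lines would look like this (assuming you extracted rescue64 from the ISO):
item SysRescueCD64 SysRescueCD - 64bit

:SysRescueCD64
echo Starting Sys RescueCD (64bit) with default options
initrd http://boot.server/SysRescueCD/initram.igz
chain http://boot.server/SysRescueCD/rescue64 cdroot docache dodhcp setkmap=uk netboot=http://boot.server/SysRescueCD/sysrcd.dat
boot || goto failed
goto start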
Finally, I'm using a Thecus NAS as my boot webserver, running FaJo's Apache webserver module. For some reason, whilst the initrd and kernel load perfectly well, the image refused to boot, freezing at 'null' during the download. Another Apache webserver didn't exhibit the same condition, but it's something to be aware of. If I find the cause, I'll update this post.