
The GNU/Linux and Free Software resource thread

This is a sticky topic.

  • #31
    Some more Virtual Machines:


    Stands for Linux "Kernel-based Virtual Machine". I personally think it's a silly name, as KVM means "Keyboard Video Mouse" where I come from, and means a piece of hardware that lets you use one keyboard, video and mouse for multiple machines. Boo to people who use already taken acronyms. But anyway...

    KVM is the new Hypervisor-based Virtual Machine being worked on directly by the Linux Kernel team. So far it seems to be the fastest of the hypervisors according to benchmarks. I've not used it myself, but from what I read it works very closely with QEmu to provide bridged virtual ethernet adaptors and other interfaces to talk to the kernel's IP (and other I/O) stacks. It's definitely the baby of the VM world, existing only from Linux 2.6.20 kernel and up (which itself is only a couple of months old at time of writing). While the big commercial distros like RedHat and SuSE are backing Xen, Ubuntu announced that their latest release "Feisty Fawn" will support options to install both a KVM host as well as a KVM virtual machine straight off the install disk/ISO. That means simple point-and-click VM setup for users, which is always a good thing.
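    A minimal sketch of a hand-rolled KVM session (module and binary names vary by distro and version, so treat these commands as illustrative rather than gospel):

    ```shell
    # Check the CPU advertises hardware virtualisation
    # (vmx = Intel VT, svm = AMD-V); KVM needs one of them.
    egrep 'vmx|svm' /proc/cpuinfo

    # Load the kernel modules (kvm-amd on AMD boxes).
    sudo modprobe kvm kvm-intel

    # Boot a guest from a disk image, installing from an ISO. QEmu
    # provides the emulated hardware, /dev/kvm provides the speed.
    sudo kvm -m 512 -hda guest.img -cdrom install.iso -boot d
    ```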


    VirtualBox by InnoTek (make sure you put a cover letter on your TPS reports). Again, not one I've used, but it seems to be like the VMWare of old where you can have a complete running session with display via a window on your desktop. This isn't how I run my VM servers (they run GUI-less and outside of a user session) but for folks who want a Windows desktop on their Linux box (or Vista session on their XP desktop) it's a nice way to do things.

    Networking is NATed and not bridged, which can be a pain in the arse for some. However I think that's tweakable (much like QEmu which is similar in that respect).

    It also supports hardware virtualisation extensions (ie: it's speedy).
    Last edited by elvis; 26th March 2007, 12:20 PM.


    • #32
      Multimedia and Audio Players

      Linux is quite literally swamped with multimedia players and dedicated audio players. There are plenty of command-line players out there that work very well when interfaced with web frontends if you ever want to set up a web-accessible jukebox or similar sort of thing. One thing I hate about GUI/Graphic stuff is that it makes for a really poor dedicated jukebox or hidden sound system. But anyways... I'll leave those for another day. Today I'll talk just about desktop players.

      As mentioned, there are literally hundreds of the buggers. Rather than go into excruciating detail about all of them, I'll just stick to the most popular dozen or so.

      VLC - VideoLAN Client

      This is my personal favourite media player hands down. It works on everything (Windows/Mac/Linux) and has all the necessary codecs built in. It's developed in France, where there are no stupid DMCA-style laws, so people are free to make media players that can play proprietary codecs like WMV without fear of being sued (honestly, who the **** in their right mind sues someone over making a free media player??? God I hate America sometimes).

      VideoLAN can do neat things like:

      - Stream media from a central server to a listening client
      - Stream media to multiple clients simultaneously (have all your TVs playing the same AVIs from a central computer)
      - Stream media and break it up into segments (make your own "video wall" with multiple TVs!)
      - Play back media from any device or file - if you have ISO files that are DVD images, you can play straight from the ISO - no need to burn!
      - Use VLC as a plugin for Firefox to watch WMV, Quicktime and other formats off the net on any computer (great for Linux and Mac users, or Windows users who hate Media Player)
      - Stream from any protocol - http, ftp, udp unicast/multicast, whatever!
      - Full support for post-processing, anti-aliasing, interlace fixup, etc, etc.
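      The streaming tricks above can all be driven from the command line. A rough sketch (the `#standard` output-chain syntax and host names here are illustrative; check the VideoLAN streaming docs for your version):

      ```shell
      # On the server: stream a file over HTTP as an MPEG transport
      # stream on port 8080.
      vlc movie.avi --sout '#standard{access=http,mux=ts,dst=:8080}'

      # On each client: tune in.
      vlc http://mediaserver:8080
      ```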

      Brilliant software. I know a lot of die-hard Windows users who use this instead of Windows Media Player because it really is that good, and dead simple to use.

      VLC has recently been ported to handheld devices too. If you're a Palm/iPaq/WinPhone/iPhone/mobile-phone user, keep an eye out to see if it supports your device.


      MPlayer

      Will literally play ANYTHING under the sun. Movies, audio, even DVB streams from TV/cable/satellite/capture cards. Comes in both command-line form and GUI for GNOME, KDE, TCL or anything you like. There are even dedicated versions for Windows and MacOSX if you are so inclined.

      The command-line version is very cool, because you can use it to quickly "transcode" from one file format to another. eg: play your DVD, and set the output to be a file instead of the screen, which is piped through XVID or a similar compression tool. End result is your DVD saved as an XVID or AVI file! Remember that in Linux "everything is a file", so redirecting output from screen/speakers to a file or even another computer is trivial, and tools like MPlayer suddenly become much more useful than for merely watching videos.
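      That DVD-to-XviD trick looks roughly like this with MEncoder, MPlayer's encoding companion (the title number and bitrate are made up; see the MPlayer documentation for the full option list):

      ```shell
      # Rip title 1 of the DVD to an XviD-compressed AVI with MP3 audio.
      mencoder dvd://1 -ovc xvid -xvidencopts bitrate=1200 \
          -oac mp3lame -o movie.avi
      ```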


      Xine

      Similar in design to MPlayer, it's another popular video/DVD player with some neat back-end tools to do all sorts of trickery.


      Totem

      The default movie player for GNOME, it's a bit odd in that it's more of a frontend for other movie playing systems. By default it uses GStreamer, but can be plugged into Xine, MPlayer, or other systems. Pretty basic in its functionality. If you use Ubuntu or other GNOME-based distros, you'll probably have this installed by default. If you find it isn't playing the files you have and don't want to go through the process of manually adding codecs, have a look at VLC as an alternative.


      Amarok

      The default audio player for KDE, this has everything you'd expect from a music player: favourite voting, playback of all music filetypes, and organisation and grouping systems. It will also happily sync with any Apple iPod.


      Rhythmbox

      GNOME's answer to Amarok. Same features, including iPod support.




      Three music players I've never used, but which are all very popular. Again, all filetypes and iPod support.


      A WinAmp clone, XMMS is a great music player with a small footprint. More than that, it has an ENORMOUS array of plugins for things like MOD, S3M/ScreamTracker, MIDI, and other oldschool sample or instrument/instruction-driven filetypes.

      Best of all, heaps of plugins have been written to play music out of old games. Super Nintendo/Famicom plugins, Gameboy plugins, Commodore 64 plugins, etc, etc. Grab your ROMs and use this baby to listen to the music within. As with other Linux programs, change the output device from your speakers to your disk, and you can write WAV files (and later compress them to MP3). An easy way to convert an old SNES ROM into a music CD with your favourite game music!
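      Once XMMS's disk-writer output plugin has dumped the tracks to WAV, compressing the lot is a one-liner in the shell (assuming the LAME encoder is installed; the filenames are illustrative):

      ```shell
      # Turn every WAV in the current directory into an MP3.
      for f in *.wav; do
          lame "$f" "${f%.wav}.mp3"
      done
      ```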


      • #33
        Sorry to jump on the thread, but Apple's new OS Leopard is incorporating a bit of Linus(x)

        All 3D all the time (like Pinny had going in one of the AA meet videos)

        And knowing Apple it will be super smooth before the public can get their hands on it, bring on the spots!


        • #34
          Apple's MacOSX has used 3D acceleration of *all* screen drawing since 10.2 (Leopard will be 10.5). It's nothing new on offer - it's just that journalists are the bottom of the evolutionary pit, and are only realising now what it can do.

          Initially this was called Quartz Extreme, and offered only OpenGL 1.4 extensions:

          With OpenGL 2.0 and GLSL (the OpenGL Shading Language), they moved up to "Core Image", which allows pixel-shader level programming to enhance Quartz Extreme's fairly basic "draw a window on a rectangle and move it about" with some new sexiness:

          This all goes way back to Apple's native Postscript/PDF rendering, which is a leftover from its NeXT days:

          NEXTSTEP was an old UNIX-based OS that was the great grandaddy of OSX. Developed by Steve Jobs after he was sacked from Apple (and bought back by Apple when they begged him to return after losing millions of bucks making the extremely shit OS8 and OS9), it was built from the ground up to accelerate print rendering - an idea going back to the old Xerox days (Xerox invented the GUI, and came up with the design for most desktop publishing and print stuff you see today, long before Apple or even Microsoft or Linux were on the scene).

          So NeXT was all about Postscript - the native print language. It was designed to make using it as fast as possible. Postscript is all vector (ie: mathematical shape primitives like triangles and ellipses - kinda like what 3D cards do, only in a single 2D plane). PDF is merely an extension of Postscript, and Steve Jobs and his cohorts over at Apple figured out a long time ago that your average 3D video card was utterly wasted when you were doing normal desktop stuff. They were quick to jump on the technology and use a super speedy graphics card to accelerate their entire desktop. Seeing as MacOSX is built on Postscript and PDF, accelerating it via OpenGL was trivial, as standard primitives like triangles and ellipses are what video cards do best.

          Apple were really the first to do this, and they are still the best. Both Microsoft's and Linux/Xorg's 3D desktops are last-minute hacks. Both Vista and Linux/Xorg use the acceleration at a very high level - essentially all information inside a window is drawn by software, and then slapped on a polygon. Speed wise, this is nice for pretty effects like twirly windows and such, but for actual useful speed boosts, it's useless.

          MacOSX accelerates basic drawing functions wherever possible. For raster items (ie: JPEG images, etc) nothing is accelerated. But for vector (PDF, SVG, etc) everything is much faster. Luckily 90% of MacOSX's desktop (named "Aqua") is vector! There was some smart forward planning by Apple.

          [You can easily tell how much of MacOSX is built on Postscript/PDF, because you can export ANYTHING to PDF from MacOSX. Anything you print, and even when you take a desktop screengrab - it's all captured to PDF straight from the processing pipeline. If it's already in a postscript format, it makes sense to just write it to disk instead of converting it to something useless and unscalable like a JPG.]

          Windows and Linux are slow to catch up. Their desktops are still heavily raster, and vector is being added in slowly and in quite a hacked and kludgey fashion. *Some* Linux utilities like the PDF browser Evince can offload PDF/vector rendering onto the video card. I've played with this under Gentoo Linux, and with the right video card it can make simple PDF viewing MUCH faster. Documents open in mere milliseconds, compared to the 15+ seconds it can take in Windows using Adobe Acrobat (simply the slowest and worst PDF browsing software EVER made, IMHO. How it got so famous I'll never know).

          So yeah, nothing new there. Apple's been doing this for years, just nobody noticed. There are times when Apple do really stupid things, and there are times when they are lightyears ahead of everyone else. It's funny now that Linux and Windows are catching up how everyone (and when I say "everyone", I mean "journalists") is looking at MacOSX and going "oooh... I get it now!". These are typically the same folks who 3 years ago were saying things like "pointless waste of time" when referring to the same technologies they now talk up today.
          Last edited by elvis; 28th March 2007, 09:41 PM. Reason: more reference links added


          • #35
            Ubuntu 7.04 Feisty Fawn has made graphical desktop effects extremely accessible, with one-click activation in the system menu.



            • #36

              Screenshot of Linux-KVM running AROS (free AmigaOS3.1 clone) inside a virtual machine. Neat!


              • #37
                More 3D Desktop goodness. The beauty of open source is that development and customisation of things like desktop effects and themes, both 2D and 3D, is mind-bendingly fast. The number of options available to Beryl (the Linux 3D desktop system) users is enormous.

                Last edited by Berty; 23rd September 2007, 06:27 PM.


                • #38
                  WOW! Love the desktop. Kinda makes Vista look like a baby toy in comparison, doesn't it?


                  • #39
                    Originally posted by dezrae View Post
                    WOW! Love the desktop. Kinda makes Vista look like a baby toy in comparison, doesn't it?
                    I wouldn't say "toy", just "unconfigurable".

                    Mac and Vista are very static. There's not a whole lot you can do with their default setups. If you are happy enough to have a machine that was configured by some foreign developer in a set way, that's cool. If you want some sort of personal customisation, then you'll need to look elsewhere.

                    Beryl and the Linux desktop are limited only by the imagination of the end user. You can quite literally load any theme you like (or make your own!) and do anything you want with the 3D tools. The choice of themes and configurations is creeping quite literally into the hundreds, and growing every day.

                    For my personal desktop, I keep away from the excessive bling. I regularly use features like the F8 (tile all windows on current desktop) and F9 (tile all windows on all desktops) keys to find windows I need quickly. Multi-desktops I've been using for 10+ years now. They've been a standard in XFree86 since day dot - something I always miss when I have to use Mac or Windows for a bit. But the cube/slide transition doesn't hurt any, so I leave it on. OpenGL AA fonts are nice too - much better than standard font smoothing on LCDs.

                    Fast preview (mouse-over on a minimised window and you see the contents in a smaller window) can be handy also when searching for the right app. But then again, alt-tab with graphical preview is just as easy.

                    The other effects like burn/explode on window close and others I use occasionally, but generally if I'm doing grunty stuff like running scripts on a few hundred thousand images from a render, I prefer my CPU power to be elsewhere.


                    • #40
                      Cedega 6.0 was released today:

                      Cedega lets you play Windows/DirectX/Direct3D games under Linux. See the bigger post here:

                      The new release promises better performance and better graphics (new DX9 and DX10 features). Sounds like a winner for PC gamers who want to move away from Microsoft operating systems.

                      Cedega 6.0 review:
                      Last edited by elvis; 12th April 2007, 10:11 AM.


                      • #41
                        Thanks Elvis,

                        Unlike Knoppix and Windows, Ubuntu 7.04b doesn't like my PC.
                        Got the dreaded "can't access tty; job control turned off" message, spent a lot of time, tried a few things (hardware changes included), got to the GUI screen with the install icon, but at the end of the day it was inconsistent, so I'm back on Windows.
                        Maybe this can be resolved, but I'm not going to attempt it again until my next PC, as it's been too time consuming.

                        Virtualisation is the buzz of these five years, 2005-2010, isn't it? It's being heavily marketed and accepted by IT managers. It's all fine for small apps, and there are advantages with products from VMware like VI, as everything is RAIDed, but for me it causes no end of issues as a DBA. As an ISP it's great: you can sell a "standalone" server to a client without them knowing it's virtual, but:
                        1. it's slower due to overhead
                        2. it's a single point of failure in itself
                        3. it's yet another dependency
                        4. it's sharing resources with other instances
                        5. running large software in a heavy environment, when something goes wrong the vendor will respond "now try that in a non-VI environment"
                        6. maybe use VI in testing, but in large-scale production, not for me.

                        Give me a simple solution and I will choose it any day.
                        Since a SAN provides virtual disks, I can swap a production server around without virtualisation or clustering in, say, 15 minutes.

                        Also, for my last Windows PC motherboard change I simply swapped the hard drive over; after booting and waiting a few minutes, everything worked.

                        Not being biased here - I use RedHat at work and will get off Windows this year - but every environment has its story.


                        • #42
                          Originally posted by Mikie View Post
                          1. it's slower due to overhead
                          2. it's a single point of failure in itself
                          3. it's yet another dependency
                          4. it's sharing resources with other instances
                          5. running large software in a heavy environment, when something goes wrong the vendor will respond "now try that in a non-VI environment"
                          6. maybe use VI in testing, but in large-scale production, not for me.
                          You are confusing traditional VMs with hypervisors - a common mistake. Remember that hypervisors are more like hardware partitioners, and not full VMs.

                          Some comments on your points quoted:

                          1. It's no slower than running two services on one machine. The "virtualised" OSes themselves take almost no overhead. We run heavy finance systems that service well over 1000 users from virtualised servers without fuss.

                          2. Say I have 5 physical machines. That's 5 physical pieces of hardware that can break. Still a risk. With Xen, I can build 2 physical machines and put 5 VMs on each. I can then set up a cheap system using Linux-HA where, if physical hardware fails, I can fail over to another physical machine. I've spent less money on hardware, and have complete redundancy of the entire setup.
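                          As a sketch of that failover with Xen 3's `xm` tool (the VM name and config path here are hypothetical, and a real Linux-HA setup would script this via heartbeat rather than by hand):

                          ```shell
                          # Planned move: live-migrate the running guest to the standby host.
                          xm migrate --live finance-vm standby-host

                          # Unplanned failure: the guest's disk lives on shared storage, so
                          # the standby host simply boots the same guest config.
                          xm create /etc/xen/finance-vm.cfg
                          ```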

                          3. Not sure what you mean here.

                          4. No more than running more than one service on one machine. Again, these are hypervisors, not traditional virtualised CPUs - each VM gets full access to a bare metal CPU, not a fake "virtual" CPU. You are essentially just partitioning up kernel time. It's actually a very secure way to deal with multiple systems running on one physical piece of hardware. A bit like BSD chroot jails, and other systems to the same effect.

                          5. I've yet to find a vendor that has said that to me. And quite frankly, it shows a lack of competency and quality on the vendor's behalf if they did. I demand a much higher level of intelligence from people I am potentially spending millions of dollars with.

                          6. Hypervisors are not the silver bullet for all situations, but they are very useful. And yes, they are fine for production (again, depending on what you need from them). I think you have a slightly out of date idea as to what they can offer. 5 years ago VMWare was a VERY different creature to what Xen is today.

                          From a management point of view, they are heavenly. I can assign memory and CPU needs to servers on the fly, I can use things like LVM and iSCSI (or proprietary SAN stuff if I want) to dynamically allocate disk on the fly to VMs (in realtime, no downtime needed), etc. And again, I can do really neat things like pause an entire VM, snapshot it, and resume it in mere seconds. How long do backups of physical machines take compared to hypervisor environment? I can tell you now, it's closer to hours, and not seconds.
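                          Some of that on-the-fly management, sketched with Xen 3 era commands (the VM, volume group and device names are hypothetical):

                          ```shell
                          # Give a busy guest more RAM, no reboot needed.
                          xm mem-set finance-vm 2048

                          # Carve a fresh LVM volume out of the pool and hot-attach it.
                          lvcreate -L 20G -n finance-data vg0
                          xm block-attach finance-vm phy:/dev/vg0/finance-data xvdb w

                          # Pause, snapshot to disk, and resume: seconds, not hours.
                          xm save finance-vm /var/xen/finance-vm.chk
                          xm restore /var/xen/finance-vm.chk
                          ```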

                          And again, fail-over is instant. A machine dies? Easy as - just fire up the snapshot backup VM on another machine. Everything is identical - same MAC, same IP, same everything. The user sees nothing but a 30 second network lag, and the IT team is free to take as long as they need to fix the problem.

                          No, I wouldn't use Xen on everything all the time. But it has its place, and it's worth the research. Check it out if you get the time. I think you'll be pleasantly surprised at what it can do compared to traditional virtual machines that are not hypervisors.


                          • #43
                            Just to clarify: I look after a large finance instance with over 2000 users on a (now old) hardware-partitioned HP GS1280, and this runs fine.

                            A new client runs VI from VMware emulation on five servers. The cool part is everything is RAIDed, so the loss of, say, a server only results in a slight pause to the application and then it just continues; the downside here is performance.

                            I'll have a read on hypervisors.

                            In terms of dependency, if you're not running virtualisation, it's just one less link in the chain.

                            Snapshotting can also be done on the SAN, but the changes still need to be logged until the backup is complete. We can't lose any data, so we cannot revert to a saved snapshot; we still need to roll forward.

                            Places I've worked, the software vendor takes any opportunity to not progress a difficult call. Hence, given the option of a standalone isolated environment, I'll always take it.


                            • #44
                              Originally posted by Mikie View Post
                              In terms of dependency, if you're not running virtualisation, it's just one less link in the chain.
                              No, not really. Again, you're more or less partitioning hardware. Don't think of it like a "software layer", because it really isn't.

                              You can take a Xen "virtual machine" image and load it straight onto bare metal disk, and it will boot like normal. You can't do that with the old style VMWare systems.
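                              That image-to-bare-metal move is just a raw copy, because a file-backed Xen guest image is an ordinary disk image (the device and file names here are hypothetical - and triple-check the target device before running anything like this):

                              ```shell
                              # Write the guest's disk image straight onto a physical disk.
                              dd if=/var/lib/xen/images/guest.img of=/dev/sdb bs=1M
                              ```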

                              But again, Xen (or any VM/hypervisor) is not a silver bullet. There are far more instances where I'd use real hardware than where I would use Xen.

                              It just so happens that for one of the places I work for at the moment (LARGE Aust-wide retail chain), Xen fixes A LOT of problems they've been having, and does so for far less money than throwing buckets of hardware at the problem.

                              Conversely another place I work for does a lot of special effects and computer graphics, and for them Xen is utterly pointless. They need raw power, and lots of it. So throwing lots of real CPUs at the problem is the only solution. With that said, their main network/file server is partitioned up into a few Xen machines for minor network services stuff (DNS and LDAP servers, etc). But it's not a core part of their actual money-making work load.


                              • #45
                                whoops hold on

                                So you mean paravirtualisation, taking advantage of the virtualisation routines available in the processor?
                                This is to offload the traps required by previous software VMs to the processor.

                                Still, if the intended goal from managers here is to maximise resources by sharing them in a multi-large-application environment, no thanks. If running the environment for one large app, sure, why not include it.

                                Ah, sorry elvis, I didn't read all of your last post.

                                Yes, appropriate use. I've just worked at a lot of places now where there's inappropriate use of certain technologies (and there's nothing wrong with the technology itself).

                                a great read, thank you
                                Last edited by Mikie; 12th April 2007, 06:23 PM. Reason: Automerged Doublepost

