[131019870010] |Straightforward system and file backup/restore for Linux?
[131019870020] |After using an Ubuntu virtual machine for a while, I think I am ready to get a real, physical machine and use Ubuntu on it.
[131019870030] |Since I am still relatively new to the Linux world, is there a reliable solution for making regular backups of files and settings for my system (maybe similar to Apple's Time Machine?) that I can rely upon when, e.g., my hard drive fails, or something bad happens?
[131019870040] |Thanks!
[131019880010] |You could give TimeVault (https://launchpad.net/timevault) or FlyBack (http://code.google.com/p/flyback/) a try.
[131019880020] |I am a Gentoo user myself, so I haven't really tried these, but they seem pretty straightforward.
[131019880030] |TimeVault does not seem to work correctly under Lucid, but only due to an incompatibility with Python 2.6.
[131019880040] |Let me know if this helps or if you need further information.
[131019880050] |Cheers!
[131019890010] |In addition to what wormintrude mentioned, you might consider:
[131019890020] |rdiff-backup: I've used this myself regularly in the past.
[131019890030] |The pros are that it does incremental backups, can be used for remote backups or local backups, and has a wide range of features that makes it easy to implement a wide range of backup policies.
[131019890040] |The downside is that it is basically just a collection of command-line utilities and thus requires you to write cron jobs to manage the backups (see the sketch after this list).
[131019890050] |Deja Dup: This is a GUI-centric solution that provides a lot of the same features as rdiff-backup but without the need for self-written cron jobs.
[131019890060] |It also supports encrypted backups and backups to Amazon's cloud service.
[131019890070] |rsnapshot: This is still a command-line based utility but uses conf files in /etc to reduce the amount of custom script writing that is necessary.
[131019890080] |As far as I know, all of these are available in the Ubuntu repositories.
[131019890090] |The most straightforward of these is Deja Dup, which is meant to be easy-to-use and integrate well with GNOME.
[131019890100] |One advantage of all three is that they make incremental backups.
[131019890110] |Thus you can have daily backups reaching fairly far back without taking up much more space than one full backup.
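For the rdiff-backup route mentioned above, a minimal crontab sketch might look like this (the paths, schedule and retention period are only illustrative):
    # back up /home to a local backup disk every night at 3:00
    0 3 * * * rdiff-backup /home /backups/home
    # afterwards, prune increments older than 30 days
    30 3 * * * rdiff-backup --remove-older-than 30D /backups/home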
[131019900010] |Sending audio through network
[131019900020] |This is a rather crazy idea.
[131019900030] |I am planning to set up a configuration that would send audio being played on my laptop to my home server via the local network, so that the sound is played on the server, which is connected to a hi-fi with good speakers.
[131019900040] |It's supposed to serve the purpose of watching movies on the laptop with sound on the speakers without rearranging cables.
[131019900050] |I suppose it can be done similarly to writing text to a file mounted via sftp, but with /dev/audio or /dev/mixer instead.
[131019900060] |But I have no idea how to intercept the audio output.
[131019900070] |Looking forward to tips from Pros ;)
[131019910010] |Your best bet is probably VLC/VLS, but expect some nasty problems with synchronization drift as it is hard to keep video playing here in lockstep with audio data playing there.
[131019920010] |Use MPD on your laptop to stream the music to your computer at home.
[131019920020] |I suggest, however, that you run MPD on the computer at home, and just connect to MPD with your client from your laptop (I suggest GMPC).
[131019920030] |It is how I listen to music all the time, I have just one computer with music on it, and clients on my laptop and other computers.
[131019920040] |MPD can stream the music through network, so you can feed that to a server or directly play it with mplayer.
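As a rough sketch (host name and port are placeholders), the httpd output plugin in mpd.conf lets any player pick up the stream:
    # hypothetical mpd.conf excerpt on the machine holding the music
    audio_output {
        type    "httpd"
        name    "MPD HTTP stream"
        encoder "vorbis"
        port    "8000"
    }
and on the machine with the speakers, something like:
    mplayer http://laptop:8000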
[131019930010] |What you're looking for is a sound server.
[131019930020] |These programs send sound over the network.
[131019930030] |Choices include JACK, NAS, Pulseaudio and more.
[131019930040] |Pulseaudio is the default audio system on Ubuntu and is widely available on Unix.
[131019930050] |JACK is widely available on desktop operating systems and prides itself on its low latency.
[131019930060] |I'd try these two first.
[131019930070] |To play music on a different computer, any of these programs would do as long as you manage to install the same program on both machines (they use incompatible protocols, though some have translation modules).
[131019930080] |But when playing movies, you may have trouble because forwarding sound over the network introduces perceptible latency.
[131019930090] |Some movie players allow you to fine-tune the alignment between audio and video, so you may need to play with this setting.
[131019940010] |As Gilles said, you're best off looking into updating your sound server config.
[131019940020] |You can use pulseaudio to listen to a TCP port.
[131019940030] |Be sure to check padevchooser, which is a GUI frontend to update your config.
[131019940040] |You can find a tutorial on http://wiki.archlinux.org/index.php/PulseAudio#PulseAudio_over_network
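For a rough idea of what that tutorial sets up (the host name and subnet below are placeholders), the TCP module can also be loaded at runtime:
    # on the machine attached to the hi-fi: accept PulseAudio clients from the LAN
    pactl load-module module-native-protocol-tcp auth-ip-acl=192.168.1.0/24
    # on the laptop: point applications at that PulseAudio server
    PULSE_SERVER=hifi-server mplayer movie.avi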
[131019950010] |MPD allows for streaming audio over HTTP; it's just not set up by default.
[131019950020] |The limitation is that MPD streams from your MPD audio library, whereas a sound server will allow you to stream any audio on the computer.
[131019960010] |How was the shift to 64 bits handled on Linux
[131019960020] |How was the transition to 64 bits handled on Linux/Unix?
[131019960030] |The Windows world still seems to have issues with it and I'm curious how it was handled in the *nix world.
[131019970010] |The work required to make the kernel 64-bit was done a looooong time ago using DEC Alpha systems.
[131019970020] |Programs, however, are a different matter.
[131019970030] |The general consensus that I've seen so far seems to be:
[131019970040] |Separate /lib and /lib64 directories for systems that have mixed binaries
[131019970050] |Compile as 64-bit; if compilation fails, recompile as 32-bit until the source can be cleared for 64-bit.
[131019970060] |Other than that, you're really not going to see a whole lot of "grief" from mixed 32/64 bit builds.
[131019980010] |Windows and *ix used different data models for the transition.
[131019980020] |This UNIX.org page is a bit old, but it still provides a good overview of the trade-offs (note that long long was later added to C99, and was required to be at least 64-bit).
[131019980030] |You can also see a Wikipedia article on the same topic.
[131019980040] |As advocated at the end of the UNIX.org article, most UNIX-like systems have gone with LP64, which means long, long long, and pointers are all 64-bit.
[131019980050] |Windows went with what's called the LLP64 data model, which means that only long long and pointers are 64-bit; long remains 32-bit.
[131019980060] |Part of the reason was simply that they didn't want to go through and fix broken code that assumed long fit in an int.
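If you want to check which model your own system uses, a quick shell probe (not part of the answer above, just a convenience) is:
    getconf LONG_BIT    # prints 64 on an LP64 Linux system, 32 on a 32-bit one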
[131019990010] |As Linux distros are mostly open source, the transition is largely already done.
[131019990020] |Unless you use proprietary software (such as Skype) you can run a pure 64-bit system without any disadvantages.
[131019990030] |However, the real difference IMHO is more proprietary vs. open than Unix vs. Windows, as it is usually the open-source software that is ported first (some volunteer needs to recompile something and maybe fix some compilation issues) -- or in most cases not ported at all but just recompiled ;) -- and proprietary software that is ported last.
[131019990040] |Additionally, on Linux you have repos, so the installation is handled automagically: you don't need to choose between the 64-bit and 32-bit version (the system chooses for you automatically).
[131019990050] |On Windows, programs are downloaded manually, and having separate 64-bit and 32-bit versions:
[131019990060] |doubles the size of files on the server;
[131019990070] |requires the user to know his/her version,
[131019990080] |or even that the versions differ at all.
[131019990090] |I guess that's the reason why Windows binaries are usually 32-bit: it is one-size-fits-all and not everyone has gone to the 64-bit version.
[131020000010] |In a narrow sense, Linux is a Unix-like kernel started by Linus Torvalds in 1991.
[131020000020] |In common usage, Linux is an operating system built around the Linux kernel.
[131020000030] |This operating system includes many other components, notably from the GNU project, so the term GNU/Linux is sometimes used.
[131020000040] |There are many different Linux distributions (see distros), which include more or less the same programs but with different installers and other customizations.
[131020000050] |The Linux kernel is used in some non-unix-like embedded operating systems, the best known being Android.
[131020000060] |Further reading
[131020000070] |Is Linux a Unix?
[131020000080] |Linux Cell Phones?
[131020010010] |Linux is the family of Unix-like operating systems that use the Linux kernel
[131020020010] |Unix in general is the topic of this site.
[131020020020] |Do not use this tag unless your question is about the historical UNIX product from AT&T. If your question is about a particular unix variant (e.g. linux, freebsd, solaris, …) or a particular application, use the corresponding tag.
[131020020030] |Otherwise stick to one or more tags reflecting what you are trying to do.
[131020020040] |Unix is an operating system that was initially developed at AT&T Bell Labs as a simpler Multics.
[131020020050] |Since then, there have been many operating systems based on Unix, at least reproducing the interfaces if not the design and sometimes the code.
[131020020060] |Legal status
[131020020070] |The original Unix code is proprietary software, originally owned by AT&T and licensed by several companies.
[131020020080] |This spurred several groups into developing independent code bases with similar design.
[131020020090] |The best known are bsd and linux.
[131020020100] |UNIX® is a registered trade mark owned by The Open Group.
[131020020110] |Only certified products may use the brand.
[131020020120] |In informal usage, “unix” or “Unix” or “Un*x” or “*nix” can mean any Unix-like system, whether it is derived from the original code or not, and whether it has the brand or not.
[131020020130] |Standards
[131020020140] |The POSIX and Single UNIX standards codify many interfaces for programmers and command-line users.
[131020020150] |The Single UNIX® Specification, Version 2 (a superset of the original POSIX, which is IEEE 1003 and ISO/IEC 9945)
[131020020160] |The Open Group Base Specifications Issue 6, which is also the Single UNIX Specification version 3 and POSIX:2001
[131020020170] |The Open Group Base Specifications Issue 7, which is also the Single UNIX Specification version 4 and POSIX:2008
[131020020180] |System administration is not so standardized and varies greatly from one variant to the next.
[131020020190] |(See the Rosetta Stone for Unix for an overview.)
[131020020200] |Unix variants
[131020020210] |Derived from the original Unix code:
[131020020220] |aix AIX (IBM)
[131020020230] |solaris, opensolaris Solaris (historically Sun, now Oracle)
[131020020240] |sco SCO OpenServer
[131020020250] |hp-ux HP-UX (Hewlett-Packard)
[131020020260] |Derived from BSD:
[131020020270] |darwin Darwin, the core of Mac OS X
[131020020280] |freebsd FreeBSD
[131020020290] |netbsd NetBSD
[131020020300] |openbsd OpenBSD
[131020020310] |Others:
[131020020320] |linux Linux, with many distributions
[131020020330] |minix Minix
[131020020340] |POSIX emulation layers on non-unix operating systems:
[131020020350] |windows
[131020020360] |cygwin
[131020020370] |(And many more that have not (yet) been discussed on this site.)
[131020020380] |Further reading
[131020020390] |Evolution of Operating systems from Unix
[131020020400] |Why is there a * When There is Mention of Unix Throughout the Internet?
[131020020410] |Is Linux a Unix?
[131020020420] |Is Mac OS X, UNIX?
[131020030010] |Unix is the topic of this site.
[131020030020] |Do not use this tag except for questions about the historical UNIX product
[131020040010] |How linux handles multiple path separators (/home////username///file)
[131020040020] |I'm working on a python script that passes file locations to an scp subprocess.
[131020040030] |That's all fine, but I'm in a situation where I may end up concatenating a path with a filename such that there's a double '/' in the path.
[131020040040] |I know that bash doesn't care if you have multiple file separators, but I'm wondering how exactly that is rectified.
[131020040050] |Is it bash that strips the extra /s, or does it really never matter?
[131020040060] |I ask because it will save me several lines of code to check for extra /s while concatenating.
[131020040070] |I know it's not a big deal, but I'm curious as well.
[131020040080] |I have a bash script that has the line cd //usr (instead of cd /usr), which seems to imply there might be a significance to using multiple /s in a path.
[131020050010] |The OS doesn't appear to care about it either; I just tried a C program making a direct open syscall with a // in the path.
[131020050020] |You can use the python library function os.path.normpath to normalize it though, which saves you having to scan through the string looking for extras.
[131020050030] |Other languages have similar functions.
[131020050040] |http://docs.python.org/library/os.path.html#os.path.normpath
[131020060010] |Multiple slashes are allowed and are equivalent to a single slash.
[131020060020] |From the Single Unix specification (version 3), base definitions §3.266 pathname: “Multiple successive slashes are considered to be the same as one slash.”
[131020060030] |There is one exception: if a pathname starts with exactly two slashes, it may be treated differently (ref: base definitions §4.11 pathname resolution).
[131020060040] |Linux itself doesn't do this, though some applications might, and other unix-ish system do (e.g. Cygwin).
[131020060050] |A trailing / at the end of a pathname forces the pathname to refer to a directory (base definitions §4.11 pathname resolution makes it equivalent to a trailing /.).
[131020060060] |For programs that act on a directory entry, if foo is a symbolic link to a directory, then passing foo/ is a way to make the program act on the directory instead of the symbolic link.
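A small sketch of that last point (directory and link names are made up):
    mkdir realdir
    ln -s realdir foo
    ls -ld foo     # shows the symbolic link itself
    ls -ld foo/    # shows the directory the link points to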
[131020070010] |Use os.path.join in Python and you won't get multiple slashes.
[131020070020] |Building up filenames yourself by concatenating strings is considered poor Python style.
[131020080010] |Of course you can normalize a path with possibly multiple / (slashes) in it by passing it through tr -s '/'
[131020080020] |... and then use "$NORMALIZE"
[131020080030] |However, it shouldn't be necessary.
[131020080040] |As far as I know, any proper UNIX kernel should ignore consecutive path separators --- or conceptually treat them as ..././...
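A sketch of what that looks like in a script (the variable name follows the answer above; the sample path is made up):
    path='/home////username///file'
    NORMALIZE=$(printf '%s' "$path" | tr -s /)
    echo "$NORMALIZE"    # /home/username/file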
[131020090010] |There is no difference.
[131020090020] |Multiple slashes get ignored (without effect), e.g.:
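For example (any existing path on your system will do):
    ls -ld /usr/bin /usr////bin    # both arguments refer to the same directory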
[131020100010] |On all Unix systems that I've seen it's the same as a single /, but the Unix standard specifies that
[131020100020] |A pathname that begins with two successive slashes may be interpreted in an implementation-defined manner, although more than two leading slashes shall be treated as a single slash.
[131020100030] |so it may be handled specially, depending on your system.
[131020100040] |(Some older Unix versions used a double leading / for remote filesystem access, and there may still be some that do.)
[131020110010] |What is the loopback interface
[131020110020] |What is the loopback interface and how does it differ from the eth0 interface?
[131020110030] |And why do I need to use it when mounting an ISO or running a service on localhost?
[131020120010] |The loopback networking interface is a virtual network device implemented entirely in software.
[131020120020] |All traffic sent to it "loops back" and just targets services on your local machine.
[131020120030] |eth0 tends to be the name of the first hardware network device (on linux, at least), and will send network traffic to remote machines.
[131020120040] |You might see it as en0, ent0, et0, or various other names depending on which OS you're using at the time.
[131020120050] |(It could also be a virtual device, but that's another topic)
[131020120060] |The loopback option used when mounting an ISO image has nothing to do with the networking interface, it just means that the mount command has to first associate the file with a device node (/dev/loopback or something with a similar name) before mounting it to the target directory.
[131020120070] |It "loops back" reads (and writes, if supported) to a file on an existing mount, instead of using a device directly.
[131020130010] |wireless networking
[131020130020] |When I connect my laptop to a WPA2-secured router using wicd, I don't have any connection problems.
[131020130030] |BUT when I connect with the following command-line command (as outlined here):
[131020130040] |UPDATED:
[131020130050] |My connection is unstable, sometimes working sometimes not, indicated by the following output:
[131020130060] |My /etc/wpa_supplicant.conf file:
[131020130070] |dhcpcd wlan0
[131020130080] |ifconfig
[131020130090] |route
[131020140010] |It seems to me that the wireless signal is low, but that doesn't explain why wicd works.
[131020150010] |You want a sleep 10s before the dhcpcd part, at the least.
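A sketch of what the command sequence might look like with that pause added (interface name and config path are the ones from the question):
    wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
    sleep 10s
    dhcpcd wlan0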
[131020160010] |How to flash firmware under Linux in practice?
[131020160020] |Well, I am feeling too old for jumping through several burning hoops to upgrade several firmwares via the usual vendor-specific way: download some DOS tools, waste some time creating a (Free-)DOS boot medium, waste more time making the BIOS actually boot from that, and finally flash the firmware upgrade.
[131020160030] |This is so 1980s.
[131020160040] |I came across a Linux flash tool from the Coreboot project.
[131020160050] |It seems to support various FLASH-chips.
[131020160060] |But how does it work in practice?
[131020160070] |I guess there are some pitfalls converting vendor supplied firmware upgrades into the right format.
[131020160080] |Or what about identifying the right destination chip?
[131020160090] |Currently I probably have to upgrade for example:
[131020160100] |the firmware of some Seagate 1.5 TB disks
[131020160110] |the firmware of an old Abit Athlon 64 board (Award bios)
[131020160120] |Bios/Embedded-Controller-Bios of a Thinkpad
[131020160130] |How do you upgrade your devices' firmware on a Linux system?
[131020170010] |Every device with upgradeable firmware is probably going to have its own methods for doing that.
[131020170020] |Motherboards in particular are notoriously incompatible in this regard.
[131020170030] |As to hard drives, again, this is a proprietary matter.
[131020170040] |Seagate provides liveCDs and Windows downloads to perform firmware updates, but not Linux or Unix tools.
[131020170050] |You can build bootable images for Thinkpad BIOS updates that can be booted from GRUB.
[131020170060] |Otherwise, you're just going to have to check with the manufacturer for tools.
[131020170070] |On the other hand, if you're working with microcontrollers, you can often program them with fairly universal tools, though still on a limited basis (e.g., Atmel chips can usually be programmed with avrdude).
[131020180010] |My small experience is that I used Flashrom to update my Intel Motherboard BIOS and it worked fine.
[131020180020] |In general it seems like a really nice tool.
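A cautious flashrom session might look like this (file names are placeholders; recent versions want the -p programmer option, here the internal/onboard programmer):
    flashrom -p internal -r backup.rom      # read and save the current image first
    flashrom -p internal -w new_bios.rom    # write the vendor-supplied image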
[131020190010] |Using a DOS upgrade floppy image booted with GRUB, as mentioned before, works for the majority of hardware.
[131020190020] |In some cases you can find native tools.
[131020190030] |Dell even prepares repositories which integrate with the distro packaging system:
[131020190040] |http://linux.dell.com/wiki/index.php/Repository/firmware
[131020190050] |Sadly, most updates require a machine reboot to complete.
[131020200010] |External tablet input devices for Linux (Inkscape/Gimp)?
[131020200020] |What are good external tablet input devices for Linux?
[131020200030] |It should be used for more convenient use of Inkscape and Gimp.
[131020200040] |Some characteristics that are useful, I guess:
[131020200050] |connected via USB
[131020200060] |included pen with some buttons
[131020200070] |tablet should only be sensitive to pen touches
[131020200080] |Open-source drivers (a must)
[131020200090] |some OS engagement by the vendor
[131020200100] |Open questions:
[131020200110] |What are good sizes of such tablets in practice?
[131020200120] |Is there some good guide on how to set it up under Linux/X?
[131020200130] |What are other great programs that are really easier to use with a tablet?
[131020210010] |Friends of mine and I have had some good experiences with the tablets from Wacom.
[131020210020] |The Bamboo series contains different tablets in different pricing categories.
[131020210030] |My Bamboo, for example, is connected via USB, the pen has 2 buttons, the tablet is only sensitive to the pen, has some more buttons, and works with my Linux out of the box.
[131020210040] |So this should satisfy your needs.
[131020210050] |Wacom supports Windows, Mac OS X and Linux without any problems as far as I know.
[131020210060] |They link to the Linux Wacom Project on their official homepage for driver support.
[131020210070] |After a little configuration of the input devices, it works pressure-sensitively with Gimp.
[131020210080] |For advanced configuration of all tablet buttons and touch-sensitive areas there's the Wacom ExpressKeys project, which also works fine under the different distributions.
[131020210090] |To your questions:
[131020210100] |What are good sizes of such tablets in practice?
[131020210110] |This totally depends on your usage of the tablet.
[131020210120] |Are you just using it as an addition to your mouse?
[131020210130] |Are you gonna start some kind of digital painting? etc.
[131020210140] |A common size for the "drawing" area of those tablets is ~ 5.8" x 3.6".
[131020210150] |This should work fine for the average usage.
[131020210160] |More important than the size are, IMHO, the resolution and pressure levels the tablet supports, because these will influence your work.
[131020210170] |Keep this in mind when you are comparing tablets.
[131020210180] |Is there some good guide on how to set it up under Linux/X?
[131020210190] |The Linux Wacom Project maintains a nice Howto to that topic.
[131020210200] |Also there are several guides based more or less on the distribution used, e.g. Arch and Ubuntu.
[131020210210] |What are other great programs that are really easier to use with a tablet?
[131020210220] |I often use my tablet also for audio processing.
[131020210230] |The editing of different audio tracks with a pen feels much more natural for me.
[131020220010] |Does resolution really matter?
[131020220020] |Can you physically distinguish between resolutions greater than 1000 lpi?
[131020220030] |Does any tablet ever come with less than 1000 lpi?
[131020230010] |Start X as a user other than root
[131020230020] |I know some distros (Moblin?) have X starting as a user other than root already... What is required to do this? What steps need to be taken?
[131020230030] |I don't think it matters, but I think X is started by KDM on my system, and I'm running Arch Linux.
[131020240010] |The first step that needs to be taken is to make sure that you have a card that supports kernel-mode-setting.
[131020240020] |If you don't you will likely still have to run X as root.
[131020240030] |Ubuntu is looking into doing this and thus has a small set of directions here: https://wiki.ubuntu.com/X/Rootless which I think should work as a good starting place for most major distros.
[131020250010] |Create services in Linux (Start up in linux)
[131020250020] |I need a process to run before logging in to the system. How do I run it as a service? (How do I make services in Linux?)
[131020250030] |In Ubuntu and Fedora? The service is a customized Tomcat.
[131020260010] |If you have a cron daemon, one of the predefined cron time hooks is @reboot, which naturally runs when the system starts.
[131020260020] |Run crontab -e to edit your crontab file, and add a line:
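For example (the Tomcat path here is hypothetical; adjust it to your install):
    @reboot /opt/tomcat/bin/startup.sh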
[131020270010] |To run a service without or before logging in to the system (i.e. "on boot"), you will need to create a startup script and add it to the boot sequence.
[131020270020] |There are three parts to a service script: start, stop and restart.
[131020270030] |The basic structure of a service script is:
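A minimal sketch of such a script, with the Tomcat paths as placeholders:
    #!/bin/sh
    # /etc/init.d/custom-tomcat -- minimal service script skeleton
    case "$1" in
      start)
        /opt/tomcat/bin/startup.sh
        ;;
      stop)
        /opt/tomcat/bin/shutdown.sh
        ;;
      restart)
        "$0" stop
        "$0" start
        ;;
      *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
    esac
    exit 0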
[131020270040] |Once you have tweaked the script to your liking, just place it in /etc/init.d/ and add it to the system service startup process (on Fedora; I am not an Ubuntu user, >D):
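On Fedora that is typically done with chkconfig (the service name here is just the example from the sketch above):
    chmod +x /etc/init.d/custom-tomcat
    chkconfig --add custom-tomcat
    chkconfig custom-tomcat on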
[131020270050] |The service will be added to the system boot-up process and you will not have to manually start it up again.
[131020270060] |Cheers!
[131020280010] |Tomcat is a fairly common service, I'd recommend looking at the init script provided by the distro already.
[131020280020] |Chances are it works with your customized binary, with little to no tweaking.
[131020290010] |Depending on the init system, you create the init script differently.
[131020290020] |Fedora gives you upstart and systemd to choose from, and of course SysV compatibility.
[131020290030] |In case you select Upstart:
[131020290040] |Create a service definition file as /etc/init/custom-tomcat.conf
[131020290050] |put inside:
[131020290060] |start on stopped rc RUNLEVEL=3
[131020290070] |respawn
[131020290080] |exec /path/to/you/tomcat --and --parameters
[131020290090] |And your Tomcat should start on system boot.
[131020290100] |If you choose to create a systemd job, do the following:
[131020290110] |Create a service definition in /etc/systemd/system/custom-tomcat.service
[131020290120] |put inside:
[131020290130] |[Service]
[131020290140] |ExecStart=/path/to/you/tomcat --and --parameters
[131020290150] |Restart=always
[131020290160] |[Install]
[131020290170] |WantedBy=multi-user.target
[131020290180] |and enable your service using “systemctl enable custom-tomcat.service”.
[131020290190] |It will be started every normal boot.
[131020290200] |Of course there are few more configuration options for both init systems, you can check those in their documentation.
[131020300010] |For simply running a script after the computer started but before a user logs in, you can simply edit the script /etc/rc.local which is meant to solve exactly this task.
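For instance, assuming a hypothetical Tomcat path, add a line like this before the final exit 0 in /etc/rc.local:
    /opt/tomcat/bin/startup.sh &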
[131020310010] |How to set up local mail retrieval and delivery?
[131020310020] |I am able to send mails using mail.
[131020310030] |I have unread mails in my inbox as I can see in Outlook.
[131020310040] |Why doesn't mail show me my mails?
[131020310050] |How can I make mail fetch them?
[131020320010] |Traditionally, unix mail is delivered right to your machine (because if your email address is lazer@machine.example.com, surely you have a shell account on machine.example.com).
[131020320020] |It is usually delivered in a file called /var/mail/lazer or /var/spool/mail/lazer, though a mail delivery agent may put it somewhere else.
[131020320030] |This still happens on unix mail servers, but nowadays most users don't have direct access to mail servers.
[131020320040] |Local mail (e.g. from cron jobs) is normally delivered in this way.
[131020320050] |Nowadays, typically, the mail is delivered on a server somewhere, and your only access to this server is a mail retrieval protocol, typically POP or IMAP.
[131020320060] |Microsoft has a proprietary protocol to talk to its mail server (Exchange), and accessing Exchange with anything but Outlook can be difficult (Exchange has optional modules for POP and IMAP, but they're not always enabled).
[131020320070] |Most ISPs and mail providers offer both POP and IMAP access; in an all-Microsoft corporate environment you might be stuck with Exchange.
[131020320080] |To read your mail under unix, you have three choices:
[131020320090] |Arrange for the mail server to forward the mail to your computer.
[131020320100] |This is reasonable only if your computer is always on and connected to the Internet: you take responsibility for any failure, and must set things up properly to handle bounces, spam attempts, virus attacks, etc.
[131020320110] |Then your mail will arrive in the traditional unix way.
[131020320120] |Fetch your mail from the server at regular intervals.
[131020320130] |The usual tool for this is fetchmail (a minimal configuration sketch follows at the end of this list).
[131020320140] |It queries a POP or IMAP server and delivers the mail either using the normal system delivery mechanism or directly to a file of your choice.
[131020320150] |What protocol to use, what server to query, what username and password to pass, and so on will be found in your Outlook settings.
[131020320160] |Depending on how you configure fetchmail, a copy of the downloaded mails may or may not remain on the server.
[131020320170] |Make your mail client itself retrieve the mail from the server using POP or IMAP.
[131020320180] |Most unix mail clients that are more advanced than the ancient /bin/mail can do this.
[131020320190] |Again, the parameters to access the server will be found in your Outlook settings.
[131020320200] |Outside the unix world (e.g. with Outlook) this is typically the only possible mode of operation.
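For option 2 above, a minimal ~/.fetchmailrc sketch might look like this (server, protocol, user name and password are placeholders you take from your Outlook settings):
    cat > ~/.fetchmailrc <<'EOF'
    poll mail.example.com protocol imap
      user "lazer" password "secret"
      keep    # leave a copy of each message on the server
    EOF
    chmod 600 ~/.fetchmailrc    # fetchmail refuses a world-readable rc file
    fetchmail -v                # fetch once, verbosely, to test the setup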
[131020330010] |Extending history search in zsh
[131020330020] |How to extend Ctrl+R search in zsh?
[131020330030] |It cannot find an entry even though it is in the history file.
[131020330040] |Edit: My .zsh:
[131020330050] |cave resolve -c should find blowprio cave resolve -c world -C a -R w --resume-file resume-world, which is in .zsh/history (at least grep says so), but it finds no match.
[131020340010] |I don't think it is possible to "extend" this search.
[131020340020] |In a default configuration Ctrl+R is mapped to a builtin function of the ZSH Line Editor (zle): history-incremental-search-backward
[131020340030] |See History Control in the ZSH Manpage.
[131020340040] |It seems that there are no possibilities to "extend" this function.
[131020340050] |What is the entry which is not found?
[131020340060] |Any examples?
[131020350010] |You've set SAVEHIST=10000, but you left HISTSIZE at its default value of 30.
[131020350020] |That means any session will keep at most 30 entries in memory.
[131020350030] |Due to the append_history option, the history file can contain more history than is kept in memory.
[131020350040] |If the entry you're searching for is not in memory, it won't be found.
[131020350050] |Easy fix: set HISTSIZE to be larger.
[131020350060] |Most of the time SAVEHIST and HISTSIZE should be the same value.
[131020350070] |If you're extremely short of memory, I suppose it would make sense to keep fewer entries in memory and to load them only when you search for them.
[131020350080] |But that sounds like a lot of coding effort for a rather small benefit (10000 entries would be something like a megabyte, which is large for a shell instance but not out of the question).
[131020350090] |You would get better mileage out of your history entries with the hist_ignore_all_dups option (instead of hist_find_no_dups).
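A sketch of the relevant ~/.zshrc lines after those changes:
    HISTSIZE=10000
    SAVEHIST=10000
    setopt append_history hist_ignore_all_dups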
[131020360010] |Zsh is a shell with many advanced command-line and scripting features
[131020370010] |Mail is an intelligent mail processing system, which has a command syntax reminiscent of ed with lines replaced by messages.
[131020380010] |The mail command - send and receive mail
[131020390010] |How to find image files by content
[131020390020] |I have a list of files and I need to find all the image-files from that list.
[131020390030] |For example, if my list contained the following:
[131020390040] |Then I would like only to select:
[131020390050] |Notes:
[131020390060] |Method must not be dependent on file extensions
[131020390070] |Obscure image formats for Photoshop and Gimp can be ignored. (If feh can't show it, it's not an image.)
[131020400010] |If file detects an image, it should print a line like:
[131020400020] |It works on magic numbers, so it is not based on extensions.
[131020400030] |It
[131020410010] |The following command lists the lines in list_file that contain the name of an image file:
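A sketch of such a pipeline, matching the explanation that follows:
    xargs -d '\n' file -i < list_file | sed -n 's/: image\/.*//p'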
[131020410020] |file -i FOO looks at the first few bytes of FOO to determine its format and prints a line like FOO: image/jpeg (-i means to show a MIME type; it's specific to GNU file as found on Linux).
[131020410030] |xargs -d \\n reads a list of files (one per line) from standard input and applies the subsequent command to it.
[131020410040] |(This requires GNU xargs as found on Linux; on other systems, leave out -d \\n, but then the file list can't contain \'" or whitespace).
[131020410050] |The sed command filters out the : image/FOO suffix so as to just display the file names.
[131020410060] |It ignores lines that don't correspond to image files.
[131020420010] |In addition to the file command, you can also use ImageMagick.
[131020420020] |The following will show the type of all files in the current directory:
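For example (identify will complain about non-image files, which you can discard):
    identify * 2>/dev/null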
[131020420030] |The identify command will print out something like this for various file types:
[131020420040] |Animated GIF files will print more information (this is a 21-frame GIF):
[131020420050] |You can then use awk or similar tools to decide what to do with them.
[131020430010] |If you have Python and python-magic.
[131020430020] |E.g.:
[131020440010] |man 1 file
[131020440020] |This manual page documents version 5.04 of the file command.
[131020440030] |file tests each argument in an attempt to classify it.
[131020440040] |There are three sets of tests, performed in this order: filesystem tests, magic tests, and language tests.
[131020440050] |The first test that succeeds causes the file type to be printed.
[131020440060] |The type printed will usually contain one of the words text (the file contains only printing characters and a few common control characters and is probably safe to read on an ASCII terminal), executable (the file contains the result of compiling a program in a form understandable to some UNIX kernel or another), or data meaning anything else (data is usually ‘binary’ or non-printable).
[131020440070] |Exceptions are well-known file formats (core files, tar archives) that are known to contain binary data.
[131020440080] |When modifying magic files or the program itself, make sure to preserve these keywords.
[131020440090] |Users depend on knowing that all the readable files in a directory have the word ‘text’ printed.
[131020440100] |Don't do as Berkeley did and change ‘shell commands text’ to ‘shell script’.
[131020450010] |file - determine file type.
[131020460010] |Xorg is a full featured X server that was originally designed for UNIX and UNIX-like operating systems running on Intel x86 hardware.
[131020460020] |It now runs on a wider range of hardware and OS platforms.
[131020460030] |This work was derived by the X.Org Foundation from the XFree86 Project's XFree86 4.4rc2 release.
[131020460040] |The XFree86 release was originally derived from X386 1.2 by Thomas Roell which was contributed to X11R5 by Snitily Graphics Consulting Service.
[131020460050] |Xorg operates under a wide range of operating systems and hardware platforms.
[131020460060] |The Intel x86 (IA32) architecture is the most widely supported hardware platform.
[131020460070] |Other hardware platforms include Compaq Alpha, Intel IA64, AMD64, SPARC and PowerPC.
[131020460080] |The most widely supported operating systems are the free/OpenSource UNIX-like systems such as Linux, FreeBSD, NetBSD, OpenBSD, and Solaris.
[131020460090] |Commercial UNIX operating systems such as UnixWare are also supported.
[131020460100] |Other supported operating systems include GNU Hurd.
[131020460110] |Darwin and Mac OS X are supported with the XDarwin(1) X server.
[131020460120] |Win32/Cygwin is supported with the XWin(1) X server.
[131020470010] |Xorg is a full featured X server that was originally designed for UNIX and UNIX-like operating systems running on Intel x86 hardware.
[131020470020] |It now runs on a wider range of hardware and OS platforms
[131020480010] |Bash (the Bourne again shell) is a unix shell.
[131020480020] |It was intended as a free replacement to the Bourne shell and includes many scripting features from ksh.
[131020480030] |Bash is intended to conform to the POSIX 1003.2 standard.
[131020480040] |Bash also includes more advanced interactive features such as command line editing with the readline library, command history, job control, dynamic prompts and completion.
[131020480050] |Links and documentation
[131020480060] |Bash GNU project web page
[131020480070] |Bash maintainer's web page
[131020480080] |Related tags
[131020480090] |shell Many shell-agnostic questions are of interest to bash users.
[131020480100] |wildcards (or globbing): matching files based on their name
[131020480110] |command-history a history of commands that can be navigated with the Up and Down keys, searched, etc.; also a recall mechanism based on expanding sequences beginning with !.
[131020480120] |autocomplete completion of partially-entered file names, command names, options and other arguments.
[131020480130] |prompt showing a prompt before each command, which many users like to configure.
[131020480140] |Further reading
[131020480150] |What features are in zsh and missing from bash, or vice versa?
[131020480160] |Strange change directory
[131020480170] |understanding the exclamation mark (!) in bash
[131020480180] |Better bash history
[131020480190] |Bash autocomplete in ssh session
[131020480200] |How do I clear Bash's cache of paths to executables?
[131020480210] |Command Line Completion From History
[131020490010] |Bash is the shell from the GNU project.
[131020490020] |It is the standard shell on Linux and is often available on other unices.
[131020510010] |find - search for files in a directory hierarchy
[131020520010] |Archlinux
[131020520020] |Website: http://www.archlinux.org/
[131020520030] |Philosophy:
[131020520040] |The Arch Way
[131020520050] |The Arch Way (v2.0)
[131020520060] |History:
[131020520070] |Arch Linux was founded by Canadian programmer, Judd Vinet.
[131020520080] |Its first formal release, Arch Linux 0.1, was on March 11, 2002.
[131020520090] |Although Arch is completely independent, it draws inspiration from the simplicity of other distributions including Slackware, CRUX and BSD.
[131020520100] |In 2007, Judd Vinet stepped down as Project Lead to pursue other interests and was replaced by Aaron Griffin who continues to lead the project today.
[131020520110] |Technical Details:
[131020520120] |Package Manager: Pacman
[131020520130] |Release Cycle: Rolling Release
[131020520140] |Related Projects
[131020520150] |Documentation:
[131020520160] |Official Installation Guide
[131020520170] |Forums
[131020520180] |Wiki
[131020520190] |Community:
[131020520200] |Mailing Lists
[131020520210] |IRC Channels
[131020520220] |Common Tasks:
[131020520230] |Refresh the package list:
[131020520240] |# pacman -Sy
[131020520250] |Install or Update a package:
[131020520260] |# pacman -S package_name
[131020520270] |Update all packages on the system
[131020520280] |# pacman -Su
[131020530010] |a Linux distribution that is aimed at keeping things lightweight and simple
[131020540010] |Revo 3610 not doing hdmi handshake
[131020540020] |I am having a problem with my Revo 3610, which is connected to my TV via HDMI.
[131020540030] |For some reason it will not do the HDMI handshake with the TV, so my TV does not think that there is anything in the HDMI port.
[131020540040] |I have tested the TV and it works fine with my laptop and DVD player.
[131020540050] |It does work sometimes, but this time it has failed for 2 days in a row, and I have tried rebooting, turning the TV off and on, and so on; nothing helps.
[131020540060] |I can trick the TV into listening to the HDMI port by connecting my laptop and then switching the HDMI input back to my Revo; this results in the image coming through nicely, but there is a big fat "Check signal cable." message on the screen.
[131020540070] |I have also tried changing the resolution on the Revo, but this does not help either.
[131020540080] |Has anyone had this problem before, and if so, how did you fix it?
[131020550010] |Differences between VNC and ssh -X
[131020550020] |Why would you use VNC (or for that matter NX) instead of just using ssh -X (-Y)?
[131020550030] |I read that VNC uses less bandwidth, but are there any other differences/advantages of the respective tools?
[131020560010] |ssh -X redirects X11 commands to your local X server.
[131020560020] |So it is as if you were running the program locally, when it's really running on the computer at the other end.
[131020560030] |It's very slow because it uses a great deal of bandwidth.
[131020560040] |(This is what people are talking about when they say X11 is "network transparent.")
[131020560050] |VNC and other remote desktop apps instead let the other computer process all of the graphics drawing and so forth and captures, in essence, a screenshot and sends that back to your computer.
[131020560060] |It can seem much faster, because far less information is required to display everything.
[131020560070] |However, it also sends the whole desktop, rather than a single application.
[131020560080] |I don't recommend using ssh -X over the Internet for one simple reason: it will use all of your available bandwidth.
[131020560090] |It's fairly useful over a LAN, in my opinion, so if you just need one application and don't want to have to run a whole desktop, this is a good way to go.
[131020560100] |Otherwise, just use VNC.
[131020570010] |VNC will share an entire desktop from a remote system.
[131020570020] |It requires a full-fledged desktop on the remote system.
[131020570030] |ssh -X allows you to run single X application from a remote server.
[131020570040] |The remote system does not need to be running a complete desktop, and you often only need a handful of packages to be installed on the remote system.
[131020570050] |ssh -X can be useful when installing complex software packages over a remote connection.
[131020570060] |Some software products may use a GUI installer (Oracle Database, etc).
[131020570070] |I don't want to install a full-fledged GNOME desktop on my remote server.
[131020570080] |So, you install one or two X11 packages (Xauth?) on the remote server, and allow the DBA to run the Oracle installer remotely using something simple like 'ssh -X /media/cdrom/oracle-installer'.
[131020580010] |Aside from bandwidth and latency issues (which can vary a bit), the big differences are the functionality it provides.
[131020580020] |VNC exports a whole session, desktop and all, while ssh will run a single program and show its windows on your workstation.
[131020580030] |The VNC server exports a session that survives even when you disconnect your screen, and you can reconnect to it later with all the windows open etc.
[131020580040] |This is not possible with an ssh X tunnel, since when your X server dies, the windows go away.
[131020590010] |mplayer video output drivers
[131020590020] |Is there a document somewhere describing each of mplayer's video output drivers and why you'd want to pick a given one for a particular circumstance (or why it exists)? Or would someone be willing to write that out here?
[131020600010] |The video output drivers compiled into your version of mplayer can be viewed by running
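That is:
    mplayer -vo help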
[131020600020] |As to which you should pick when, some of that will be obvious from the help output (for those that target specific video cards[ mga, s3fb, etc], or output formats [aa, png, etc]).
[131020600030] |Some are obsolete (I don't think VIDIX went anywhere, or GGI).
[131020600040] |The others, I cannot help you with.
[131020600050] |I use xv when I can and fall back to x11.
[131020600060] |The others are somewhat specialised, so unless I find a reason to use one, I won't.
[131020610010] |Unexplained Log Cruft and Possible Dropped Packets on WPA2-Personal LAN
[131020610020] |I've been receiving a LOT of log cruft ever since I installed my Linksys Rangeplus USB WUSB100v2 (using the rt2870sta community driver from the Linux kernel) and was wondering what it all meant.
[131020610030] |Many times when these messages occur it is accompanied by slow network speeds and many DNS queries and outgoing SYNs being dropped.
[131020610040] |I have searched for documentation for these (error?) messages and have come up empty as far as what they mean or how I can stop them from occurring.
[131020610050] |I reside on the opposite side of the building from my WAP.
[131020610060] |I have taken steps to improve the signal strength, but the signal quality hovers between 50% and 70%, sometimes dropping to 40% for unknown reasons.
[131020610070] |I am using Slackware64-current (kernel 2.6.33.4) with dhcpcd-5.2.7, wpa_supplicant-0.6.10, wireless-tools-29.
[131020610080] |My /var/log/messages:
[131020610090] |My dmesg:
[131020610100] |My /proc/net/wireless:
[131020610110] |My iwconfig settings:
[131020610120] |My wpa_supplicant.conf:
[131020620010] |I figured this out.
[131020620020] |It was the network dongle.
[131020620030] |Eventually it stopped working completely.
[131020620040] |I pried open the housing and there was a big drop of solder and a scorch mark where a diode used to be.
[131020630010] |Using a differencing, aka overlay, aka union, file-system with commit capability
[131020630020] |I work in two PCs and I sync all my files from my primary PC to a USB flash memory.
[131020630030] |On my second PC, I mount the USB flash memory at the same path so I can work on my files as if I were on my primary PC.
[131020630040] |Now, for the sake of performance and flash memory lifetime, I need some type of differencing, aka overlay, aka union, file-system (like unionfs or aufs) that lets me use the USB flash disk as read-only, write changes to a temp directory, and at the end write the changes back to the USB flash all at once.
[131020630050] |Any help?
[131020630060] |Any hope?
[131020630070] |Update:
[131020630080] |Thanks for all your answers and comments.
[131020630090] |I am interested in Linux and my question is: Does any of the above file-systems allow committing the writes to the lower file-systems when required?
[131020630100] |If yes, how?
[131020640010] |I'm assuming this is for a Unix flavor OS, but in case you're interested in a Windows-based solution, I've used Microsoft Mesh, which is a free tool for syncing a variety of files and folders over a series of computers.
[131020640020] |Another cool feature is the ability to access this "cloud" (their term, not mine) via a web interface.
[131020640030] |It comes in handy when you're on a remote computer that is not synced, but you would like to access/download certain files.
[131020650010] |I recommend using file-synchronization tools.
[131020650020] |A file-system-level solution to your problem may not be feasible.
[131020650030] |Check out unison and conduit.
[131020650040] |As I understand you already have a copy of files in your primary computer.
[131020650050] |Here is the workflow that I use:
[131020650060] |Work and change files on PC_1.
[131020650070] |After you are done synchronize them to your USB Stick.
[131020650080] |Connect the USB stick to your PC_2 and synchronize the content to your PC_2.
[131020650090] |Work and change files on PC_2.
[131020650100] |After you are done synchronize them to your USB Stick.
[131020650110] |Synchronizing will be very fast because only the changed files will be rewritten.
[131020650120] |Also you can write mount and unmount triggers that will automatically make the synchronization.
[131020650130] |For a file system solution you can look for some FS with Copy On Write attribute, e.g. btrfs.
[131020650140] |Taking snapshots and syncing them may be faster and more effective.
[131020650150] |But I couldn't find any implementation yet.
[131020650160] |Also, working at the file-system level will not make conflict resolution any easier.
[131020660010] |This seems a use case for dm-userspace+cowd: in essence, you would set up a DM target (block device) consisting of a COW (copy-on-write) file and the block device corresponding to your USB stick, and use it to host a filesystem.
[131020660020] |All updates would go to the COW file; reads which are not in the COW file would be served off the USB stick; after you unmount the filesystem, merge modifications from the COW file into the USB stick.
[131020660030] |Unfortunately, it's Linux specific and development seems to have stopped in 2007.
[131020660040] |If what you want to do is sync'ing files across two (or more) PCs, may I suggest that you put your home (or relevant folders) under a versioning system?
[131020660050] |The usual work cycle becomes like this:
[131020660060] |plug in USB stick;
[131020660070] |update home directory repository by pulling latest changes from the USB stick;
[131020660080] |do your stuff;
[131020660090] |commit changes to the versioning system and update repository on the USB stick.
[131020660100] |That's only one write to the USB stick.
[131020660110] |(Although I agree with what others have said: by the time your USB stick wears out, you will probably have bought another, larger, one.)
[131020670010] |There is a new dm target called "snapshot-merge".
[131020670020] |If you format your USB flash memory as a LVM physical volume, and then locate your desired filesystem atop it in a logical volume, you can
[131020670030] |Activate a volume group containing your USB flash memory and another LVM physical volume on a local disk.
[131020670040] |Create a snapshot of the logical volume on the local disk.
[131020670050] |Mount the snapshot, do whatever you want with it, then umount it.
[131020670060] |Merge the snapshot back to the origin.
[131020670070] |This should achieve close to what you've asked for, although it requires a scratch block device rather than a temporary directory.
[131020670080] |Substitute the parts enclosed in {braces} as appropriate.
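A sketch of those steps ({vg}, {lv}, {local_disk_pv} and the mount point are placeholders; the local-disk PV at the end of lvcreate keeps the snapshot's copy-on-write space off the USB stick):
    lvcreate -s -n scratch -L 1G /dev/{vg}/{lv} /dev/{local_disk_pv}
    mount /dev/{vg}/scratch /mnt/work
    # ... work on /mnt/work ...
    umount /mnt/work
    lvconvert --merge /dev/{vg}/scratch    # snapshot-merge back into the origin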
[131020670090] |Untested, but all the LVM commands have manpages so you should be able to figure things out from here.
[131020670100] |You might need a vgscan invocation in there somewhere, if the volume group doesn't get automatically detected when you plug the USB drive in.
[131020680010] |How do I make my pc speaker beep
[131020680020] |Using bash, how can I make the pc speaker beep?
[131020680030] |Something like echo 'beepsound' > /dev/pcspkr would be nice.
[131020690010] |Some distros have cmdline utilities to achieve this.
[131020690020] |Maybe you could tell us what distro you are on.
[131020690030] |Or just do a search (e.g. emerge -s beep in Gentoo) >D.
[131020690040] |Going beyond "available" utils, you could also make a perl script that emits the beep, all you need to do is include:
[131020690050] |If you do end up getting 'beep', try out the following:
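For instance (frequency, length and repeat count are arbitrary values):
    beep -f 750 -l 300 -r 2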
[131020690060] |Regards.
[131020700010] |Try echo -n Ctrl+V Ctrl+G. The downside is that this will only work when the output device is a terminal, so it may not work inside a cron job, for instance (but if you are root you might be able to redirect to /dev/console for immediate beeping).
[131020710010] |Simply echoing \a or \07 works for me.
[131020710020] |This will probably require the pcspkr kernel module to be loaded.
[131020710030] |I've only tested this on RHEL, so YMMV.
[131020710040] |Update
[131020710050] |As Warren pointed out in the comments, this may not work when logged in remotely via SSH.
[131020710060] |A quick workaround would be to redirect the output to any of the TTY devices (ideally one that is unused).
[131020710070] |E.g.:
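Something like the following, where tty5 is just an example of an otherwise unused virtual terminal (writing there usually needs root):
    echo -e '\a' > /dev/tty5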
[131020710080] |Happy beeping!
[131020720010] |I usually use the little utility beep, which is installed on many systems.
[131020720020] |This command will try different approaches to create a system sound.
[131020720030] |3 ways of creating a sound from the beep manpage:
[131020720040] |The traditional method of producing a beep in a shell script is to write an ASCII BEL (\007) character to standard output, by means of a shell command such as ‘echo -ne '\007'’.
[131020720050] |This only works if the calling shell's standard output is currently directed to a terminal device of some sort; if not, the beep will produce no sound and might even cause unwanted corruption in whatever file the output is directed to.
[131020720060] |There are other ways to cause a beeping noise.
[131020720070] |A slightly more reliable method is to open /dev/tty and send your BEL character there.
[131020720080] |This is robust against I/O redirection, but still fails in the case where the shell script wishing to generate a beep does not have a controlling terminal, for example because it is run from an X window manager.
[131020720090] |A third approach is to connect to your X display and send it a bell command.
[131020720100] |This does not depend on a Unix terminal device, but does (of course) require an X display.
[131020720110] |Beep will simply try these 3 methods.
[131020730010] |Enabling or disabling one monitor in nVidia Twinview on the command line, like with nvidia-settings
[131020730020] |For some reasons, most native games as well as Wine have a problem with Twinview.
[131020730030] |So when starting SC2 I have to manually disable one of my two screens in nvidia-settings.
[131020730040] |(By going to X Server Display Configuration > clicking on the second monitor > Display > Resolution: Off.)
[131020730050] |I searched hard but couldn't find a way to do that automagically. nvidia-settings itself has non-GUI options (see "nvidia-settings -q all") but none of them seems to do what I want.
[131020730060] |I want to put that in my startup script for games, which already replaces Compiz with metacity (and back when it exits).
[131020730070] |Help appreciated.
[131020740010] |If you're using Twinview the displays are treated as one display with the resolution of all the physical displays put together.
[131020740020] |You can use xrandr to change the current output dimensions, and it will turn on or off the appropriate displays to make it fit.
[131020740030] |For example, if you have two 1280x1024 monitors:
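Something along these lines (assuming metamodes like the ones shown further down are configured):
    xrandr -s 2560x1024    # both monitors, side by side
    xrandr -s 1280x1024    # just one monitor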
[131020740040] |However, this requires that X be configured with both modes.
[131020740050] |I'm not up on the latest wisdom when it comes to X configuration, but I use this metamodes line in my Screen section:
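It looks roughly like this (a hypothetical example; display names, modes and offsets will differ on your hardware):
    # xorg.conf Screen section, NVIDIA TwinView
    Option "metamodes" "DFP: 1920x1200 +0+0, CRT: 1280x1024 +1920+0; DFP: 1920x1200 +0+0, CRT: NULL"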
[131020740060] |That says "either display on my 1920x1200 DFP at 0x0 and my 1280x1024 CRT to its right, or just display on the DFP and leave the CRT off", so I can tell xrandr to use just the DFP (--mode 1920x1200) or both (--mode 3200x1200).
[131020750010] |How to turn off Nautilus autoplay under KDE?
[131020750020] |Does anyone have an idea why, when I insert a CD/DVD/flash drive, Nautilus opens as the default file manager under KDE instead of Dolphin or Krusader? In my system settings Krusader is set as the default manager, but Nautilus somehow keeps showing up, and I'm wondering how to change that.
[131020750030] |I keep Nautilus because it's a Dropbox dependency... OK, I know there are workarounds so one could use Dropbox without Nautilus, but I didn't bother trying that out...
[131020750040] |I don't mind keeping nautilus but I just want it to be quiet :D
[131020750050] |I have KDE 4.5.1 installed on up-to-date Arch, if that helps anyhow.
[131020750060] |thank you
[131020750070] |ps - mods, please add tags: nautilus autoplay
[131020760010] |This link seems to be what you are looking for.
[131020760020] |The post is Ubuntu-specific though...
[131020770010] |GNU screen - Restore a session with splitted screen
[131020770020] |When I restore a split screen session, I get only one region and have to reconfigure the split layout again.
[131020770030] |Maybe there is another way to get back the original screen configuration?
[131020770040] |Thanks
[131020780010] |This is not currently possible without a hack (see next paragraph); however, the features required to do this have already been added to screen's current git tree.
[131020780020] |In future versions, the "layout save" and "layout load" commands will be able to load not only your last layout, but other named layouts.
[131020780030] |I believe there is also support for cycling through layouts.
[131020780040] |Currently, the trick is to use a screen inside a screen.
[131020780050] |All of your work and layout changes are done in the inner screen, but then when you detach, you actually detach from the outermost screen.
[131020780060] |The layout of the inner screen will be preserved.
[131020780070] |See the following for all the gritty details:
[131020780080] |When I split the display and then detach, screen forgets the split.
[131020780090] |Alternatively, you can try compiling the latest version directly from the screen source tree.
[131020780100] |You can do this by installing git and then running:
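Presumably something like this (the upstream repository lives on GNU Savannah):
    git clone git://git.savannah.gnu.org/screen.git
    cd screen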
[131020780110] |Then, follow the directions in src/INSTALL.
[131020780120] |In general, the directions are:
[131020780130] |./autogen.sh
[131020780140] |./configure
[131020780150] |make
[131020780160] |There is a discussion in the INSTALL file about various issues surrounding where to install screen based on various concerns.
[131020780170] |If you go this route, your best bet is to read all of the INSTALL directions and then proceed.
[131020790010] |/mnt Directories Disappearing
[131020790020] |I'm trying to mount some SMB shares on bootup using fstab on a Kubuntu box.
[131020790030] |Here are the steps I'm using to accomplish this:
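For example (MyShare is just a placeholder name):
    sudo mkdir -p /mnt/MyShare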
[131020790040] |Then I add this line to my fstab file:
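presumably something along these lines (server, share, credentials file and uid/gid are placeholders):
    //server/share  /mnt/MyShare  cifs  credentials=/home/user/.smbcredentials,uid=1000,gid=1000  0  0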
[131020790050] |However, after restarting, the /mnt/MyShare folder is removed.
[131020790060] |If I re-create this directory and run sudo mount -a, everything works fine.
[131020790070] |I can browse the share.
[131020790080] |But when I reboot, /mnt/MyShare is gone.
[131020790090] |Any hints as to what I am doing wrong?
[131020800010] |I don't think you should be using /mnt
in this way.
[131020800020] |According to the Filesystem Hierarchy Standard;
[131020800030] |This directory is provided so that the system administrator may temporarily mount a filesystem as needed.
[131020800040] |The content of this directory is a local issue and should not affect the manner in which any program is run.
[131020800050] |This directory must not be used by installation programs: a suitable temporary directory not in use by the system must be used instead.
[131020800060] |A permanent mount, specified in fstab
, should go somewhere else.
[131020800070] |Note that current Ubuntu systems use subdirectories in /media
for removable disks, and /mnt
is always left as an empty directory for manual, one-off mounts.
[131020800080] |I suspect that Ubuntu is enforcing this, or at least facilitating it, by deleting and recreating it on each startup.
[131020800090] |I suggest you create a new root-level directory /network
and put your permanent network mounts in there.
[131020800100] |You may be able to get away with putting them in /media
, but it's probably better to leave that for use by the system.
[131020800110] |Using /network
as the prefix nicely labels yours as network drives.
[131020810010] |Is setting up a permanent SMB mount really the right thing to do?
[131020810020] |Modern desktop environments like KDE and GNOME allow you to bookmark network shares and accessing them becomes a single click or menu selection.
[131020810030] |The username and password for the share can then be stored in the user's keychain.
[131020810040] |This is much better than putting names and passwords into fstab
, which can be read by anyone.
[131020820010] |QLogic 42C1831 HBA drivers on RHEL 4.8? Using Kernel 2.6.x
[131020820020] |We're trying to get a QLogic 42C1831 HBA installed on a system using RHEL 4.8.
[131020820030] |We've tried the drivers from QLogic for Linux Kernel 2.6.x (which this system is running) and are getting all sorts of errors trying to install them.
[131020820040] |The drivers do specifically mention that they're for RHEL 5.x, but I was wondering if anyone has gotten this to work?
[131020820050] |If you'd like to look at the driver files we're using they're here
[131020830010] |QLogic drivers for 2.6.18 likely won't mix with the 2.6.9 kernel that RHEL 4.8 uses.
[131020830020] |For instance, looking at The QLogic Download Page the oldest kernel that is supported is 2.6.16 (SLES 10 SP2).
[131020830030] |Without any additional information like dmesg/lspci/compile errors, your best bet would be to upgrade to RHEL5/6 where the device is supported by your vendors.
[131020840010] |Script for ssh Agent Management: Is this adequate? Any bugs?
[131020840020] |The following is something I use to manage my ssh agent settings.
[131020840030] |It's intended to be sourced (. ~/lib/sshagent.sh
) from ~/.bashrc or other login or shell start-up files ... or even cron jobs.
[131020840040] |It works for me but I'm hoping folks here will review it and offer suggestions about any corner cases that I'm missing.
[131020840050] |I used to only run it from ~/.bash_login ... but then I'd find that, in some cases, my shells wouldn't pick up the settings (X display manager and I think remote ssh non-login sessions ... cases where ssh is called with a command).
[131020840060] |In some other cases the old settings would persist and not be updated when an agent process was restarted (for whatever reason).
[131020840070] |So I run it in ~/.bashrc and try to avoid any stray output ... as is recommended for ~/.bashrc in general.
[131020840080] |So, are there any evident corner cases or bugs?
[131020840090] |Would this make sense for something like /etc/bashrc?
[131020840100] |Is it reasonably portable to other shells?
[131020850010] |Why are you writing your own? Why not use a handy dandy little product called keychain?
[131020850020] |Here's Gentoo's Keychain Guide (possibly newer version of same article on Funtoo)
[131020850030] |It's basically a little program that allows you to use password protected keys, without typing the password all the time.
[131020850040] |(It should be available on whatever distro you're using)
[131020850050] |You may also be interested in Gentoo's Open SSH Key Management series: part 1, part 2, and part 3.
[131020850060] |Which looks like it includes some things that you're trying to do.
[131020860010] |One portability issue in your script is the use of &>/dev/null
to redirect both stdout and stderr.
[131020860020] |This is a bashism and won't necessarily work on other shells.
[131020860030] |(I was recently bitten by this one.)
[131020860040] |The more portable way is to use >/dev/null 2>&1.
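In other words (some_command is just a placeholder):
    some_command >/dev/null 2>&1   # portable across Bourne-style shells
    some_command &>/dev/null       # bash-specific shorthand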
[131020870010] |Make package explicitly installed in pacman
[131020870020] |I have a package that's installed on my PC as a dependency of another package.
[131020870030] |I would like to have the package explicitly installed, but without actually re-installing it, or downloading any files.
[131020870040] |Is this possible?
[131020870050] |update:
[131020870060] |I do not have any packages cached in /var/cache/pacman/pkg, which is one of the reasons I want to change the package detail without a re-install.
[131020870070] |Even if I had the packages cached, running pacman -S would mean the whole install process is run, which I also want to avoid.
[131020880010] |pacman -S has a --asexplicit flag that should do what you want.
[131020880020] |For example:
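Something like this (the package name is just an example):
    sudo pacman -S --asexplicit openssh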
[131020880030] |You can see that nothing was downloaded since it is already installed locally.
[131020880040] |It just flipped the "Install Reason" field.
[131020880050] |Pacman's --help output differs depending on the operation (-S, -R, etc.).
[131020880060] |So pacman -S --help lists the --asexplicit flag as one of the available flags. --asdeps is available as well.
[131020890010] |I found the answer on Arch Linux Forums
[131020890020] |Since pacman 3.4 you can use the -D (--database) operation to modify only the local database.
[131020890030] |So passing it --asexplicit will mark the package as explicitly installed.
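That is, something along the lines of (the package name is just an example):
    sudo pacman -D --asexplicit openssh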
[131020900010] |Doesn't work for me:
[131020900020] |This also fails:
[131020910010] |Scheduling commands by system inactivity
[131020910020] |So cron or at can schedule our commands to run at the exact time we need them to, but can we schedule commands to run when systems are inactive?
[131020910030] |Something like:
[131020920010] |hmmm...
[131020920020] |I don't think so... but what you could do is cron a script to run like every 5 minutes and check the load average to see if it's acceptably low.
[131020920030] |I wouldn't check the instantaneous value, because you could catch the CPU in between two really high peaks.
[131020920040] |This is just thoughts on what I'd do to accomplish this, but there might be a better way.
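A rough sketch of that idea (the threshold and job path are made up):
    #!/bin/sh
    # Run from cron every 5 minutes, e.g.:  */5 * * * * /usr/local/bin/run-if-idle.sh
    # Start the job only when the 15-minute load average is below a threshold.
    load=$(awk '{print $3}' /proc/loadavg)
    if awk -v l="$load" -v t=0.5 'BEGIN { exit !(l < t) }'; then
        /path/to/heavy-job.sh
    fi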
[131020930010] |On many systems the at daemon is configured such that the batch
command will run a command when the system drops below a certain load.
[131020930020] |However, this may not give you the fine grained control you are looking for.
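For example:
    echo "/path/to/heavy-job.sh" | batch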
[131020940010] |Fcron has a lot of additional features over common cronds.
[131020940020] |For example:
[131020940030] |set the max system load average value under which the job should be run
[131020940040] |(quote from the Homepage)
[131020940050] |Thus, you could use fcron to setup what you want.
[131020950010] |A friend of mine posted about this problem some days ago.
[131020950020] |He talks about this tool Dmon.
[131020950030] |I did not test it, but it sounds great.
[131020960010] |Distros that support compiling from source
[131020960020] |A long time ago I used to use FreeBSD with its ports system and after that Gentoo for portage in order to install applications via compiling from source.
[131020960030] |I did this in order to directly target my system.
[131020960040] |Are there any other distros out there which support such a configuration?
[131020960050] |I seem to remember Slackware having something similar.
[131020970010] |I'm not aware of a complete "build the system from source" tool for Debian, but it does support this in a round-about way via apt-src
, which will download and build a package, then install the resulting build.
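Roughly like this ('hello' is a stand-in package name):
    apt-src install hello      # fetch the source package and its build-dependencies
    apt-src build hello        # build binary packages from that source
    sudo dpkg -i hello_*.deb   # install the result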
[131020980010] |Many RPM-based distros have source RPM packages.
[131020980020] |Debian and Ubuntu have source debs as well.
[131020980030] |Are you looking for distros that are primarily built from source, or just distros that have source packages available?
[131020980040] |If it's the latter, the answer is "many/most" of them.
[131020990010] |There are a few distros which support both binary and compiled packages--in theory, Gentoo supports this, but I don't think there are too many binary packages.
[131020990020] |Arch also supports building from source in addition to binary packages via the Arch Build System (ABS), though I don't have any experience with it.
[131021000010] |Some that I have personally used come to mind: LFS (obviously), SourceMage, and one someone made from LNX-BBC makefiles, which I can't find now.
[131021000020] |But I consider Debian good enough to compile packages myself if I need to.
[131021000030] |You should also check the list of source-based distros given by DistroWatch:
[131021010010] |If you want to try something a little different, there's GoboLinux and NixOs.
[131021020010] |Gentoo is your best bet here, what's wrong with using it for your needs?
[131021030010] |You can also try the old and mighty Linux From Scratch.
[131021040010] |I have compiled Squid in Open SUSE, so that distro supports it.
[131021050010] |Most Linux distros support building packages from the source code.
[131021050020] |You simply need to install the necessary development packages from the distribution repositories along with any specific requirements of the package you are building.
[131021050030] |If you want to build the system as close to from-scratch as possible, Linux From Scratch is the model, but you have a greater responsibility for tracking security updates, patches, etc.
[131021050040] |Arch Linux was the distro I chose because it allows you to build from source and provides the sources for updates and patches, etc.
[131021050050] |Arch has really good user support and plenty of documentation when it comes to resolving install and configuration issues.
[131021060010] |Yes, you are right, Slackware uses build scripts to compile packages.
[131021060020] |There are a lot of them available from http://slackbuild.org/ .
[131021060030] |There are also templates for new scripts and you can always submit your scripts if you want to.
[131021070010] |I don't know exactly what you're getting at, but take a look at Tiny Core.
[131021070020] |The entire image creation process can be done from sources.
[131021080010] |A very similar question was recently asked.
[131021080020] |My answer to that question is here: How to build all of Debian
[131021080030] |Theoretically all distros can be built from source.
[131021080040] |The details may differ slightly with each distro but the method I listed there is a solid starting point.
[131021090010] |At ALTLinux, much effort is put into maintaining accurate spec files for packages and into making package builds accurately reproducible against the current state of the package repository.
[131021090020] |It is checked regularly that every package in the repository (called Sisyphus) can be rebuilt at the current moment -- there is a rebuild test status report, with the logs of the last rebuild test, per package.
[131021090030] |To ensure accurate reproducibility of package builds, special tools are used to isolate the build system from the host system: hasher and the surrounding build-infrastructure tools (e.g., building packages with gear).
[131021090040] |So, although ALTLinux isn't dedicated to installing your system by building it, you can be sure that a package taken from the repository will be easily rebuildable on your host system, without extra issues that haven't been tracked formally by the spec.
[131021090050] |ALTLinux is dedicated to being the source for custom package repositories and distros, which--by the design of the ALTLinux build system and associated tools--can be easily customized and rebuilt independently from ALTLinux and safely (i.e., isolated from your host system).
[131021090060] |So, if you want to make your own customized distro, ALTLinux Sisyphus can be a base that is easy to work with: Intro into making your own distro (in Russian).
[131021110010] |Permission bits not being enforced on samba share.
[131021110020] |I have a problem where permission bits are not being enforced on a samba share using a Linux client.
[131021110030] |I have samba configured on the server to force a certain user, group and permission bits and this works as expected until I touch the file or it becomes the target of IO redirection.
[131021110040] |Here's what's happening:
[131021110050] |Notice when I touch the existing file its permission bits are 0777.
[131021110060] |They're supposed to be 0664 like when it was first created.
[131021110070] |How can I enforce 0664 on the existing file?
[131021110080] |I have version 3.0.24 on the server and version 3.4.7 on the client.
[131021110090] |Here's my smb.conf:
[131021120010] |The samba permissions only work on the SMB (ie Windows) network clients.
[131021120020] |If you want to enforce this on the server (and any NFS clients) you need to set the sticky bit on all the directories.
[131021120030] |first correct the files that are there:
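for example, something like this (adjust the path to your share):
    find /path/to/share -type f -exec chmod 0664 {} \;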
[131021120040] |then enforce this with the group sticky bit
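e.g.:
    find /path/to/share -type d -exec chmod g+s {} \;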
[131021120050] |This is not infallible but does solve 99% of this sort of problem.
[131021120060] |Regards DaveF
[131021120070] |Result on my Solaris box:
[131021130010] |User's Login date and login time
[131021130020] |Hello all,
[131021130030] |I want to fetch a user's login time and login date. Is there any command in Unix that provides a user's login date and login time? I want to do this in a shell script: the username is accepted from the end user and, after checking that the user exists, I would like to fetch that user's login time and login date into different variables and then display them using the 'echo' command.
[131021140010] |For past logins:
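For instance:
    last "$USER_NAME"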
[131021140020] |Also, the command who
lists current logins.
[131021140030] |If you're looking for the date of the user's last login, some systems provide it directly, for example lastlog -u "$USER_NAME" on Linux or lastlogin "$USER_NAME" on FreeBSD.
[131021140040] |It's also available in the output of finger, but not in an easy-to-parse form.
[131021140050] |In any case, it's available in the output of last (on many unix variants, last -n 1 "$USER_NAME" shows the last login; otherwise you can do last "$USER_NAME" | head -n 1).
[131021140060] |Note that last login may not correspond to the last logout (e.g. a user remained connected from one origin for a long time and made a quick network login recently).
[131021150010] |On Linux, last -R $username | awk '/still logged in/ {print $3,$4,$5,$6}' will return nothing if the user is not logged in, otherwise a date/time list for each active session.
[131021150020] |Other unixes that don't know the -R option to last (which suppresses the hostname) will need some modification.
[131021160010] |How can I tell *which* application is asking for access to gnome-keyring/Seahorse?
[131021160020] |Whenever I log in on Ubuntu Lucid Lynx 10.04 I get a Seahorse/gnome-keyring prompt telling me an application wants to access my keyring.
[131021160030] |It isn't the network manager, because if I cancel the request my network connection is still established (also, this only started happening recently).
[131021160040] |How can I tell which application is making this request?
[131021160050] |The prompt doesn't provide this information.
[131021170010] |You could try to have a look at the logfiles :-)
[131021170020] |Maybe something is in /var/log/auth.log
after a wrong password.
[131021170030] |Check your autostart applications under System -> Preferences -> Sessions -> Startup Programs
and ~/.config/autostart
[131021170040] |Check the running processes with top and ps aux, and check the process tree with ps axjf.
[131021170050] |From there it should be pretty simple to figure out which process requires your keyring.
[131021180010] |I setup my box for auto-login and it does this on every login.
[131021180020] |In my case it's nm-applet/network-manager family of apps.
[131021180030] |Edit: by the way, the problem has been around for some time; evidently some half-fix got undone during a package upgrade, but I digress... one solution is here
[131021180040] |Whatever is causing the problem, you could add, after login but before everything else, a script that uses libpam-gnome-keyring to unlock the keyring... the package that provides this tool is libpam-gnome-keyring, at least in 11.04.
[131021180050] |Proof video that it is nm-applet is here
[131021190010] |From a security perspective, the answer is that in current distros you can't tell which application it is. See this bug report for clarification from a gnome-keyring developer, including the security implications and scope of the task.
[131021190020] |From a practical perspective I am also using auto-login on Ubuntu, and it seems that it is indeed nm-applet.
[131021200010] |What does the Broken pipe message mean in an SSH session?
[131021200020] |Sometimes my SSH session disconnects with a Write failed: Broken pipe
message.
[131021200030] |What does it mean?
[131021200040] |And how can I keep my session open?
[131021200050] |I know about screen
, but that's not the answer I'm looking for.
[131021200060] |I think this is a sshd
config option.
[131021210010] |It usually means that your network (TCP) connection was reset.
[131021210020] |E.g. your internet provider reconnected you or something like this.
[131021220010] |It's possible that your server closes connections that are idle for too long.
[131021220020] |You can update either your client (ServerAliveInterval) or your server (ClientAliveInterval).
[131021220030] |To update your server (and restart your sshd):
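something along these lines in /etc/ssh/sshd_config (the numbers are just reasonable defaults):
    ClientAliveInterval 60
    ClientAliveCountMax 3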
[131021220040] |Or client-side:
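e.g. in ~/.ssh/config:
    Host *
        ServerAliveInterval 60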
[131021230010] |Made the changes, same issue. Connecting from Ubuntu 10.04.1 to an Ubuntu 10.04.1 VirtualBox guest (with a bridged connection) hosted on Ubuntu 10.04.1.
[131021240010] |Can not find link to download OpenSolaris source code
[131021240020] |I want to understand how OpenSolaris ptools(process tools) works.
[131021240030] |How exactly do pstack, pmap, pargs, etc. work?
[131021240040] |But I can't find any link to its full source code.
[131021240050] |I can only find online version of the source.
[131021240060] |Any advice where I can download source code for offline use?
[131021250010] |Download it from the main download page.
[131021260010] |Get The Source
[131021260020] |It's possible you'll need to use Mercurial to get it.
[131021270010] |Like Kristof Provost mentioned, the official source for the code is
[131021270020] |Like you said, the source tarballs are now deprecated.
[131021270030] |and I can't install Mercurial :(
[131021270040] |?
[131021270050] |But you should have access to some machine where you can?
[131021270060] |If not, another possibility would be a live cd with mercurial installed, for example the excellent GRML.
[131021270070] |Beside that, I cloned the repository for you ;-) You can find it under: http://solaris.oark.org/usr/src/.
[131021270080] |What you are looking for is the directory http://solaris.oark.org/usr/src/cmd/ptools/. wget should now do the job :-)
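e.g. something like:
    wget -r -np -nH http://solaris.oark.org/usr/src/cmd/ptools/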
[131021270090] |Note: I will delete this cloned repository in the next few weeks...
[131021270100] |Have fun.
[131021280010] |Command-line mv exclusion list
[131021280020] |Is there a way to mv, cp, or any file operation such that I could specify all the files I don't want affected?
[131021280030] |For example, say I have a folder with the files file1, file2, and file3, and I want to move file1 and file2 somewhere.
[131021280040] |Rather than explicitly naming the files to move (mv file1 file2 /path/to/destination), I want to name the files not to move and have all the others in the folder get moved (mv --some-switch file3 /path/to/destination).
[131021290010] |You can use the advanced globbing patterns in some shells to match all the files in a directory except for those matching a particular pattern.
[131021290020] |For example, in ksh, bash or zsh, a negation glob will move all files in /source to /destination except for the files matching *.bak.
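A sketch of that command (ksh supports this syntax natively; bash needs extglob; zsh needs setopt ksh_glob):
    shopt -s extglob                   # bash only
    mv /source/!(*.bak) /destination/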
[131021290040] |In zsh, you can also write /source/^*.bak if you first run setopt extended_glob, and more generally (again requiring setopt extended_glob) /source/*~*.bak (or /source/a*~*.bak for all files whose name begins with a except for .bak files, etc).
[131021290050] |Zsh has a mass copy/move/link command that can be used, amongst others, to move all files except for those matching a pattern.
[131021290060] |For example, the following command moves all files except *.bak from /source to target, and adds .bak to their name in the process:
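I believe it was something in this vein (requires extendedglob for the ^ negation; paths as in the text):
    autoload -U zmv
    setopt extendedglob
    zmv '/source/(^*.bak)' '/target/${1}.bak'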
[131021290070] |There are several commands called rename floating around.
[131021290080] |On Debian and Ubuntu, /usr/bin/rename is a perl script that moves files to a new name generated by a perl expression.
[131021290090] |You can exclude files from renaming by not generating a new name if the file is to be excluded.
[131021290100] |For example, the following command (using this particular rename program) moves all files except *.bak from /source to /target:
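e.g. something like:
    rename 's!^/source/!/target/! unless /\.bak$/' /source/*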
[131021290110] |You can use the find command to select the files you want to move.
[131021290120] |For example, the following command moves all regular files except *.bak in /source or a subdirectory into /target (note that the directory structure is collapsed):
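for instance:
    find /source -type f ! -name '*.bak' -exec mv {} /target/ \;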
[131021290130] |or (more efficient if there are many files to move)
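e.g., with GNU mv:
    find /source -type f ! -name '*.bak' -exec mv -t /target/ {} +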
[131021290140] |rsync is a generalization of cp and scp with very powerful include/exclude rules.
[131021290150] |For example, the following command copies all files except *.bak in /source or a subdirectory into /target, respecting the directory structure:
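for example:
    rsync -a --exclude='*.bak' /source/ /target/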
[131021290160] |pax is (amongst other things) another cp on steroids.
[131021290170] |Its exclusion rules are not nearly as powerful as rsync's, but it has the additional ability to rename files as they are copied.
[131021290180] |If you rename a file to the empty string, it's excluded from the copy.
[131021290190] |For example, the following command copies all files except *.bak in /source or a subdirectory into /target, and appends .bak to their names in passing.
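Presumably something in this vein (the first -s drops *.bak files by rewriting their names to the empty string, the second appends .bak to everything else):
    cd /source && pax -rw -s '!^.*\.bak$!!' -s '!$!.bak!' * /target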
[131021290200] |The example above has the unfortunate side effect of creating directories called foo.bak, which can be avoided by combining find with pax:
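perhaps along these lines (find feeds only the regular non-.bak files to pax on standard input):
    cd /source && find . -type f ! -name '*.bak' | pax -rw -s '!$!.bak!' /target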
[131021300010] |Is it possible to break long lines in sshd_config?
[131021300020] |Specifically the AllowUsers parameter:
[131021300030] |e.g. convert this
[131021300040] |to this
[131021310010] |In short, it looks like no
[131021310020] |OpenSSH's servconf.c dumps the file into a buffer without checking for such things (all it appears to do is look for # to mark a comment):
[131021310030] |The function that parses the config then splits the buffer on newlines and processes each line:
[131021320010] |No, but it's not useful in this case.
[131021320020] |You can have multiple AcceptEnv, AllowGroups, AllowUsers, DenyGroups, DenyUsers, HostKey, PermitOpen, Port and Subsystem lines, and each line adds one or more (or sometimes zero) elements to the list.
[131021320030] |Nonetheless, if you can't easily fit your AllowUsers directive on one line, I suggest creating a ssh_allowed group and using AllowGroups ssh_allowed in sshd_config.
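That is, something like (the user name is an example):
    # on the server
    groupadd ssh_allowed
    usermod -a -G ssh_allowed alice
    # and in /etc/ssh/sshd_config
    AllowGroups ssh_allowed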
[131021330010] |How do I minimize disk space usage
[131021330020] |One of my machines is the 2GB EeePC Surf, a neat netbook with very limited resources.
[131021330030] |So limited that right now, I have 22MB free space left.
[131021330040] |On it, I'm running archlinux with the openbox DE.
[131021330050] |And a host of needed applications for it to function as a mobile pc.
[131021330060] |What methods are available to me to stamp out some unnecessary used space?
[131021340010] |Here are some points you could start with:
[131021340020] |Have a look at the packages installed on your system with pacman -Q and remove the ones you don't need.
[131021340030] |A good start may be to append the -t switch:
[131021340040] |Restrict or filter output to packages not required by any currently installed package.
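i.e.:
    pacman -Qt     # packages not required by any other installed package
    pacman -Qdt    # only those that were installed as dependencies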
[131021340050] |Clean the package cache of pacman with pacman -Sc.
[131021340060] |Always use pacman -Rs to also remove unused package dependencies.
[131021340070] |To find "big files" and folders which use large parts of the disk, a nice addition to du is xdiskusage.
[131021340080] |This little tool lets you quickly browse your filesystem and see graphical representation of the disk usage of the folders.
[131021350010] |On the 4GB disk in my Eeepc with Ubuntu it helped to remove some locale files (from /usr/share/locale) and Gnome help files (from /usr/share/gnome/help/).
[131021350020] |Both were installed for languages which I don't use.
[131021350030] |Not sure if Arch Linux even installs all those files, though.
[131021360010] |How do I connect to a pc through another pc using ssh
[131021360020] |I have three Computers.
[131021360030] |PC1 and PC2 is on a private LAN, where PC1 is known to PC2 as 192.168.0.2
[131021360040] |PC2 and PC3 is on a another LAN, where PC2 is known to PC3 as 192.168.123.101
[131021360050] |How can I connect to PC1 from PC3 with SSH.
[131021360060] |Is there something like:
[131021370010] |The only solution I know for this is ssh scripting with Belier:
[131021370020] |Belier allows opening a shell or executing a command on a remote computer through a SSH connection.
[131021370030] |The main feature of Belier is its ability to cross several intermediate computers before realizing the job.
[131021370040] |A while ago I found this README.sshhop on the MIT Lincoln Laboratory Homepage, but I wasn't able to find any further information about that.
[131021370050] |Does somebody know more?
[131021380010] |Using SSH there is a clear solution:
[131021380020] |on your local machine set up your ~/.ssh/config
such that it has the following:
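presumably something along these lines (the alias, user and gateway address are examples; the gateway is PC2 as seen from PC3):
    Host WhatYouWillCallTheConnection
        HostName 192.168.123.101
        User yourlogin
        ForwardAgent yes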
[131021380030] |On both the gateway and the end server you'd like to connect to, make sure that you have your local client's public keys located in the ~/.ssh/authorized_keys
[131021380040] |On the gateway machine you need to alter the ~/.ssh/authorized_keys so that the line holding your client's public key starts with a forced command, as follows:
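i.e. prefix the key line on the gateway with something like this (key shortened, addresses as in the question):
    command="ssh -A 192.168.0.2" ssh-rsa AAAA... user@pc3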
[131021380050] |The -A
is to forward the agent if you don't like to send passwords all the time...
[131021380060] |This way, anytime you do something like ssh WhatYouWillCallTheConnection
it will run straight through the gateway and connect you to the server on the other side transparently.
[131021390010] |Best used through an alias in ~/.ssh/config:
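e.g. (user and the gateway address are examples; this variant needs netcat on PC2):
    Host PC1
        HostName 192.168.0.2
        ProxyCommand ssh user@192.168.123.101 nc %h %p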
[131021390020] |Then you can simply run ssh PC1.
[131021400010] |Port Forwarding might come in handy.
[131021400020] |From PC1:
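presumably something like this (the two addresses are placeholders, since they aren't given in the question):
    ssh -L 7777:<PC3-address-as-seen-from-PC2>:22 user@<PC2-address-as-seen-from-PC1>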
[131021400030] |7777 can be just any port (provided it is not already being used).
[131021400040] |I just like that number, plus any "ordering up" I can manage by +1 's (7778, 7779, etc, etc).
[131021400050] |This being done, you will have a 'transparent' tunnel from PC1's local port 7777 to PC3's port 22.
[131021400060] |Just issue:
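i.e.:
    ssh -p 7777 user@localhost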
[131021400070] |And you should be on PC3.
[131021400080] |You can also use -D to dynamically forward a port if you want a SOCKS proxy established.
[131021400090] |Cheers!
[131021410010] |How to permanently remove all Mono related package (libs, apps, etc.)
[131021410020] |I'm a Ubuntu newbie.
[131021410030] |In my opinion Mono is a patent trap and I do not want my distribution of choice to be tainted by anything Mono or any application that requires Mono.
[131021410040] |So I would welcome your feedback for the following:
[131021410050] |How to prevent anything Mono and applications that require Mono from getting installed in the first place.
[131021410060] |Is there a way to forcefully disable Mono during installation?
[131021410070] |How do I find out if I have anything Mono installed on my current default installation?
[131021410080] |If Mono is already installed, how to remove Mono and all applications that require Mono?
[131021410090] |Hopefully with your feedback I can make sure that all my bits are truly free.
[131021420010] |Disclaimer: I offer this answer because I believe you should have control over what packages are on your system--not to fan the mono-hate flame war.
[131021420020] |Also this question is heavily edited since my first post.
[131021420030] |Removing Mono
[131021420040] |To remove mono completely all you have to do is remove the base mono libraries, and all files that depend on those libraries will also be removed.
[131021420050] |The exact set of packages that need to be removed vary depending on which version of Ubuntu you are using.
[131021420060] |I believe you should be able to remove most of mono with the following command (Update: I've updated the command to better ensure everything is removed.):
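The exact command varies by release, but it was presumably along these lines (check the proposed removal list before confirming, as described below):
    sudo apt-get purge libmono* libgdiplus cli-common mono-runtime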
[131021420070] |The command should list all of the packages that will be removed--including applications that depend on mono-- and ask you to confirm their removal.
[131021420080] |You should review the list carefully before accepting the changes and make sure you won't be removing something you need.
[131021420090] |You may want to follow that up with:
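most likely:
    sudo apt-get autoremove --purge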
[131021420100] |If you are more comfortable with GUI tools, you can also do this in Synaptic:
[131021420110] |Change to the "installed" filter.
[131021420120] |Use the quick search box and search for "libmono."
[131021420130] |Select all of the packages that appear in the results.
[131021420140] |Mark them for complete removal.
[131021420150] |Repeat steps 2-4 for the other packages in the command above.
[131021420160] |Press apply.
[131021420170] |Keeping Mono Off of Your System
[131021420180] |While there used to be a package called mononono that would prevent mono from being installed on the system, I do not believe this package works well with recent versions of Ubuntu.
[131021420190] |If you are truly concerned with keeping mono off of your system, I would simply look carefully at the details of software installs you do and ensure that you do not see it pulling in mono libraries.
[131021420200] |One more automatic method would be to use apt preferences.
[131021420210] |Putting the following in /etc/apt/preferences or in a file inside /etc/apt/preferences.d/ should provide relatively good defense against installing mono on your system:
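For example, pinning a core package with a negative priority (the package name is only an example; newer APT versions also accept patterns such as libmono*):
    Package: mono-runtime
    Pin: version *
    Pin-Priority: -1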
[131021420220] |For more information about how this works, see man apt_preferences.
[131021420230] |The short version is that negative priorities prevent that version of the package from being installed.
[131021420240] |Other Notes
[131021420250] |If you are very concerned about non-free software, you may find the vrms package of interest.
[131021420260] |It lists non-free packages on the system.
[131021420270] |It will not list mono packages since patent issues are orthogonal to the software being free, at least according to some definitions of "free."
[131021420280] |Also, if you were to list all of the packages that have potential patent issues, you'd have to list a whole lot of packages.
[131021430010] |Every piece of code is a potential patent trap, exactly in the same way as mono is, so your only solution to get rid of patent traps is: sudo rm -rf /
[131021440010] |Commandline gstreamer player
[131021440020] |Is there a good, simple commandline player that uses gstreamer?
[131021450010] |gst123 is a command-line music player that uses gstreamer.
[131021450020] |I have not messed with it, I generally use MOC.
[131021460010] |You can use gst-launch from gstreamer-tools.
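e.g. with the 0.10-era tools (the file path is a placeholder):
    gst-launch playbin2 uri=file:///path/to/song.ogg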
[131021470010] |You can also use audiopreview.
[131021480010] |Quoting commands
[131021480020] |As I've learned to use Linux over the years, I've repeatedly come across the idiom of quoting commands with a leading backtick (`) and a following single quote ('), like so:
[131021480030] |`rm -rf /tmp/foo/bar'
[131021480040] |(I first realized that I kept seeing this, I think, on jwz's site.
[131021480050] |I might have even asked him this question, though that would have been a loooong time ago.)
[131021480060] |Is there a significance to this style of quoting commands?
[131021480070] |I do it myself, now, so that if people just copy and paste what I've posted, and don't know enough to leave out the marks, the command will fail.
[131021480080] |Is there a preferred method for making commands like mysql -hlocalhost -u -p -A bigdatabase obvious in running text, without offsetting it in its own paragraph as above?
[131021490010] |I haven't actually witnessed exactly what you're talking about as a widespread phenomenon, but I can think of three hypotheses, in increasing order of likelihood.
[131021490020] |In Bash and similar shells/scripting languages, backticks can be used for command substitution; people may be referring to that.
[131021490030] |(But here there are backticks on both sides.)
[131021490040] |In markdown syntax, applicable even here on Stack Exchange sites, backticks are used to put things in a monospaced/typewriter font: like this
, which typically is used to indicate that something is a short piece of code or something you'd enter into a command line interface.
[131021490050] |(But again, here, there are backticks on both sides.)
[131021490060] |In LaTeX mark-up, which is likely something disproportionally used by linuxers/Unixers, backticks are used as left quotation marks and single quotes for right quotation marks, so `this' becomes ‘this’ when typeset.
[131021490070] |Perhaps this is why this a common practice among linux/Unixers.
[131021490080] |I guess there might be other explanations, but I personally haven't really witnessed this phenomenon much.
[131021500010] |I think that what you are seeing is just "correct" typography.
[131021500020] |There is software out there, such as smartypants, that automatically convert plain text punctuation into HTML entities.
[131021510010] |Good typography demands that the opening and closing quote glyphs be different (and symmetrical).
[131021510020] |Some older computer fonts (e.g., Sun console) provided left- and right-quote glyphs on the backtick and apostrophe characters; modern fonts tend to show the grave accent and a vertical single quote instead.
[131021510030] |Unicode now provides separate characters for left- and right- quotes.
[131021510040] |You can read the full story, including Unicode code points for all involved characters and a history of the backtick+apostrophe convention, at: http://www.cl.cam.ac.uk/~mgk25/ucs/quotes.html
[131021510050] |Whether the use of backtick+apostrophe makes visual sense is all about fonts: TeX/LaTeX fonts indeed interpret backtick and apostrophe as left- and right- quote glyphs; the use of backtick+apostrophe for quoting is still commonplace in (ASCII-format) Emacs and TeXinfo documentation.
[131021510060] |I personally tend to adapt my quoting habits to the context very much:
[131021510070] |when a markup language is used (e.g., markdown here on SE sites), I use whatever markup syntax for monospaced font;
[131021510080] |when writing plain ASCII text, I tend to avoid using backticks as they have a special meaning to the shell and use single- or double- quotes to enclose command snippets.
[131021510090] |(Same quote character at both sides.)
[131021510100] |when writing LaTeX or Emacs docs, I use the backtick+apostrophe convention.
[131021520010] |Which BSD to start with?
[131021520020] |So I've been a Linux user since June 2008; my first distro was Ubuntu.
[131021520030] |I've tried OpenSuSE, Fedora, Mandriva, Linux Mint, Puppy Linux, Damn Small Linux and Arch Linux.
[131021520040] |And I was thinking about giving BSD a try.
[131021520050] |Which BSD variant should I choose?
[131021530010] |FreeBSD because it's the most user friendly.
[131021530020] |OpenBSD focuses too much on security to be truly useful to the average user.
[131021530030] |NetBSD's goal is to run on anything, but that doesn't make it user friendly.
[131021530040] |I can't really speak about any of them, though...
[131021530050] |But FreeBSD just sounds like a good, popular choice.
[131021540010] |Personally, I find OpenBSD a great BSD to start with.
[131021540020] |It's simple, installs a minimum level of packages by default, and has excellent documentation.
[131021540030] |Man pages are a good thing.
[131021540040] |The installer is fast and incredibly easy to use (no, it's not a gui).
[131021540050] |And once you have the base system up and running, the online FAQ has answers to pretty much any question you could imagine; it's not a wiki, the FAQ is written and maintained by the developers, and is kept up to date.
[131021540060] |Installing packages is just as easy as on other modern unix-like systems, and though they lag behind the latest/greatest, they are fully functional.
[131021540070] |I've been using it as my only desktop OS for years, and I find it's a great first unix like OS.
[131021550010] |The most mature one, though far from free, is Mac OS X.
[131021550020] |The low-level part is distributed for free as Darwin.
[131021560010] |If you are coming from Linux, you might give PCBSD a try.
[131021560020] |It is FreeBSD but with a focus on desktop polish and ease-of-use.
[131021560030] |FreeBSD has historically been a server OS.
[131021560040] |The PCBSD installer can also install FreeBSD.
[131021560050] |Many people find it easier to install FreeBSD using the PCBSD installer.
[131021560060] |Linux and BSD are pretty similar in that they share the bulk of the software that would run on either one of them.
[131021560070] |To a casual desktop user, the BSD desktop will not seem that different.
[131021560080] |Big differences are (in my opinion of course):
[131021560090] |Userland (Linux uses GNU while BSD uses BSD)
[131021560100] |Integration (Linux is a collection of different efforts, BSD is much more unified at the core)
[131021560110] |Packaging (Linux typically manages installed software in binary packages - BSD typically manages a "ports" tree that you use to build software from sources)