[131033770010] |What do dead.letter files do?
[131033770020] |I find dead.letter files from time to time in my $HOME
directory.
[131033770030] |What are they for?
[131033780010] |Either a program tried to send mail and failed (this is more likely), or you were in the middle of writing mail and broke out, so the client saved the draft in dead.letter.
[131033780020] |From the mail man page:
[131033780030] |Normally, when you abort a message with two interrupt characters (usually control-C), mail copies the partial letter to the file dead.letter in your home directory.
[131033790010] |Gnome panel missing application icons, chat bubble menu, and power menu
[131033790020] |I've been messing around with my system too much and messed something up.
[131033790030] |I'm new to Ubuntu, but have been using linux on servers for a few years.
[131033790040] |I'm not sure of the correct terminology so I'm including screen shots to explain what is going on.
[131033790050] |First, system specs:
[131033790060] |Ubuntu 10.04 LTS x64 (Lucid)
Core i7-970
Nvidia GTX 480
Dual screen with TwinView
Nvidia proprietary dev driver 260.24 (64-bit)
[131033790070] |Now what I screwed up:
[131033790080] |First major customization was ppa:goehle/goehle-ppa customizations for keeping evolution open after closing the main window.
[131033790090] |That worked fine until I started messing with getting hibernate working.
[131033790100] |I never got hibernate working even after installing linux-generic-tuxonice; it gave a warning about usb09 not stopping.
[131033790110] |The only things that I have in USB are a keyboard and mouse.
[131033790120] |Then I started getting the error:
[131033790130] |Trying to fix this, I reinstalled the Evolution customizations.
[131033790140] |The error persists and now the panel is messed up as well.
[131033790150] |I'm not getting the application icons, the menu with the chat status, or the shutdown/restart/lock screen menu.
[131033790160] |This is what it should look like:
[131033790170] |But this is what I'm getting now:
[131033790180] |How do I get my icons back?
[131033790190] |EDIT: I found how to get my application icons back.
[131033790200] |Right-click on panel
[131033790210] |Add to Panel ...
[131033790220] |Notification Area.
[131033790230] |I still have not figured out what the chat bubble menu and power menu are called.
[131033800010] |The Power thingy and the user chat bubble thingy are both the same applet called "Indicator Applet Session".
[131033810010] |The required package is "indicator-applet-me" (the chat bubble in the top right corner).
[131033820010] |How similar is Apple's terminal.app to a bash terminal on Linux?
[131033820020] |I know that Apple's Terminal.app provides a bash shell.
[131033820030] |Are there any differences between this and a bash on Linux?
[131033830010] |Terminal is a terminal emulator.
[131033830020] |It interprets various control sequences sent by programs (control characters like CR, LF, BS and longer control sequences for commands like “clear screen”, “move cursor up 3 lines”, etc.).
[131033830030] |Terminal is the same kind of program as xterm, rxvt, Konsole, or GNOME Terminal.
[131033830040] |Almost all modern terminal emulators support the “xterm” control sequences, so they are generally highly compatible (and most programs use the ncurses library and its terminfo database to abstract over the actual control sequences).
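As a quick illustration (a small sketch using standard xterm-style sequences, nothing specific to Terminal.app), you can send such control sequences yourself from a shell:

  # clear the screen and move the cursor to the top-left corner
  printf '\033[2J\033[H'
  # move the cursor up 3 lines
  printf '\033[3A'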
[131033830050] |bash is a shell.
[131033830060] |It interprets commands that usually involve running other programs.
[131033830070] |In normal, interactive use the shell’s input comes from a user via a terminal emulator.
[131033830080] |The terminal emulator and the shell are connected via a “pseudo tty” device (e.g. /dev/pts/24, or /dev/ttyp9).
[131033830090] |Because the tty devices are the only interface between Terminal and bash, they are completely independent.
[131033830100] |You can use bash with iTerm instead of Terminal, and you can use zsh instead of bash inside a Terminal window.
[131033830110] |The version of bash installed on your Mac OS X and Linux systems may be different, but it should be fairly easy to install pretty much whatever version of bash you want on either system.
[131033830120] |You might look at MacPorts, homebrew, or Fink for ways to install recent versions of bash (and other shells) on Mac OS X. Whatever Linux distribution you are using surely comes with packages for common shells.
[131033840010] |Mac OS uses standard releases of Bash.
[131033840020] |From systems I have easy access to:
[131033840030] |SLES 10.2
[131033840040] |GNU bash, version 3.1.17(1)-release (x86_64-suse-linux)
[131033840050] |SLES 11.0
GNU bash, version 3.2.49(1)-release (x86_64-suse-linux-gnu)
[131033840060] |Leopard (10.5.8)
[131033840070] |GNU bash, version 3.2.17(1)-release (i386-apple-darwin9.0)
[131033840080] |Snow Leopard (10.6.4)
[131033840090] |GNU bash, version 3.2.48(1)-release (x86_64-apple-darwin10.0)
[131033850010] |How can I debug a Suspend-to-RAM issue on Linux?
[131033850020] |I'm hoping to get experience-based suggestions on how to go about debugging suspend-to-RAM issue.
[131033850030] |Advice specific to my situation (detailed below) would be great, but I am also interested in general advice about how to debug such issues.
[131033850040] |The problem:
[131033850050] |Often, when I attempt to suspend my machine, it gets stuck in a "not suspended but not awake" state.
[131033850060] |Often the screen will be completely black but sometimes it will have the following error message on it:
[131033850070] |This state is also accompanied by the fans kicking into high gear.
[131033850080] |The only way to get it out of this state is to manually power off the laptop.
[131033850090] |Some Information
[131033850100] |I've taken a look at /var/log/dmesg and /var/log/pm-suspend.log, but I don't know what I'm looking for and nothing stands out.
[131033850110] |I'm unsure if it is related, but I did find a lot of the following in /var/log/kern.log:
[131033860010] |I have suspicions that the issue may be due to the BIOS not correctly reporting on what lowmem it really uses.
[131033860020] |By default this option is in effect:
[131033860030] |You can try setting that to larger values to make the memory corruption scanner examine a larger chunk of lowmem.
[131033860040] |Look for "memory_corruption_check_size" in http://lxr.linux.no/#linux+v2.6.35.7/Documentation/kernel-parameters.txt, etc.
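A minimal sketch of how you might enlarge that scan (this assumes a GRUB 2 setup such as Ubuntu's, and the 64K value is just an example):

  # /etc/default/grub: enable the BIOS corruption check and scan more lowmem
  GRUB_CMDLINE_LINUX="memory_corruption_check=1 memory_corruption_check_size=64K"
  # then regenerate the grub configuration and reboot
  sudo update-grub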
[131033860070] |I'd be interested in knowing what you find, if anything.
[131033870010] |My experience in working in this area was in Windows CE, rather than Linux.
[131033870020] |During the suspend/resume cycle, the OS will progressively shut down its own functionality, restricting your ability to get accurate, dependable information on what is going on using OS facilities.
[131033870030] |In addition, your monitoring connection can (e.g. if the issue is timing related) alter the outcome.
[131033870040] |Tools of preference start with a C/C++ debugger connection to the OS at the high end, and at the very low end, sending data down a serial port or to POST codes, or, on non-x86 hardware, a JTAG debugger or equivalent.
[131033870050] |The end result is long hours working out the code flow and finding the point when it behaves differently from normal behaviour.
[131033870060] |At that point, the fix is usually obvious.
[131033870070] |Keep good notes, and make one change at a time.
[131033870080] |It took 6 weeks to identify the power up problem we had with Windows CE.
[131033870090] |We had a PC104 processor board that we could power off for 10 or 60 seconds and power up with no problems.
[131033870100] |However if power was removed for 25 seconds, it would not power up.
[131033870110] |It turned out that we had enough capacitance to keep the DRAM contents intact with no power for around 20 seconds, so on a short power off cycle, Windows CE thought it was resuming from a suspended state.
[131033870120] |When all the memory was preserved, it would actually succeed performing a resume, when the memory was partially corrupt, it would get rather confused during the resume.
[131033870130] |Good luck.
[131033880010] |Do you have an Intel graphics chipset?
[131033880020] |I was getting what sounds like the same problem on my ThinkPad X200s running Ubuntu 10.10, and this workaround (from 2008!) fixed it for me: http://ubuntuforums.org/showpost.php?p=6105510&postcount=12
[131033890010] |How would I go about getting UPnP working on a Slackware server/firewall?
[131033890020] |I'm a long time fan of Slackware and I've always had a machine serving as my main server/firewall with the latest version installed.
[131033890030] |I have it now but I'm struggling to find information on how to setup UPnP on it.
[131033890040] |Can anyone please provide some good links where I can investigate further.
[131033890050] |Many thanks.
[131033900010] |I think you are looking for linux-igd.
[131033900020] |This project is a daemon that emulates Microsoft's Internet Connection Service (ICS).
[131033900030] |It implements the UPnP Internet Gateway Device specification (IGD) and allows UPnP-aware clients, such as MSN Messenger, to work properly from behind a NAT firewall.
[131033900040] |This works fine with iptables...
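As a rough sketch of how linux-igd is typically run once installed (the interface names here are assumptions for a typical two-NIC gateway; check your version's documentation for the exact invocation):

  # eth0 = external (internet-facing) interface, eth1 = internal LAN interface
  upnpd eth0 eth1
  # NAT/masquerading should already be in place, e.g.:
  iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE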
[131033910010] |Writing a makefile to install manual pages for a library
[131033910020] |If I have a C library, let's say "apple", and "apple" contains functions "banana" and "carrot", how do I write the "install" line in the makefile so that "man banana" brings up the manual page for the "apple" library?
[131033920010] |You want to copy banana.3 to /usr/share/man/man3/ or perhaps /usr/local/share/man/man3.
[131033920020] |Details on which directory to use depend on your build system and your users' configurations.
[131033920030] |You might want to consider Automake.
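As a rough sketch, the install rule in the makefile would typically run commands along these lines (the /usr/local prefix is an assumption; adjust it to your build system):

  install -d /usr/local/share/man/man3
  install -m 644 apple.3 banana.3 carrot.3 /usr/local/share/man/man3/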
[131033930010] |Make the banana.3 man page a symbolic link to the apple.3 page:
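A minimal sketch of that approach (assuming the pages are installed under /usr/local/share/man/man3):

  cd /usr/local/share/man/man3
  ln -s apple.3 banana.3
  ln -s apple.3 carrot.3

With those links in place, "man banana" will display the apple.3 page.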
[131033940010] |In SciTe (Scintilla), how can I create Syntax-Highlighting for my own custom language?
[131033940020] |I want to create a custom Language, with its own custom Syntax Highlighting.
[131033940030] |Notepad++ (a Windows SciTe/Scintilla-based text editor) allowed me to create a custom "Language", and now, in Linux, I want to reproduce the same thing.
[131033940040] |I need(?) to use SciTe/Scintilla because, unless someone knows otherwise, it is the only plain text editor which can display different size fonts in the same text file (e.g. default font = 12pt, comment font = 24pt).
[131033940050] |I used the comments font to display a complex script(alphabet) in a larger font.
[131033940060] |Please let me know if there is any other plain text editor which does this.
[131033940070] |I assume this is a feature of SciTe/Scintilla (and not of Notepad++).
[131033940080] |Some of the magic is possibly/probably(?) done in files such as: /usr/share/scite/.properties
[131033940090] |Notepad++ has a GUI interface to set up a new language/syntax, ...but I could use some direction on exactly how to go about it in SciTe.
[131033940100] |Thanks...
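For what it's worth, here is a rough sketch of the kind of entries a SciTE .properties file can carry to style a file type; the file pattern, the choice of an existing lexer, and the style number below are illustrative assumptions, not a tested configuration:

  # associate an (assumed) *.mylang extension with an existing lexer
  file.patterns.mylang=*.mylang
  lexer.$(file.patterns.mylang)=cpp
  # style 1 is the comment style in the cpp lexer; give comments a bigger font
  style.cpp.1=fore:#007F00,font:Monospace,size:24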
[131033950010] |This probably will not suit your request, but jEdit has an extensive configuration for language highlighting if you want an editor that will highlight your new language.
[131033960010] |I looked at it once; it's quite painful, because everything is defined statically in the C++ source code.
[131033960020] |You have to define a certain number of rules among the ones already existing in Scintilla, for things like... well, it's quite mangled.
[131033960030] |You can quickly search for "scintilla lexer" on Google, but you have to understand that syntax highlighters are very sophisticated to code; just look at Notepad++ and all its features: it's almost a code parser, like the ones found in compilers.
[131033960040] |On top of that, a Scintilla lexer can be made for any type of language.
[131033960050] |Another problem is that coloring the text has to be platform-agnostic, and again, I guess it favors Windows (duh); and don't forget the font renderer, which can be another issue.
[131033960060] |I'm not sure Scintilla has been ported to GNOME or KDE, has it?
[131033970010] |renaming files without the **rename** command
[131033970020] |Hi, I have a directory of files,
[131033970030] |Measurements Panama 2009-03-22 Session1.xml
[131033970040] |Measurements Panama 2009-03-22 Session2.xml
[131033970050] |Measurements Panama 2009-03-22 Session3.xml
[131033970060] |Measurements Panama 2009-03-22 Session4.xml
[131033970070] |...
[131033970080] |Measurements Panama 2009-03-22 Session10.xml
[131033970090] |and I want to remove the blank spaces and the "-" dash characters.
[131033970100] |I don't want to use the rename command because I don't know perl or regular expressions.
[131033970110] |From another post a recommendation for removing part of a file name was:
[131033970120] |This looks nice.
[131033970130] |I can't figure out what the "#" character does (I assume it removes the string "image" from the file names?); anyway, how do I remove the spaces and dashes?
[131033970140] |Best,
[131033970150] |I get an error that the files are not a directory when doing
[131033970160] |etc for all the files
[131033980010] |You can find information on the ${...} substitutions in your shell's man page, for example bash(1).
[131033980020] |The most common:
[131033980030] |${var#word} removes the shortest prefix: the value of variable var with the word prefix (if any) removed;
[131033980040] |${var%word} removes the shortest suffix: the value of variable var with the word suffix (if any) removed;
[131033980050] |${var/pattern/replacement} replaces the first occurrence of pattern with replacement;
[131033980060] |${var//pattern/replacement} replaces all occurrences of pattern with replacement.
[131033980070] |So, in your example ${f#image} expands to the value of f (e.g., image01.png) with the image prefix removed, so it yields the value 01.png.
[131033980080] |The word and pattern parts in the ${...} expansions are subject to the same wildcard patterns as filenames; therefore, if you want to remove spaces and -, you could use ${f//[ -]/} (this replaces any occurrence of a space or - character with the empty string).
[131033980090] |All details on the man page.
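To apply that to your files, a minimal sketch (run it in the directory containing the XML files; the // form works in bash, ksh93, and zsh, but not in plain POSIX sh):

  for f in *.xml; do
    mv -- "$f" "${f//[ -]/}"
  done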
[131033990010] |You might also use sed to build the new name.
[131033990020] |There it would be
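A sketch of that approach (the sed expression simply deletes spaces and dashes; assumes a bash-like shell):

  for f in *.xml; do
    mv -- "$f" "$(printf '%s\n' "$f" | sed 's/[ -]//g')"
  done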
[131034000010] |If you have the perl rename (e.g. because you're on Debian or Ubuntu), it is the simplest way of skinning this particular cat (as in an individual cat — there are plenty of tools for mass renaming, and searching the archives of this site and Super User should find all the major ones).
[131034000020] |That being said, your script would have worked if you had followed the most important shell programming principle: all variable substitutions must be double-quoted.
[131034000030] |(Why do you need this extra bit of syntax fluff?
[131034000040] |Because there are cases where you actually want the unquoted behavior.
[131034000050] |But these cases are rather rare.)
[131034000060] |With zsh, you wouldn't need to write a loop: you could use the convenient zmv function.
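For example, a sketch with zmv (assuming zsh; the pattern removes spaces and dashes from every .xml file name):

  autoload -U zmv
  zmv '(*.xml)' '${1//[ -]/}'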
[131034010010] |ZSH to auto-complete directories in command-line arguments?
[131034010020] |Transferred from regular StackOverflow...
[131034010030] |I'm a new ZSH convert on OSX (git status in the prompt was my main driver to make the leap).
[131034010040] |Everything seems to be working swimmingly but I'm missing something from BASH..
[131034010050] |In BASH I would start to run a CLI PHP script that's built around Symfony's CLI tools.
[131034010060] |Typically I would run the command like this:
[131034010070] |In BASH I would be able to use the TAB key while typing out the ../conf/doctrine.xml
portion to complete the directories.
[131034010080] |In ZSH hitting the TAB key while typing out the directory results in... nothing.
[131034010090] |Is there a setting in ZSH or a configuration I can set to enable this behavior of attempting to complete almost anything that looks like a directory?
[131034020010] |Zsh is attempting to be clever in choosing completions, but it's not clever enough to know that after doctrine --configuration=, it should complete a file name.
[131034020020] |Bash is either clever enough to parse this command correctly, or too stupid to complete anything but a file name here.
[131034020030] |You can write a completion function for doctrine.
[131034020040] |This is a little advanced, though — completion functions tend to be a little arcane.
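As a very rough sketch of what such a function might look like (this assumes doctrine only needs file completion after --configuration=; a real completer would cover the rest of its options):

  #compdef doctrine
  # put this in a file named _doctrine somewhere in your $fpath
  _arguments '--configuration=:configuration file:_files'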
[131034020050] |You may find it easier to write a completer using zsh's alternative, older, simpler, but less powerful completion system, compctl (documented in the zshcompctl man page).
[131034020060] |If you have a bash completion function for doctrine, you might be able to get zsh to read it by including autoload bashcompinit; bashcompinit in your ~/.zshrc.
[131034020070] |See Switching from bash on the zsh wiki.
[131034020080] |You may find it useful to bind a few keys to _bash_complete-word.
[131034020090] |This widget (interactive command) performs completion of a few built-in types, depending on the last character in the key sequence that invoked the widget: / for directories, $ for parameter names, etc.
[131034020100] |For example, include bindkey '^X/' _bash_complete-word in your ~/.zshrc, and press Ctrl+X / to complete a file name in any context (you might need to temporarily insert a space before the file name, if the file name is preceded by punctuation that is not a word separator in the shell).
[131034030010] |What is the bash shortcut to change to the previous directory?
[131034030020] |Sadly, I only learned about this last year by stumbling upon it randomly on the internet.
[131034030030] |I use it so infrequently that I always forget what it is by the time I need it again.
[131034030040] |How do you change to your previous directory?
[131034040010] |The shortcut is -
[131034040020] |Try cd -
[131034040030] |If you want to use this in your prompt, you have to refer to it with ~-.
[131034040040] |See the example:
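A short illustration (a plausible session, not the original answer's exact example):

  $ cd /var/log
  $ cd /tmp
  $ cd -          # back to /var/log; cd prints the directory it switched to
  /var/log
  $ ls ~-         # ~- expands to the previous directory ($OLDPWD)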
[131034050010] |You might also want to look at pushd and popd, which create a stack of directories to remember where you were.
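For instance, a quick sketch of how the directory stack works:

  $ pushd /etc        # remember the current directory, go to /etc
  $ pushd /var/log    # remember /etc, go to /var/log
  $ dirs              # show the stack
  $ popd              # back to /etc
  $ popd              # back to where you started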
[131034060010] |Good Introductory resources for linux
[131034060020] |I find myself spending a lot more time working with the web server at work than I have in other recent jobs.
[131034060030] |Are there any good tutorials or resources I can read to get myself more up to speed, so that I am not confused by basic things like 'how to search all files in a directory and below for a given string' or 'how to find out how much memory is allocated to php'?
[131034060040] |Basically, what should I know for daily interaction with a Linux system, and what are good resources to learn it?
[131034060050] |(I'd like this to be kind of a FAQ for good introductory resources to Linux as an OS, so server tips are good, but it shouldn't have to be server-only.)
[131034070010] |Specific things you might want to look into are:
[131034070020] |Shell scripting: being able to use bash is a must for anyone that's going to get intimate with the command line.
[131034070030] |Services: you will have to understand the services your webserver will be running.
[131034070040] |If you're running PHP and MySQL, you'll want to read about LAMP.
[131034070060] |As Falmarri says, solving individual problems when they arise will help you learn a lot quicker than studying a book or doing all the theory.
[131034070070] |If you need to know the unix basics, get an unused PC at home and play with it.
[131034070080] |Install and use distros that don't do everything for you; Arch Linux has the perfect installation wiki for this.
[131034070090] |Slackware is another good one.
[131034070100] |Also, to solve those individual problems, ask questions here on Unix SE or Serverfault :)
[131034080010] |A good general starting point for Linux administration is this book:
[131034080020] |Linux Administration Handbook (2nd Edition)
[131034080030] |It is about a lot of the basics and also has a chapter about web.
[131034080040] |Besides the points already mentioned, these things might come in handy:
[131034080050] |perl/python or another scripting language for automating tasks
[131034080060] |sed and awk are always useful (IBM has some good tutorials; search for "sed by example" and "awk by example")
[131034080070] |make yourself comfortable with the logs (webserver and system logs)
[131034080080] |cron and at are your friends for repeating and timed tasks
[131034080090] |monitoring is always useful (system + web). There are too many tools to give any advice here; just as a starting point: sar, nagios, cacti, ...
[131034080100] |For some more inspiration, take a look at these posts on serverfault:
[131034080110] |Recommend books/resources for this stack: linux, lighttpd, postgres, webpy
[131034080120] |Linux System Administrator Guide?
[131034080130] |What is the best resource for really understanding Linux deeply
[131034080140] |“Learning” Linux
[131034080150] |They will give you a lot starting points.
[131034080160] |Don't feel overwhelmed.
[131034080170] |Try to pick the topics you currently need the most to get the job done.
[131034090010] |You can read some of the various online linux-for-newbies resources, and they might be some help.
[131034090020] |Going through the documentation for your distribution is worthwhile — both Ubuntu and Fedora have teams producing professional-quality documentation, at https://help.ubuntu.com/ and http://docs.fedoraproject.org/ respectively.
[131034090030] |If you're a book learner, there's plenty of books.
[131034090040] |But the only real way to learn is to get your hands dirty.
[131034090050] |Therefore, I recommend setting up your system to dual-boot rather than just putting Linux in a VM.
[131034090060] |And then, boot into Linux and stay there even when it gets difficult or annoying or frustrating.
[131034090070] |(In fact, I might go so far as to say just put Linux on there as the primary OS — you can always put the other one back if need be.)
[131034090080] |When you get stuck, come back here (or to other similar sites, but, y'know, I recommend this one) and ask questions.
[131034090090] |Malcolm Gladwell has this thing called "the 10,000 hour rule", which he puts forth as the time you need to really master any particular skill.
[131034090100] |Of course, you can become competent with Linux (or many other things) far more quickly than that, but it really is about putting in the hands-on time.
[131034100010] |The Linux Documentation Project (TLDP) is a useful resource; some of the information is old, but lots of it is still very applicable.
[131034100020] |Especially useful is the Introduction to Linux - A Hands on Guide from the TLDP guides.
[131034110010] |How do I watch my webcams feed in linux
[131034110020] |In windows I can open "My Computer" and click on the "Webcam" icon to get a feed from my webcam.
[131034110030] |I can also take snapshots of that feed.
[131034110040] |Can I do the same in Ubuntu?
[131034110050] |Without installing any extra applications like Photobooth.
[131034120010] |Since you want an answer "without installing any extra applications like Photobooth," I've tried to give a solution that doesn't depend on very much.
[131034120020] |Also I'm assuming that your webcam uses "Video4Linux2" and that it is /dev/video0.
[131034120040] |From the command line:
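The pipeline was along these lines (a minimal sketch that works with GStreamer 0.10; the exact sink in the original answer may have differed):

  gst-launch v4l2src ! ffmpegcolorspace ! xvimagesink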
[131034120050] |Note that "v4l2src" contains a lowercase L and not the number 1.
[131034120060] |On your system the command may be gst-launch or something starting with gst-launch but with a different version number.
[131034120080] |This tool is in the gstreamer0.10-tools package on my Ubuntu system, which is a dependency of libgstreamer, which is a dependency of a large number of the apps on my Ubuntu system and is likely present in the default installation.
[131034120090] |Other Applications
[131034120100] |If you don't mind installing other applications, here is how you can do this in a few other applications.
[131034120110] |All of them can easily be installed via apt-get or another package manager of your choosing:
[131034120120] |VLC: $ vlc v4l2:///dev/video0
Also, you can do this from the VLC GUI by going to File->Open Capture Device
[131034120130] |mplayer: mplayer tv://device=/dev/video01
(from Stefan in the comments)
[131034120140] |Cheese: This is a photobooth-like app that is very simple to use.
[131034130010] |Linux distro installation on 64 bit processor
[131034130020] |Hi, I am trying to install the Ubuntu/Fedora 64-bit versions on my machine and they shout back saying my CPU does not support the x86_64 architecture and force me to use the i686 versions.
[131034130030] |I am currently running Windows 7 64 bit version on my laptop.
[131034130040] |The processor is : Intel Centrino Core 2 Duo CPU T6500 @ 2.10 GHz
[131034130050] |I am hoping this is the right place to ask this question.
[131034130060] |Why is it that even though I have a 64 bit CPU, I am unable to install Linux 64 bit OS?
[131034130070] |Ivar
[131034140010] |Montage together five gifs
[131034140020] |I have five gifs all with the same number of frames and the same framerate.
[131034140030] |Is it possible to montage these together into one gif, where each frame looks like a normal montage of static images built from the corresponding frame of each input gif?
[131034140040] |Furthermore, I would want the first two gifs on the first row, then a space, then the last three gifs.
[131034150010] |If I understood you correctly, you want one animated gif that looks like 5 animated gifs playing in parallel, right?
[131034150020] |Imagemagick can do that (and much more).
[131034150030] |Probably even in one line of code, but I'll do it in several steps.
[131034150040] |Lets assume your gifs are called anim1.gif
…anim5.gif
and are each 100x100 pixels.
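One way to do it, as a rough sketch (it explodes each animation into frames, montages the corresponding frames with a blank cell after the first two, and reassembles the result; the file names, the 3x2 layout with a null: spacer, and the -delay value are assumptions to adjust):

  # explode each animation into per-frame images (-coalesce handles optimized gifs)
  for i in 1 2 3 4 5; do
    convert "anim$i.gif" -coalesce "frames_${i}_%03d.png"
  done

  # montage frame N of every animation into one tile image;
  # null: inserts an empty cell so row 1 holds anim1, anim2, blank
  nframes=$(ls frames_1_*.png | wc -l)
  for ((n = 0; n < nframes; n++)); do
    f=$(printf '%03d' "$n")
    montage "frames_1_$f.png" "frames_2_$f.png" null: \
            "frames_3_$f.png" "frames_4_$f.png" "frames_5_$f.png" \
            -tile 3x2 -geometry +2+2 "montage_$f.png"
  done

  # reassemble the montaged frames into one animated gif
  convert -delay 10 -loop 0 montage_*.png combined.gif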
[131034150050] |The technique is described in more detail here (“Layered Composition“)
[131034150060] |The final result (with animation examples from the ImageMagick web page) looks like this:
[131034160010] |What's the best tool chain or single tool to transform a NTFS to ext[n] filesystem?
[131034160020] |I've decided to go Linux only, finally!
[131034160030] |This also means that I have a bunch of disks still under NTFS.
[131034160040] |I don't have spare space anywhere to transfer the files to and then just re-format the drive so I need a tool, or tool chain, to make it on the disk itself.
[131034160050] |I imagine I could do it with some patience like this:
[131034160060] |Defrag NTFS.
[131034160070] |Shrink NTFS partition some percent.
[131034160080] |Create ext[n] partition on left space.
[131034160090] |Copy some files until ext[n] is full.
[131034160100] |Shrink NTFS partition.
[131034160110] |Grow ext[n] partition.
[131034160120] |Copy files.
[131034160130] |and repeat 5, 6 and 7 until it's all transferred.
[131034160140] |It's a last-resort path if I can't find any tool or tool chain to do it automatically.
[131034170010] |You can do the described steps using gparted; however, I'd advise you to think twice about it.
[131034170020] |When fiddling with partitions one should be 100% sure to have a working and up-to-date backup ready because there is a low but significant chance that something goes wrong.
[131034170030] |When you have such a backup, it is probably easier to just reformat and copy the backup back to disk.
[131034170040] |If you don't have (which is risky in itself), I wouldn't take the risk.
[131034180010] |The problem is that you grow a file system from the end, not the beginning, so you can't really do what you're after.
[131034180020] |Your best approach is to copy the contents of the partition to another device, verify it, and then destroy the NTFS partition before re-creating it as ext3.
[131034180030] |Then you can restore the data.
[131034190010] |The best tool I found for resizing ntfs, ever, was partition magic, and unlike gparted it could move the partition on either side.
[131034190020] |Unfortunately it was discontinued when Symantec acquired PowerQuest, so it might be difficult to find a copy, and its ext support sucks (only because it hasn't been updated in years).
[131034190030] |I then recommend making your ext partitions before the ntfs partition, and use gparted to grow the ext partition, and partition magic to shrink the ntfs partition, from windows.
[131034200010] |From what you say I think you have more than one hard drive, and each might have more than one partition.
[131034200020] |This doesn't directly answer your question, but do you really have to convert them all?
[131034200030] |Linux handles NTFS quite well, so access to your old files will be no problem.
[131034200040] |You can also configure those partitions to be automounted easily.
[131034200050] |Using Linux with a permission-unaware filesystem has its advantages (especially if you use it alone and/or intend to setup a multiple-boot system).
[131034200060] |A typical Linux installation will need from 2GB to 5GB, and believe me a 5GB installation is rather full-featured.
[131034200070] |In your case it's easier to shrink a partition to make 10GB for Linux, and it doesn't even need to be at the beginning of your disk (shrinking the "right" of a partition is easier, faster, and less risky than its "left").
[131034200080] |The Ubuntu installation CD provides an option to do this automatically, although I prefer to prepare the disk myself with gparted.
[131034200090] |Backups are always recommended, but if you can't afford it (and are willing to take the risk) then the risk is quite small.
[131034210010] |In theory convertfs might work (in one shot).
[131034210020] |I'm not 100% sure though, this depends on the Linux NTFS driver being able to create a sparse file.
[131034220010] |I don't have spare space anywhere to transfer the files to and then just re-format the drive so I need a tool, or tool chain, to make it on the disk itself.
[131034220020] |As others have said, fiddling with partitions has a small but significant risk of data loss/corruption.
[131034220030] |If you've copied half your data to ext3 and then either the NTFS shrink or the ext3 enlargement has a problem, you lose half your data!
[131034220040] |The risk is small but the damage could be large.
[131034220050] |If your data is valuable to you you should already have backups.
[131034220060] |But shrinking and growing partitions without a backup is just asking for trouble.
[131034220070] |Hard drives are getting ever larger and cheaper.
[131034220080] |Go buy yourself a new one.
[131034220090] |(You're switching to Linux.
[131034220100] |You deserve it.)
[131034220110] |Unless you have a lot of data already you may be able to copy all your data onto the new drive.
[131034220120] |It might be wise to keep the old drives as a backup of your data.
[131034220130] |If you have some really small partitions it may also be useful to burn the data to a DVD.
[131034230010] |Step-by-step guide for installation of 2 different Linux OSs and Window OS - on the same computer
[131034230020] |Is there any existing step by step guide instructing how to install 2 different Linux OSs (say, Red Hat and SUSE), and Windows OS on the same machine?
[131034230030] |(When I tried it, I got entangled with the partition configuration, and I've heard from others that the secondary Linux has to be installed without its boot loader.)
[131034240010] |In my experience, always install Windows as the first OS; otherwise it will overwrite the boot loader of any previously installed OS.
[131034240020] |There are ways around it, but these just make it more complicated.
[131034240030] |After installing Windows, install your first Linux distribution.
[131034240040] |It will normally find your Windows installation and add it to its boot loader automatically, so you can dual-boot Windows and Linux.
[131034240050] |Now comes the second Linux distribution (the third OS). Some distributions find other distributions and will add them to their boot loader (I don't know it for sure for SUSE and Red Hat).
[131034240060] |Just try it during your installation.
[131034240070] |If all OSes are recognized, install the boot loader of this third OS; otherwise boot into your first Linux distribution and add the second one manually to its boot loader.
[131034250010] |First install Windows operating system
[131034250020] |Next install your 1st Linux, and after that your 2nd Linux.
[131034250030] |The following links will help you do it:
[131034250040] |http://ubuntuguide.org/wiki/Multiple_OS_Installation
[131034250050] |http://www.hentzenwerke.com/wp/installingmultiplelinuxdistributions_onasinglebox.pdf
[131034250060] |Study this too...
[131034250070] |http://www.linux.org/docs/ldp/howto/MultiOS-HOWTO-6.html
[131034260010] |My experience: Knowledge: there can be only 4 primary partitions on a disk.
[131034260020] |Make 2 NTFS partitions: one for the Windows OS and one for common data.
[131034260030] |Install Windows first.
[131034260040] |Install Linux; make 2 Linux partitions: 1 ext4 and 1 swap for the Linux installation.
[131034260050] |Install ntfs-3g to access NTFS from your Linux.
[131034260060] |My example :
[131034260070] |Laptop 500Gb :
[131034260080] |C: 50GB for the Windows OS [NTFS]
[131034260090] |D: 436GB for common data [NTFS]
[131034260100] |ext4: 10GB for the Arch root, and the remaining 4GB for the swap partition
[131034260110] |Note: you shouldn't try to access Linux partitions from Windows; it doesn't have any software that can access ext4, and many errors appear when you try to do that.
[131034260120] |In Linux, ntfs-3g works smoothly :)
[131034270010] |Ok, here are some steps:
[131034270020] |First of all, prepare your hard disk.
[131034270030] |I use the parted live cd for that.
[131034270040] |So you don't have to worry about partitioning while you're installing the distributions.
[131034270050] |-
[131034270060] |Use the following layout: one primary Windows NTFS partition; one primary Linux partition of ~200 MB for /boot (ext2 or 3); two primary Linux partitions (ext4); one logical partition for swap.
[131034270070] |The swap partition should be twice the size of your RAM.
[131034270080] |Install Windows
[131034270090] |Install the first Linux Distribution, with grub as bootloader
[131034270100] |Install the second Linux Distribution, don't install a bootloader
[131034270110] |Boot into the first Linux distribution, edit /boot/grub/menu.lst, and add the second Linux distribution.
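An entry for the second distribution might look roughly like this (the device, kernel version, and paths are assumptions; adjust them to the actual installation):

  title   Second Linux distribution
  root    (hd0,2)
  kernel  /boot/vmlinuz-2.6.35-22-generic root=/dev/sda3 ro quiet
  initrd  /boot/initrd.img-2.6.35-22-generic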
[131034270120] |-
[131034270130] |That's it.
[131034270140] |Windows should be added to grub's menu.lst automatically.
[131034270150] |If you want to install 2 distributions that name their kernels the same way, you'll have to reboot into the first Linux distro before installing the second one, rename your kernel to something else, and change the menu.lst file to match.
[131034280010] |SSH getting disrupted intermittently
[131034280020] |I have Red Hat Linux Enterprise Edition 5 on my box.
[131034280030] |Recently I started having issues with my SSH service not keeping my connection and randomly disconnecting my clients.
[131034280040] |Ping works perfectly fine, except that port 22 is blocked randomly.
[131034280050] |Has anyone faced this issue?
[131034280060] |What is the solution?
[131034290010] |The problem is likely an intervening firewall with a timeout.
[131034290020] |You can try telling ssh to keep the connection active by appending to ~/.ssh/config:
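The relevant stanza is something like this (a common keep-alive setting; the 60-second interval is just a typical value):

  Host *
      ServerAliveInterval 60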
[131034290030] |It could also be intermittent failures of the network.
[131034290040] |From the sshd_config man page regarding TCPKeepAlive (on by default):
[131034290050] |this means that connections will die if the route is down temporarily, and some people find it annoying.
[131034300010] |Is it possible to mount an NFS partition with a label?
[131034300020] |I export 3 directories via NFS (pisces:/media/music, pisces:/media/video, pisces:/media/photo) that I mount individually.
[131034300030] |Currently, they all appear on my (Ubuntu) Gnome desktop as 'pisces', which is less than useful.
[131034300040] |The doc seems to indicate that the -LABEL switch isn't supported for NFS mount; is there some other way of labeling the mounts?
[131034300050] |The directories are mounted using /etc/fstab (unfortunately I don't have access to that machine right now, and can't remember the options; it is pretty close to the defaults though).
[131034300060] |ta, -- peter
[131034310010] |What are the fundamental differences between the mainstream *NIX shells?
[131034310020] |What are the fundamental differences between the mainstream *NIX shells and what scenarios might prompt you to use one over the other?
[131034310030] |I understand that some of it probably comes down to user preference but I've only ever used bash and I'm interested to hear where another shell might be useful.
[131034310040] |Also, is there an impact on user-written shell scripts when running under one shell or another or is it simply a matter of changing the shell at the top of the file?
[131034310050] |My instinct says it's not that easy.
[131034320010] |There are two basic flavors of shell, sh (e.g., bash) and csh (e.g., tcsh).
[131034320020] |For interactive use, it mostly comes down to what you are used to.
[131034320030] |I've used csh, and then tcsh, for years and it would be painful to switch, just because I'm so used to it.
[131034320040] |I've used bash as well, and I don't think there are any compelling reasons to switch.
[131034320050] |Except maybe if one or the other is not available on machines that you regularly use.
[131034320060] |For programming, the syntax is different.
[131034320070] |You can't just change the shell, but need to change the syntax of the script as well.
[131034320080] |For scripting, you want to use sh or bash.
[131034320090] |The syntax is much more amenable to scripting, as explained here (thanks to Riccardo Murri for the link).
[131034320100] |There is a good guide on bash scripting.
[131034320110] |If you haven't decided on a shell, and you expect to write some scripts, I would use bash just to reduce the amount of things you need to learn.
[131034330010] |The two main branches of shells are the Bourne shell derivatives (sh, bash, ksh, ash and zsh) and the csh derivatives (tcsh and...uhm...tcsh).
[131034330020] |I suspect (though I have no actual numbers) that bash is the most widely used; it seems to be the default shell in most Linux distributions.
[131034330030] |Most things written in one bourne shell derivative will probably work in others.
[131034330040] |Most things written in a Bourne shell will probably need to be modified to run under csh or tcsh.
[131034330050] |Personally I used ksh when I started out because that's what was on the system I was using.
[131034330060] |I mainly use bash now.
[131034340010] |Back in the old days, when AT&T invented UNIX, there was Bourne Shell, written by Steve Bourne.
[131034340020] |It was pretty basic, and lacked a lot of tools we take for granted nowadays.
[131034340030] |AT&T wasn't really in the UNIX business, so at this time the very basic OS was adopted somewhat by Berkelely, and they made some changes into BSD UNIX.
[131034340040] |Among many changes, was a new shell, called csh, which had a lot of improvements over sh, including job control better interactive use and so on.
[131034340050] |Unfortunately, they decided the sh programming syntax sucked and created their own, (somewhat badly) copied from C coding styles.
[131034340060] |(A classic rant is http://www.faqs.org/faqs/unix-faq/shell/csh-whynot/) So now there were two syntaxes.
[131034340070] |Later, they made improvements to CSH adding tab completion and some other things.
[131034340080] |This became tcsh, and if you use CSH, this is probably the one you use.
[131034340090] |AT&T decided it wasn't totally out of the UNIX business, and they polished it up too.
[131034340100] |David Korn (nice guy) created the Korn shell.
[131034340110] |Based on the idea of extending Bourne shell syntax, it added a lot of things for both programmers and interactive use.
[131034340120] |There are actually a few versions, and you may rarely see things like ksh88 and ksh93, denoting the variants.
[131034340130] |Then came FSF and the GNU OS.
[131034340140] |They wanted to make their own UNIX-compatible OS named the Hurd, and wanted a better shell for it.
[131034340150] |They called it bash, for Bourne Again SHell.
[131034340160] |POSIX rules came in just around this time, and they wanted to make the POSIX shell.
[131034340170] |They looked around, taking the syntax from Bourne shell and the improvements from Korn shell, plus stealing and extending the interactive features from tcsh.
[131034340180] |It became the de facto shell on Linux, so it's very common.
[131034340190] |There's also the zsh, written to be the 'ultimate' shell.
[131034340200] |It's also very common in the Linux world.
[131034340210] |It extended bash (and cross pollinated a bit, some new things went back to bash).
[131034340220] |If I were to pick a shell, I'd pick bash or zsh. bash is possibly in a few more places than zsh; zsh is more powerful, but bash has been fine for me.
[131034340230] |Real /bin/sh Bourne shell is around just for historical reasons. bash has pretty much all that ksh has to offer and more.
[131034340240] |The syntax is cleaner than csh or tcsh, and has better features than either one of them.
[131034340250] |To convert a script depends on from what to what.
[131034340260] |Bourne shell style (sh, ksh, bash, zsh) to or from csh style (csh, tcsh) will be hard.
[131034340270] |Going from old to newer (/bin/sh => bash, /bin/ksh => zsh) will be easier than the other way.
[131034350010] |For interactive use, there are two main contenders, bash and zsh, plus the straggler tcsh and the newcomer fish.
[131034350020] |Bash is the official shell of the GNU project and the default shell on most Linux distributions.
[131034350030] |On other unices that don't ship with a decent interactive shell as part of the base installation, I think bash is what people tend to choose, in a self-reinforcing “bash is everywhere so I'll use it too” loop.
[131034350040] |See also Why is bash everywhere? (with a lot of historical information).
[131034350050] |Zsh has almost every feature of bash and many more (useful!) features.
[131034350060] |Its main downside is being less well-known, which as a practical matter means you're less likely to find it already installed on a system someone else set up and there is less third-party documentation about it.
[131034350070] |See also What zsh features do you use?, What features are in zsh and missing from bash, or vice versa?.
[131034350080] |Tcsh was once (in the 1980s) or so the shell with the best interactive features, like its predecessor csh.
[131034350090] |Zsh caught up with tcsh and fairly quickly improved further, and bash caught up (with programmable completion) in the early 2000s. Therefore there is little reason to learn tcsh now.
[131034350100] |Fish tries to be cleaner than its predecessors.
[131034350110] |It has some neat features (simpler syntax, syntax coloring on the command line) but lacks others (whatever the author doesn't like).
[131034350120] |The fish community is a lot smaller than even zsh's, making the effects even more acute.
[131034350130] |See also What are the differences between fish and zsh ?.
[131034350140] |For scripting, there are several languages you might want to target, depending on how portable you want your scripts to be.
[131034350150] |Anything that pretends to be unix-like has a Bourne-derived shell as /bin/sh.
[131034350160] |There are still some commercial unices around where /bin/sh is not POSIX compliant.
[131034350170] |Almost every now-running unix has an sh executable that is compliant with at least POSIX.2-1992 and usually at least POSIX:2001 a.k.a. Single Unix v3.
[131034350180] |This shell might live in a different directory such as /usr/bin/posix or /usr/xpg6/bin.
[131034350190] |POSIX emulation layers also exist for just about every system that's powerful enough to support it, making it an attractive target.
[131034350200] |Many unix systems have ksh93, which brings some very useful features that POSIX sh lacks (arrays, associative arrays, extended globs (*(foo), @(foo|bar), …), null globs (~(N)foo*), …).
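For a quick taste of those features, a small sketch (the extended globs work in ksh93 and in bash after shopt -s extglob; associative arrays need ksh93, zsh, or bash 4+):

  typeset -A port            # associative array
  port[http]=80
  port[ssh]=22
  echo "${port[ssh]}"

  ls -d @(foo|bar).txt       # extended glob: exactly foo.txt or bar.txt
  ls -d !(*.bak)             # everything except *.bak files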
[131034350210] |Ksh was initially commercial software (it became free in 2000, after some habits had set), and many free unices (Linux, *BSD) got into the habit of only providing a much older free clone (http://web.cs.mun.ca/~michael/pdksh/) lacking many of these useful features.
[131034350220] |Today, you can't count on ksh93 being available everywhere, especially on Linux where bash is the norm.
[131034350230] |Bash is always available on Linux (except some embedded variants) and often on other unices.
[131034350240] |It has most of ksh93's useful features, though sometimes with a different syntax.
[131034350250] |Zsh has most of ksh93 and bash's useful features.
[131034350260] |Its core syntax is cleaner but incompatible with Bourne.
[131034350270] |Don't count on zsh being available on a system you didn't install.
[131034350280] |For more advanced scripting, you can turn to Perl or Python.
[131034350290] |These languages have proper data structures, decent text manipulation features, decent process combination and communication mechanisms, and tons of available libraries.
[131034350300] |Most unix systems have them, either bundled with the OS or installed by the administrator (because there are so many Perl and Python scripts out there that it's a rare system that doesn't have at least one of each).
[131034360010] |What makes OSX programs not runnable on Linux?
[131034360020] |I know there are many differences between OSX and Linux, but what makes them so totally different, that makes them fundamentally incompatible?
[131034370010] |Why OSX applications won't run natively on linux:
[131034370020] |First of all OSX uses a different binary format than Linux, so Linux can't execute binaries compiled for OSX (the same way it can't execute binaries compiled for Windows or BSD).
[131034370030] |Second of all, if you're talking about GUI applications, Apple's GUI toolkit Cocoa a) is only available for OSX and b) does not run on top of X11.
[131034370040] |Why there is no equivalent of wine for OSX applications:
[131034370050] |A lot of work had to be done before wine was even halfway usable.
[131034370060] |Since there is not as much demand for an OSX equivalent, no one has invested the same amount of effort into such a project yet.
[131034380010] |The whole ABI is different, not just the binary format (Mach-O versus ELF) as sepp2k mentioned.
[131034380020] |For example, while both Linux and Darwin/XNU (the kernel of OS X) use sc on PowerPC and int 0x80/sysenter/syscall on x86 for syscall entry, there's not much more in common from there on.
[131034380030] |Darwin directs negative syscall numbers at the Mach microkernel and positive syscall numbers at the BSD monolithic kernel — see xnu/osfmk/mach/syscall_sw.h and xnu/bsd/kern/syscalls.master.
[131034380040] |Linux's syscall numbers vary by architecture — see linux/arch/powerpc/include/asm/unistd.h, linux/arch/x86/include/asm/unistd_32.h, and linux/arch/x86/include/asm/unistd_64.h — but are all nonnegative.
[131034380050] |So obviously syscall numbers, syscall arguments, and even which syscalls exist are different.
[131034380060] |The standard C runtime libraries are different too; Darwin mostly inherits FreeBSD's libc, while Linux typically uses glibc (but there are alternatives, like eglibc and dietlibc and uclibc and Bionic).
[131034380070] |Not to mention that the whole graphics stack is different; ignoring the whole Cocoa Objective-C libraries, GUI programs on OS X talk to WindowServer over Mach ports, while on Linux, GUI programs usually talk to the X server over UNIX domain sockets using the X11 protocol.
[131034380080] |Of course there are exceptions; you can run X on Darwin, and you can bypass X on Linux, but OS X applications definitely do not talk X.
[131034380090] |Like Wine, if somebody put the work into
[131034380100] |implementing a binary loader for Mach-O
[131034380110] |trapping every XNU syscall and converting it to appropriate Linux syscalls
[131034380120] |writing replacements for OS X libraries like CoreFoundation as needed
[131034380130] |writing replacements for OS X services like WindowServer as needed
[131034380140] |then running an OS X program "natively" on Linux could be possible.
[131034380150] |Years ago, Kyle Moffet did some work on the first item, creating a prototype binfmt_mach-o for Linux, but it was never completed, and I know of no other similar projects.
[131034380160] |(In theory this is quite possible, and similar efforts have been done many times; in addition to Wine, Linux itself has support for running binaries from other UNIXes like HP-UX and Tru64, and the Glendix project aims to bring Plan 9 compatibility to Linux.)
[131034390010] |How do free software companies make money?
[131034390020] |How do companies that provide free (as in beer) software make money?
[131034390030] |I'm thinking of things like Linux distros, as some even provide free overseas shipping!
[131034400010] |In short, distros like Ubuntu make money by providing companies with 24/7 support packages.
[131034400020] |So the money that is usually spent on software can now go to the guys keeping the software running... which in this case is Canonical...
[131034410010] |Red Hat is worth over a billion dollars these days.
[131034410020] |Yes, they make money.
[131034410030] |By doing consulting, offering support, providing training etc.
[131034410040] |That said, there's not a lot of open source companies that actually make money.
[131034410050] |Canonical certainly doesn't (yet).
[131034410060] |Novell is in a patch of bad weather.
[131034410070] |Mandriva is always in a patch of bad weather.
[131034410080] |Zarafa is relatively new and small.
[131034410090] |On the other hand, ask yourself whether there needs to be a single company offering something.
[131034410100] |Companies like IBM, Oracle, Red Hat, Novell, Intel, AMD, Fujitsu, Dell, HP, QLogic and a whole lot of others work together on the kernel.
[131034410110] |They do not all make money solely on 'selling' that kernel or support to it, but they sure as hell make money.
[131034410120] |The difference between (companies like) Microsoft and (companies like) Novell or Red Hat is that the latter are able to provide value on top of a commodity, whereas (companies like) Microsoft can only make money by making sure that what they are selling never becomes a commodity.
[131034410130] |That's why Microsoft is scared shitless about open standards.
[131034410140] |Same goes for Apple.
[131034410150] |Open standards are not cool if your business is to tie people to your product.
[131034410160] |Open standards are very cool if you can provide something (support, consulting) on top of an open, standardized commodity platform.
[131034410170] |That is how it works :)
[131034420010] |Except for a company here or there, most "free" software companies do not make money and end up gutting the investment made.
[131034420020] |The "free" software community wastes computer time and resources of the universities where they hang around for morsels and sex.
[131034420030] |The rest of the "free" folks are all "consultants" and some more "consultants"!
[131034420040] |Most people here don't want to admit it, but that's the fact.
[131034430010] |MySQL is another free software project that could not make any money in its original form!
[131034430020] |Dual licensing, a "free software license" and a "closed proprietary license EULA", is what saved Widenius and Axmark's MySQL project!
[131034430030] |Come on guys, grow up and don't try to fool the world!
[131034440010] |Ubuntu Desktop lost
[131034440020] |On my Ubuntu 10.10 system, after configuring an external CRT to clone my desktop over S-Video, I've lost my desktop icons, and right-clicking on the desktop does not show the menu or the menu bar.
[131034440030] |I've tried to resolve this with following commands:
[131034440040] |Using the following sequence of commands I've gotten the main menu back, but nothing I have found so far will get back my desktop:
[131034440050] |Have you any idea how to resolve my problem?
[131034440060] |Thanks in advance.
[131034450010] |Since the application responsible for drawing the desktop is nautilus, not gnome-panel, you might have more luck looking at nautilus' settings.
[131034450020] |Specifically, if the gconf key apps/nautilus/preferences/show_desktop is set to false, nautilus won't show any desktop icons.
[131034450030] |So if the key is set to false, you should change it to true.
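For example, you can check and set the key from a terminal (assuming the GConf command-line tool, which ships with a stock GNOME 2 desktop):

  gconftool-2 --get /apps/nautilus/preferences/show_desktop
  gconftool-2 --type bool --set /apps/nautilus/preferences/show_desktop true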
[131034450040] |If that's not the case, you might try to backup and delete your nautilus settings same way you did with your gnome-panel settings.
[131034460010] |Restarting nautilus may work.
[131034460020] |It should respawn automatically when killed.
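A sketch of the restart (the session manager respawns nautilus, so killing it is usually enough):

  killall nautilus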
[131034460030] |Nautilus will immediately respawn, hopefully with your icons showing.
[131034470010] |What is the equivalent of Active Directory on Linux
[131034470020] |I have a couple of machines at home (plus a number of Linux boxes running in VMs) and I am planning to use one of them as a centralized file server.
[131034470030] |Since I am more a Linux user than a sysadmin, I'd like to know: what is the equivalent of, let's say, "Active Directory"?
[131034470040] |My objective is to have my files available on any of the machines that I log on to in my network.
[131034480010] |You either build your own Active Directory-equivalent from Kerberos and OpenLDAP (Active Directory basically is Kerberos and LDAP, anyway) and use a tool like Puppet (or OpenLDAP itself) for something resembling policies, or you use FreeIPA as an integrated solution.
[131034480020] |There's also a wide range of commercially supported LDAP servers for Linux, like Red Hat Directory Server.
[131034480030] |RHDS (like 389 Server, which is the free version of RHDS) has a nice Java GUI for management of the directory.
[131034480040] |It does neither Kerberos nor policies though.
[131034480050] |Personally, I really like the FreeIPA project and I think it has a lot of potential.
[131034480060] |I don't think there is a commercial solution based on it (anymore) though.
[131034480070] |That said, what your are asking about is more like a fileserver solution than an authentication solution (which is what AD is).
[131034480080] |If you want your files on all machines you log into, you have to set up an NFS server and export an NFS share from your fileserver to your network.
[131034480090] |NFSv3 has IP-range-based ACLs; NFSv4 can do proper authentication with Kerberos and combines nicely with the authentication options I described above.
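A minimal sketch of such an export (the path and subnet are assumptions for a typical home LAN):

  # /etc/exports on the file server
  /srv/files  192.168.1.0/24(rw,sync,no_subtree_check)
  # then reload the export table
  exportfs -ra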
[131034480100] |If you have Windows boxes on your network, you will want to setup a Samba server, which can share out your files to Linux and Windows boxes alike.
[131034480110] |Samba3 can also function as an NT4 style domain controller, whereas the upcoming Samba4 will be able to mimic a Windows 2003 style domain controller.
[131034490010] |If you're really just trying to share files from one server to a few other machines, you may just want to use something simpler like Samba (especially if you're interoperating with some Windows clients) or NFS shares.
[131034500010] |I have tried OpenLDAP and Samba 3.x and both won't give you the centralized authentication that you are looking for.
[131034500020] |As wzzrd said, Samba 4.x probably will give you that.
[131034500030] |Samba 3.x domain controller is more like a workgroup option.
[131034500040] |You still need to create users in Unix/Samba as well as Windows and then map them.
[131034500050] |In the end I removed OpenLDAP and use only Samba now.
[131034510010] |If you just want centralised authentication, look at NIS or NIS+ (formerly known as Yellow Pages, which is why all the commands begin with 'yp').
[131034510020] |Configure your main server as the master NIS server, then configure all the other boxes to use NIS to authenticate users.
[131034510030] |The wikipedia page for NIS is here: http://en.wikipedia.org/wiki/Network_Information_Service and the Linux NIS Howto is here: http://www.tldp.org/HOWTO/NIS-HOWTO/
[131034510040] |For a basic home network NIS will be fine.
[131034510050] |If you need more control over which users can see which servers, you'll need to use NIS+.
[131034520010] |How to get a persistent "history"-file even after a non-clean shutdown?
[131034520020] |I am trying Ubuntu Server 10.10 in VirtualBox.
[131034520030] |I like that the commands I run are saved in a "history" file that can be shown with the history command.
[131034520040] |But if I just turn off VirtualBox without the shutdown command, then the next time I boot, the commands from the last session are not saved in the history file.
[131034520050] |The commands are only saved if I shut down the machine with the shutdown command.
[131034520060] |E.g. shutdown -P 0.
[131034520070] |Is there any way I can save the history file after every command so they are persistent even after a "non-clean" shutdown?
[131034530010] |You should always use shutdown.
[131034530020] |You can add this to your ~/.bashrc file:
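The exact line was lost here; the usual way to achieve what is described below is:

    PROMPT_COMMAND='history -a'   # append the session's new history lines to ~/.bash_history after each command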
[131034530030] |This will append the in-memory history to your history file after each command is completed.
[131034540010] |NTFS Junctions and Unix Hard Links
[131034540020] |I'm building an IO abstraction library for CIL code, and I have a very hard time understanding the differences between a junction between two folders in NTFS and hard links.
[131034540030] |Are they equivalent?
[131034540040] |Are there differences I should know about?
[131034550010] |NTFS junction points are a type of reparse point, effectively acting as symbolic links but restricted to targeting local directories by absolute path.
[131034550020] |Suppose you have
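The example was lost here; presumably it showed a junction from C:\b\c to C:\a, created with something like the Sysinternals junction tool, or with mklink /J on Vista and later:

    C:\> mklink /J C:\b\c C:\a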
[131034550030] |What happens is that within the directory table for C:\b, a directory entry named c is inserted, with an attribute $Reparse containing data specifying that it is a junction point with target C:\a.
[131034550040] |When Windows 2000 (the release in which reparse points and junction points were introduced) encounters a reparse point, it gets handed off to the appropriate handler.
[131034550050] |In this case, when accessing a path below C:\b\c, the handler for junction points would replace the path C:\b\c with C:\a, and normal filesystem operations would continue from there.
[131034550060] |Other file system filters can be installed which intercept and handle other types of reparse points; Windows Vista, Server 2008, and later come with a handler for "symlink" reparse points on NTFS, which can point to a file or directory, absolute or relative, local or remote - pretty much like symlinks on other systems.
[131034550070] |Separately, NTFS does have support for hardlinks, in much the same manner UNIX does - multiple directory entries can point to the same "inode", which is the actual file data.
[131034550080] |This has nothing to do with reparse points.
[131034550090] |On almost all systems, hard links can only be made to files; hardlinking directories is fraught with danger.
[131034550100] |(Among other things, what should the .. entry of a hardlinked directory point to?)
[131034560010] |How do I quit from Vi?
[131034560020] |I started Vi on my Ubuntu machine.
[131034560030] |However, I don't know Vi, and now I cannot quit.
[131034560040] |I see the editor and I can write text; at the bottom there is a label "recording".
[131034560050] |How do I quit from Vi?
[131034570010] |vim is a modal editor.
[131034570020] |Hit the ESC key to get into Normal (command) mode, then type :q and press Enter.
[131034570030] |To quit without saving any changes, type :q! and press Enter.
[131034570040] |See also Getting out in Vim documentation.
[131034580010] |As Sinan said, vim is a modal editor.
[131034580020] |If you want to know whether that works for you, you should maybe invest some time and run vimtutor, which is an interactive way to learn vim.
[131034580030] |(It also covers how to exit, what the modes mean and what you can do in each mode).
[131034590010] |I use ctrl+[ to generate the escape sequence; this keeps me from having to move my fingers from the home row (remember, the Esc key was in a different place when vi was invented). :wq will write the file regardless of whether anything has changed.
[131034590020] |I suggest using ZZ (which is shift+z twice), which will only write if a change has been made to the file.
[131034590030] |Also, :xa is the same as ZZ, except when you have more than one file open in the editor instance (such as vim tabs). Note: I'm not sure all of this is 100% compatible with all vi clones, but I know it works with vim.
[131034600010] |How do you use badblocks?
[131034600020] |I can never remember the specific incantations for using badblocks, and apparently google isn't much help either.
[131034600030] |Yes I could read the man page but I remember it's like 4 options I have to use to get it to work the way I want.
[131034600040] |I need to do a destructive (rw) test on a new drive, and a read only on a drive that fell out of my raid array.
[131034600050] |I want to see the output if it finds problems and how far along it is.
[131034610010] |Let /dev/sda be the new drive on which to run the destructive read-write test, and /dev/sdb the old drive where you want a non-destructive read-only test.
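The actual commands aren't shown above; based on the options described below, they were presumably along these lines:

    badblocks -wsv /dev/sda    # destructive read-write test on the new drive
    badblocks -sv /dev/sdb     # read-only test (the default mode) on the old drive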
[131034610020] |-s gives the progress indicator
[131034610030] |-v gives verbose output
[131034610040] |-w enables the destructive read-write test
[131034610050] |-n would be a non-destructive read-write test
[131034610060] |Read-only testing is the default and doesn't need special parameters.
[131034620010] |How come I installed Ubuntu 64 bit on a Pentium 4 machine?
[131034620020] |Hi,
[131034620030] |I have just tried booting the Ubuntu 10.10 64 bit live USB on this machine and to my amazement everything works fine.
[131034620040] |I even installed the system, after which I checked with uname -a, and the result is
[131034620050] |Linux T205-04 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:32:27 UTC 2010 x86_64 GNU/Linux
[131034620060] |This is quite confusing to me.
[131034620070] |To my knowledge Pentium 4 is 32 bit only.
[131034620080] |How was that possible?
[131034620090] |Below is the result of cat /proc/cpuinfo (there are 2 CPUs, but the information is the same)
[131034630010] |Apparently there were some 64-bit Pentium 4 chips made.
[131034630020] |Check your processor with cat /proc/cpuinfo
[131034640010] |From Wikipedia: “In 2004, the initial 32-bit x86 instruction set of the Pentium 4 microprocessors was extended by the 64-bit x86-64 set.”
[131034640020] |Your /proc/cpuinfo output shows flags: … lm ….
[131034640030] |The flag lm stands for "long mode", which means the 64-bit extension.
[131034640040] |Thus, you have indeed a 64-bit processor.
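A quick way to check this on any box (the lm flag is what matters):

    grep -qw lm /proc/cpuinfo && echo "64-bit capable"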
[131034650010] |How to upgrade PHP to 5.3 on Debian 5.0 (Lenny)?
[131034650020] |I currently have PHP 5.0 installed on my Debian VPS and was wondering how I would be able to upgrade it to PHP 5.3 and keep all of my installed modules running.
[131034660010] |You might have a look at dotdeb.
[131034660020] |They have Debian packages for Debian-based LAMP servers and offer among others packages for PHP 5.3.
[131034670010] |The unstable repositories currently contain PHP version 5.3.3-2.
[131034670020] |Using a test environment, add unstable to your sources.list and just try:
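The command was lost here; presumably something like this (the package names depend on which PHP packages you have installed):

    apt-get update
    apt-get -t unstable install php5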
[131034670030] |In my previous experience, it works very well.
[131034670040] |If you're using non-standard modules, check for compatibility before trying to upgrade.
[131034680010] |The staging area for the next Debian release, Squeeze, has had PHP 5.3 since early 2010.
[131034680020] |So, add squeeze to your '/etc/apt/sources.list', then:
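The commands were lost here; a dry run along these lines is probably what was meant (-s only simulates):

    apt-get update
    apt-get -s install php5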
[131034680030] |Check the output, ensure important packages won't be deleted and system libraries won't be upgraded (or keep it to a minimum), and if things look ok:
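Presumably followed by the real install (again, the package name is an assumption):

    apt-get install php5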
[131034680040] |Of course this should be a temporary measure.
[131034680050] |That is, remove that squeeze line from 'sources.list' when you are done installing that PHP.
[131034690010] |If you haven't checked already, you should check if the package that you want is in backports before adding testing or unstable to your sources.
[131034690020] |If it isn't, you'll want to be very careful about pulling in packages from other releases; they might also want to upgrade many other things on the system.
[131034700010] |Has the GNU coreutils stat tool changed?
[131034700020] |Has the GNU coreutils (8.6) version of stat changed?
[131034700030] |After a recent update, some scripts I wrote that call stat are not working as expected; in particular, I had been making use of stat -c %Y "${filename}".
[131034700040] |This is supposed to return the time of last modification in the form of seconds since "Epoch".
[131034700050] |However, I could have sworn this used to take the form of a whole integer.
[131034700060] |However, I'm now getting output from this of the form 1287630625.956667227 rather than 1287630625.
[131034700070] |Since Bash can't handle floating point arithmetic, this is a little annoying.
[131034700080] |Is this a deliberate change?
[131034700090] |Or a bug?
[131034700100] |Or has my memory gone fuzzy?
[131034700110] |I tried searching for information on this, but couldn't find anything.
[131034710010] |stat: print timestamps to full resolution was committed to coreutils-8.6.
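If you're stuck on a coreutils version that prints the fractional part, one workaround is to strip it in the shell, e.g.:

    mtime=$(stat -c %Y "$filename")
    mtime=${mtime%.*}    # drop everything after the decimal point for integer arithmetic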
[131034720010] |Wildcard in /etc/hosts file
[131034720020] |It seems that wildcards are not supported in the /etc/hosts file.
[131034720030] |What is the best solution for me to resolve all *.local domains to localhost?
[131034730010] |You'd really need to run your own DNS server and use wildcards.
[131034730020] |Exactly how you'd do that would depend on the DNS package you ran.
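For example, dnsmasq is one lightweight option that supports exactly this kind of wildcard; a single line in its configuration resolves every *.local name to localhost:

    # /etc/dnsmasq.conf
    address=/local/127.0.0.1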
[131034740010] |How to restore GnuPG key after reinstall?
[131034740020] |I know that GnuPG is all about security and thus doesn't give you many ways to retrieve private keys (otherwise anyone could do it), but I've got my private key and my own rev.asc file.
[131034740030] |I had to reinstall my Ubuntu box (former Ubuntu Studio) and I have backup of /home and /etc.
[131034740040] |Is it possible to recover my GnuPG key instead of revoke it and create another one?
[131034750010] |By default, GPG stores everything under the .gnupg directory in your home directory.
[131034750020] |(Your encrypted private key should be in ~/.gnupg/secring.gpg).
[131034750030] |Restoring the entire ~/.gnupg directory from your backup will do the trick.
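As a sketch (the backup path is a placeholder):

    cp -a /path/to/backup/home/you/.gnupg ~/
    chmod -R go-rwx ~/.gnupg
    gpg --list-secret-keys    # verify the key is back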
[131034760010] |How do you recall the last (n-th?) passed argument of the previous command you used with bash?
[131034760020] |Often times I issue different commands on the same file.
[131034760030] |For example:
[131034760040] |Is there a way to reuse arguments from the previous command in the current so that I don't have to rewrite it?
[131034770010] |One relatively slow way is to recall the previous command with ↑ and edit it into the new one.
[131034780010] |In bash you can use the shortcut Alt + . (Alt plus period). Hitting it once will give you the last argument.
[131034780020] |Hitting it more times will cycle through the last arguments of earlier commands.
[131034790010] |In bash, you can also use $_ for the last command line argument of the last command you typed:
[131034790020] |becomes:
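Both the original example and its $_ version were lost here; the idea is roughly this (paths are made up):

    mkdir -p /tmp/some/deep/dir
    cd /tmp/some/deep/dir
    # becomes:
    mkdir -p /tmp/some/deep/dir
    cd $_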
[131034800010] |In bash, the designator for the "last word on previous command line" is !!$:
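The example was lost here; something like this (file name is made up):

    unzip photos.zip
    rm !!$        # expands to: rm photos.zip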
[131034800020] |You can also use the "caret syntax" to replace the initial part of the command line; this comes handy if you want to execute several commands on the same file:
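The example was lost here; the caret syntax works like this (commands are only illustrative):

    less /etc/fstab
    ^less^vim        # reruns the previous line as: vim /etc/fstab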
[131034800030] |There are many more possibilities; see "History substitution" in the bash(1) man page for details.
[131034810010] |alt-. is certainly nice, but if you happen to already know which numbered argument you want, you can be faster: !:n is the nth argument of the previous command.
[131034810020] |It's often helpful to combine this with magic space.
[131034810030] |To enable that, put Space: magic-space in your .inputrc.
[131034810040] |With that enabled, when you type a space after !:2, it will be immediately expanded to its value instead of waiting for you to hit enter.
[131034810050] |Saves you from accidentally grabbing the wrong argument.
[131034820010] |List of *nix terminal commands
[131034820020] |As a Unix beginner, I often find myself wanting to know the name of the command that achieves a particular function I'm after.
[131034820030] |I want this post to act as a comprehensive command reference where anyone can browse and find what they're after.
[131034830010] |A good starting point, if you don't know the exact command name, is apropos.
[131034830020] |You'll find a short description here or with man apropos.
[131034840010] |You might want to print out or bookmark a cheat sheet.
[131034840020] |I like this one which is the first result on the Google search for "unix cheat sheet" for a reason.
[131034850010] |is the unix way of answering this question.
[131034860010] |If you want to list all possible commands, try hitting Tab twice at an empty prompt.
[131034870010] |How do I remove every file that has x in its title?
[131034870020] |I have a lot of directories, each containing hundreds of files.
[131034870030] |In every directory there are pairs of my_file-01.jpg and my_file-€01.jpg
[131034870040] |I want to remove every file that contains the € sign in its name: how do I do that?
[131034880010] |find does the job:
[131034880020] |find . -iname "*€*" -delete gets rid of all files whose names contain "€". Be careful: find goes into subdirectories as well; if you don't want that, you have to tune the find parameters a little.
[131034890010] |Something along the lines of
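The command was lost here; presumably something like:

    find . -name '*€*' -exec rm {} +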
[131034890020] |You might want to try this first, to see if the correct files would be deleted:
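For example:

    find . -name '*€*' -print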
[131034900010] |In each example, the first command lists files whose names contain a €, and the second command deletes them.
[131034900020] |Using GNU find (as found on Linux):
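The commands were lost here; presumably (GNU find supports -delete):

    find . -name '*€*'            # list
    find . -name '*€*' -delete    # delete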
[131034900030] |Add -type f after -name '*€*' if you only want to match regular files and not directories as well.
[131034900040] |Using find, relying only on POSIX features:
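Again the commands were lost; a POSIX-only version would look like:

    find . -name '*€*'                   # list
    find . -name '*€*' -exec rm {} \;    # delete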
[131034900050] |Add -type f after -name '*€*' if you only want to match regular files and not directories as well.
[131034900060] |Using bash 4 or zsh:
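The glob examples were lost; something like this (bash 4 needs globstar turned on, zsh has ** recursion enabled by default):

    shopt -s globstar    # bash 4 only
    ls -ld -- **/*€*     # list
    rm -- **/*€*         # delete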
[131034900070] |Under zsh only, **/*€*(.) restricts the matching to regular files, excluding directories.
[131034900080] |If you only want to list files that have € in their names and such that an identical file without the € exists, here are standard find and zsh solutions.
[131034910010] |vnc connection working with PuTTY but not with command line
[131034910020] |Hi,
[131034910030] |I am using PuTTY to connect to a remote network, then setting up x11vnc and using SSL/SSH VNC viewer as a client.
[131034910040] |in the host name for PuTTY I have: ssh.inf.uk
[131034910050] |and port: 22
[131034910060] |in the ssh tunnel options I have source port set to: 5910
[131034910070] |and destination: markinch.inf.uk
[131034910080] |Then putty brings up an xterm and I am prompted for my username and password.
[131034910090] |I get to the common gateway machine and do
[131034910100] |then I set up the x11vnc server
[131034910110] |I use the SSL/SSH VNC viewer with verify certs off and the host:port set to localhost:10, enter the password, and connect fine.
[131034910120] |Now I want to bypass using PuTTY and do the SSH connection via the command line.
[131034910130] |So I do
[131034910140] |which brings me into the gateway machine, then I need to log into a specific desktop
[131034910150] |Then I set up the x11vnc server,
[131034910160] |then I use ssl/ssh vnc viewer with verify certificates off, localhost:10, and with the password in, and get: PORT=5910
[131034910170] |What is PuTTY doing so differently?
[131034910180] |Best,
[131034920010] |In your putty config, the traffic is exiting the tunnel at ssh.inf.uk and being forwarded directly to markinch.inf.uk.
[131034920020] |So you're only building 1 tunnel.
[131034920030] |In your ssh statements, you're building 2 tunnels - one from localhost to ssh.inf.uk, and a second from ssh.inf.uk to markinch.inf.uk.
[131034920040] |I haven't yet worked out why the 2-tunnel solution isn't working for you.
[131034920050] |However, you might try adjusting your ssh command to match what putty's doing and see if that works.
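One way to replicate the PuTTY setup in a single command is a direct local forward through the gateway, roughly like this (the destination port is a guess; use whatever port x11vnc reports, e.g. 5900):

    ssh -L 5910:markinch.inf.uk:5900 yourusername@ssh.inf.uk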
[131034930010] |How do I set the group (gid) of a process I'm about to launch?
[131034930020] |I'm porting a Debian init.d script to CentOS.
[131034930030] |In the Debian script, it uses start-stop-daemon for launching the process.
[131034930040] |The script uses start-stop-daemon's --group flag to change to a different group-id when starting the daemon process.
[131034930050] |How do I set the group-id of the daemon process in the init script on CentOS?
[131034940010] |There is setuidgid: "setuidgid runs another program under a specified account's uid and gid."
[131034940020] |It is part of daemontools; however, it is probably not available in the CentOS repositories due to DJB's strange licenses.
[131034940030] |So you might have to find an RPM (e.g. here) or build from source.
[131034950010] |If CentOS doesn't provide any better way (which would surprise me a little), you can fall back on su's lesser known counterpart, sg:
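The example was lost here; sg usage is roughly this (group name and command are placeholders):

    sg mygroup -c '/usr/local/bin/mydaemon --some-flag'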
[131034960010] |CentOS init scripts use /etc/init.d/functions, which declares a "daemon" function that most other init scripts use.
[131034960020] |But daemon doesn't accept any group flags.
[131034960030] |It ends up calling:
[131034960040] |A quick /sbin/runuser --help shows that runuser accepts a flag to specify the group, so try:
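Something along these lines (user, group, and command are placeholders; check runuser --help for the exact flags on your version):

    /sbin/runuser -s /bin/bash -g mygroup -c '/usr/local/bin/mydaemon' myuser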
[131034970010] |Modifying PDF files
[131034970020] |I need to do some basic editing on existing PDF file.
[131034970030] |I.e.
[131034970040] |I need to:
[131034970050] |Add chapters/bookmarks
[131034970060] |Change some page numbering
[131034970070] |However I cannot find any tool (GUI or command line) which would offer this functionality.
[131034970080] |Is there any alternative to just writing such a tool myself?
[131034970090] |PS.
[131034970100] |I'm looking only for free tools.
[131034970110] |Preferably open source.
[131034980010] |I think Adobe works fine for all this.
[131034990010] |I know two programs for manipulating PDFs under Linux:
[131034990020] |PDFedit: "PDF Editor is primarily created for simple editing and manipulation of objects in documents in PDF format and storing them as a new version of the document.
[131034990030] |Editing and manipulation of objects is possible through both a graphical and a command-line interface.
[131034990040] |For simple use, the command line uses a scripting language, which is useful in the graphical interface too."
[131034990050] |and pdftk "If PDF is electronic paper, then pdftk is an electronic staple-remover, hole-punch, binder, secret-decoder-ring, and X-Ray-glasses.
[131034990060] |Pdftk is a simple tool for doing everyday things with PDF documents."
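Regarding the bookmarks part of the question: a sufficiently recent pdftk can add them by round-tripping the document metadata, roughly like this (file names are placeholders):

    pdftk input.pdf dump_data > meta.txt
    # edit meta.txt: add BookmarkBegin / BookmarkTitle / BookmarkLevel / BookmarkPageNumber entries
    pdftk input.pdf update_info meta.txt output with-bookmarks.pdf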
[131035000010] |I use pdftk mainly.
[131035000020] |But here are some others to consider:
[131035000030] |pdfsam (PDF Split and Merge): "pdfsam is an open source tool (GPL license) designed to handle pdf files"
[131035000040] |PDFJam "A small collection of shell scripts which provide a simple interface to much of the functionality of the excellent pdfpages PDF file package (by Andreas Matthias) for pdfLaTeX." (You can also use pdfLaTeX directly.)
[131035000050] |jPDFTweak: "jPDF Tweak is a Java Swing application that can combine, split, rotate, reorder, watermark, encrypt, sign, and otherwise tweak PDF files."
[131035000060] |Inkscape: is a vector graphics editor that can both import PDF pages into its native SVG format, and also export as PDF.
[131035000070] |Calibre: Open source ebook management software that can convert PDFs to other formats, and manipulate them in other ways.
[131035000080] |Comes with command line tools such as pdfmanipulate which can be useful.
[131035000090] |Ghostscript of course can do a lot of things with PDF files too.
[131035010010] |The pdfimport extension for OpenOffice is a good alternative for editing PDF documents and re-exporting to PDF or saving to another format.
[131035010020] |The imported PDF can be edited with OpenOffice Draw.
[131035020010] |Rosetta Stone for Linux Distributions?
[131035020020] |Is there something like a Rosetta Stone for the different Linux distributions?
[131035020030] |Perhaps a site where you can look up commands, configuration files, or problem solutions for a specific task, organized as translations of the ones from another distribution (one you know well).
[131035020040] |For example you know Debian based distributions well and you want to know the Fedora equivalent to
[131035020050] |or
[131035020060] |or
[131035020070] |etc.
[131035020080] |There is a Rosetta Stone for different Unices, but it is not that detailed and does not really differentiate between different distributions.
[131035030010] |It's called POSIX.
[131035030020] |Well, it's POSIX and reading man(1) pages very, very carefully.
[131035040010] |I think the service is called http://unix.stackexchange.com.
[131035050010] |http://distrowatch.com/ will, at least, tell you what software is available in what distro...
[131035060010] |How do you lay out extra storage?
[131035060020] |When I was dual booting, all my extra storage was simply in /win/d, /win/e, /win/f, ... and formatted NTFS.
[131035060030] |Now the desktop only runs Windows in a VM, and I access all my partitions from Samba (except for the desktop).
[131035060040] |I'm totally confused how to organise things.
[131035060050] |So how do you lay out your extra HD space that you want multiple users to access (PS: I use different accounts for different things... i.e. personas)?
[131035060060] |It can't all be in /home if different people want to have access to it.
[131035060070] |Also how do you organize your different data: movies, books, music, scripts written on the computer, software projects, software created outside the package system ( I like to keep such separate ) etc.
[131035070010] |Really finding something that works for you is the best option.
[131035070020] |I always create a new mount point, either /data or /storage depending on my mood. Any non-transient data I think I might need, but which is just cluttering up /home, gets moved there, as well as shared data.
[131035070030] |As far as how I organize data:
[131035070040] |/storage/movies/ /storage/music/artist/album
/storage/projects//project
/storage//
[131035080010] |So you want to have different people have access to the same mounted devices?
[131035080020] |The standard places are /mnt, /opt, /mnt/media.
[131035080030] |Or you could set a loopback mount to /home//.
[131035080040] |There are lots of options; you can do whatever you want.
[131035080050] |Personally I think having different logins is stupid unless you have a security issue or something.
[131035090010] |I like to split my data into two central folders: one (I normally call it /heap) with recoverable data which I don't have to back up (everything which is just a replication from a central server) and one (I use /data) for the rest.
[131035090020] |This makes automated backup much easier than having to maintain a list of directories which are under backup.
[131035090030] |That also means I split data from settings and keep just settings in /home as also recommended by Zypher.
[131035100010] |All of this is being served by Samba?
[131035100020] |I'd say, for example, /srv/smb/music is appropriate then.
[131035100030] |Per the FHS
[131035100040] |/srv contains site-specific data which is served by this system.
[131035100050] |Rationale
[131035100060] |This main purpose of specifying this is so that users may find the location of the data files for particular service, and so that services which require a single tree for readonly data, writable data and scripts (such as cgi scripts) can be reasonably placed.
[131035100070] |Data that is only of interest to a specific user should go in that users' home directory.
[131035100080] |The methodology used to name subdirectories of /srv is unspecified as there is currently no consensus on how this should be done.
[131035100090] |One method for structuring data under /srv is by protocol, eg. ftp, rsync, www, and cvs.
[131035100100] |On large systems it can be useful to structure /srv by administrative context, such as /srv/physics/www, /srv/compsci/cvs, etc.
[131035100110] |This setup will differ from host to host.
[131035100120] |Therefore, no program should rely on a specific subdirectory structure of /srv existing or data necessarily being stored in /srv.
[131035100130] |However /srv should always exist on FHS compliant systems and should be used as the default location for such data.
[131035100140] |Distributions must take care not to remove locally placed files in these directories without administrator permission.
[131035110010] |Mount an iso without root access?
[131035110020] |Is it possible for a user without root access to mount an arbitrary iso?
[131035110030] |If so how?
[131035120010] |The easiest way is probably with sudo.
[131035120020] |Let's assume that you want everybody in the cdrom group to be able to mount and unmount ISO images.
[131035120030] |Make the following addition to the sudoers file using visudo:
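The actual rule was lost here; a rough sketch matching the description below (sudoers wildcard and escaping rules need care, so treat this as a starting point only):

    %cdrom ALL=(root) NOPASSWD: /bin/mount -t iso9660 -o loop *.iso /media/*, /bin/umount /media/*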
[131035120040] |This should allow anybody in the cdrom group to mount a file ending in .iso as type iso9660 on a directory inside the /media folder and also unmount anything in the /media folder.
[131035130010] |You can do this without root access using the fuse module fuseiso.
[131035130020] |After fuse and fuseiso have been installed, you can, as a normal user, run fuseiso cdimage.iso ~/somedirectory to mount it.
[131035140010] |Compiling old solaris programs under Linux
[131035140020] |I've got sources for a program developed under Solaris in ANSI-C.
[131035140030] |I am told it was developed around 1996-1997.
[131035140040] |I'm currently struggling with compiling it under a current Ubuntu.
[131035140050] |I got pretty far and I think only the GUI is still missing.
[131035140060] |It seems they used a library called guide for this.
[131035140070] |Does anyone know, if this library still exists somewhere?
[131035140080] |The relevant parts of the Makefile:
[131035150010] |From your description it appears that these were written against the SunOS Xview Code Generation Suite.
[131035150020] |I have a dim memory of this package which is probably roughly analogous to GTK or Java Swing, but predates most of their concepts and mechanism.
[131035150030] |I expect the best that you can do is either:
[131035150040] |Write a library that simulates libguide, which is fraught with error, or
[131035150050] |Yank out the affected UI code and replace it with your own, perhaps using something like glade
[131035150060] |I don't envy you this task.
[131035160010] |As msw says, it appears that your application wants to use the OpenWindows and Xview libraries that were provided in older Sun systems.
[131035160020] |I believe they're not even around on newer Solaris installs anymore, but the free software projects OpenWindows Augmented Compatibility Environment and The XView Toolkit may provide compatible-enough implementations of these libraries on newer systems.
[131035170010] |Beginning to learn Unix and Linux
[131035170020] |I am a beginner; I want to learn Unix and Linux and become a professional.
[131035170030] |Where should I start?
[131035180010] |To get proficient with unix, you will need to work on it regularly.
[131035180020] |Practise makes perfect.
[131035180030] |Firstly, I would suggest that you pick a Linux distribution.
[131035180040] |Don't worry too much about picking the best one for you yet, when you are ready you will find the one.
[131035180050] |For a beginner, a distro like Ubuntu will be good enough.
[131035180060] |Problems will arise, be ready for them.
[131035180070] |Ask questions on the web (here on Unix SE or at other forums); the Linux community (more correctly, the "open source community") is a helpful community.
[131035180080] |The more you partake in that community, the faster you will learn.
[131035180090] |Now you will need to try and perform basic tasks on your shiny new OS: chatting, browsing, typing up documents, emailing, watching movies, etc.
[131035180100] |Use Linux for everything.
[131035180110] |Be aware that Linux does have a learning curve, and that you will need to dedicate time to it if you want to become professional.
[131035190010] |You should try starting with friendly Linux distributions, like Ubuntu or PCLinuxOS.
[131035200010] |My suggestion would be to NOT start out with a "beginner" distro like Ubuntu.
[131035200020] |How many *nix servers have a GUI running on them?
[131035200030] |What I did was start out with Slackware (http://www.slackware.com) and learned how to install, configure, and use a *nix system.
[131035200040] |Slackware is a hands-on system that requires you know what you're doing to make things happen.
[131035200050] |Lastly, if you haven't before, I'd suggest building your own PC and using Slackware as the OS for your home-brew system (also what I did).
[131035200060] |You will learn A LOT by going this route.
[131035200070] |If you want to take your learning to another level I'd highly suggest rolling your own system via "Linux From Scratch" (http://www.linuxfromscratch.org/).
[131035200080] |Good luck with your educational endeavors!
[131035200090] |~ tim
[131035210010] |The Linux Documentation Project (TLDP) has some very useful guides.
[131035210020] |www.tldp.org
[131035220010] |I agree with Stefan and disagree (partially, see further) with tim: start out with a good desktop distro, and use it for your basic daily tasks.
[131035220020] |That will allow you to experiment and learn without having to reboot all the time (IME if you have to reboot, you just don't do it very often).
[131035220030] |If you want to become a professional, you will have to get familiar with the underlying system though.
[131035220040] |Just like you need to know about the registry and permissions and how DLLs are loaded, etc. on Windows...
[131035220050] |And once you're somewhat starting to get familiar with the GUI and a bit of the command line, and you want to learn about linux/unix servers, you can run them in a virtual machine (kvm/qemu, virtualbox, vmware, ...) and ssh to them.
[131035220060] |Then when you start to understand the commandline well, something like CRUX, Slackware or LFS is a good tool to get more in-depth knowledge about how all the parts of the OS (can) fit together.
[131035230010] |I think that rather than choose one particular distribution you should try out lots of them in a relatively short time; say change every couple of months or so.
[131035230020] |This has two main benefits; you get to see different ways of doing things (eg compare Ubuntu with other distros, is using sudo rather than su really much of a benefit?) and the chances are you will get experience in sorting out rather more problems (and so learn more) than just installing one Linuxy operating system.
[131035230030] |I'm not sure how much this will actually help: I've used several Linux distributions and am fortunate not to have had any real problems.
[131035230040] |Therefore I would suggest that you answer questions on this site (and others, such as superuser.com.
[131035230050] |I believe that there are also other sites on the internet which are not operated by the Stack Exchange team where one can answer problems posted by users).
[131035230060] |I realise that you know very little at the moment, but by doing some research and answering questions you will learn quite quickly.
[131035230070] |Perhaps keep a few virtual machines handy in which to try stuff.
[131035240010] |Put a Linux distribution like Arch Linux on your computer... you'll be forced to learn as you go in order to make the system useful... Arch simply isn't useful to the complete novice.
[131035240020] |There's no point putting off the painful lessons! The best way to learn how to fix something is to have it break... and chances are, if you use a system like Ubuntu, you will never even know that there are problems to solve.
[131035240030] |I also recommend an alternative system like FreeBSD; you'll get a different perspective and have access to some neat features not available for Linux (dtrace, zfs).
[131035240040] |Bottom line: if you want to learn about a system, install an OS that forces you to learn. If you want to use the system, install Ubuntu.
[131035250010] |Go install Debian and try to do everything you did with whatever system you are more familiar with, and from there, go through a guide called Debian Reference.
[131035250020] |It's a basic intro to Debian and Unix concepts.
[131035250030] |Why Debian?
[131035250040] |It's what Ubuntu and Linux Mint are based on, and those 2 are the most popular Unix-like systems out there.
[131035250050] |That means if you get familiar with Debian, you will get familiar with those two.
[131035260010] |difference between Fedora's, openSUSE's, and Mandriva's initscripts?
[131035260020] |Hi,
[131035260030] |I'm trying to figure out how the init system works for these distributions, but they seem to use different layouts...
[131035260040] |How do each of them work?
[131035270010] |Ubuntu Login for first time
[131035270020] |I installed Ubuntu 10.04, but when I tried to login I found out that my keyboard is inactive.
[131035270030] |I searched the Internet and learned how to activate the on-screen keyboard.
[131035270040] |But, when I activate the on-screen keyboard, it immediately disappears.
[131035270050] |I don't know what's happening but need to login into Ubuntu.
[131035270060] |Can anyone help me?
[131035270070] |Thanks in advance.
[131035280010] |Yes, I meant 10.04. I fixed it; I just needed to shut it down and boot it again.
[131035280020] |Then the on-screen keyboard would appear, but you would need to use the on-screen keyboard every time you log in to Ubuntu. You can fix this using the following instructions:
[131035280030] |First, log in to your Ubuntu 10.04.
[131035280040] |Open a terminal.
[131035280050] |Type "sudo dpkg-reconfigure console-setup".
[131035280060] |Follow the instructions you see on your terminal.
[131035280070] |Enjoy using Ubuntu.
[131035290010] |This is a shot in the dark, but you could try adding i8042.nopnp to your boot line in grub.
[131035290020] |I swore I updated my answer...
[131035290030] |When you boot up, it should display the grub menu.
[131035290040] |I forget the exact commands, but you hit e to edit the entry, then go down to where it says kernel= and hit e again to edit that line.
[131035290050] |Add i8042.nopnp to the end of that line.
[131035290060] |Then hit Enter, then either b or Ctrl-X, I forget which.
[131035300010] |Killing a running process in an Ubuntu machine remotely from a windows machine which is in LAN
[131035300020] |Is it possible to kill a process running in an Ubuntu machine from a Windows (XP) machine remotely connected via LAN?
[131035300030] |I can kill the process in a windows machine from a remote windows machine (in LAN) by the following command:
[131035300040] |Is there anything like that to kill a process running on the Linux machine?
[131035310010] |Do you have SSH or Telnet access to the Linux machine?
[131035310020] |(Typically, SSH is more reliable in trouble situations, but either can work.)
[131035310030] |Login, use top, ps, or pgrep to get the process id (pid) of whatever needs killing, and then kill away with kill PID or kill -9 PID on the command line.
[131035310040] |If you have a new enough system, you can even kill processes from within top by choosing one and pressing 'k'.
[131035310050] |Furthermore, ssh and rsh can be used to remotely execute commands without the whole interactive login session, if that's what you end up needing in the future:
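For example (hostname and process name are placeholders):

    ssh you@ubuntu-box "pkill -f myprogram"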
[131035320010] |Windows has these tools for remote management built-in to the shell.
[131035320020] |For remote management of a *nix host, you need to get a shell on the remote host.
[131035320030] |As suggested above, you need an ssh client of some sort.
[131035320040] |You can use a windowed application like putty (linked above), or there is a native port of openssh for win32 that doesn't require cygwin.
[131035320050] |You can find it here: http://www.nomachine.com/contributions.
[131035330010] |How do I fix unix so that I can use the arrow keys in a terminal?
[131035330020] |Previously I was able to use the Up/Down arrow keys to cycle through previous commands, but now when I press a directional key it outputs "^[[A".
[131035330030] |I'm running a bash shell.
[131035330040] |How do I fix this?
[131035340010] |Try typing
[131035350010] |What is the font used for GNU documentation?
[131035350020] |Hello,
[131035350030] |I am interested in learning how documentation for GNU related software is written, and am wondering what type of font is used in their PDF documentation? (for instance, the GCC manual here: http://gcc.gnu.org/onlinedocs/gcc-4.5.1/gcc.pdf)
[131035350040] |Thanks.
[131035360010] |The font is Donald Knuth's Computer Modern.
[131035360020] |The documentation was no doubt created with LaTeX (or maybe even plain TeX).
[131035360030] |(Actually, these are both confirmed by the PDF metadata.)
[131035360040] |(Edit: Poking around a bit more, it looks like, strictly speaking the documentation is created in a base format, which, thanks to GNU texinfo is exported to a variety of formats, but the PDF format goes through TeX.)
[131035360050] |If you want a high quality clone of Computer Modern in Open Type format, look at the Latin Modern collection.
[131035360060] |TrueType versions of Computer Modern are also available.
[131035360070] |Or you can just install LaTeX (see TeXlive) and get the real deal, with Type3 and Type1 fonts in a variety of encodings.
[131035360080] |TeX is one of the earliest examples of free software, and Stallman even mentions it in the GNU manifesto: "We will use TeX as our text formatter ..."
[131035370010] |SELinux denied access
[131035370020] |Hello-
[131035370030] |I keep receiving this message from SELinux in a bug report.
[131035370040] |I am new to Linux, running Fedora 13 and I am learning as I go.
[131035370050] |Any advice on what this might be would be appreciated.
[131035380010] |This probably happened after an update of the system, and as temporary file are usually not needed after a reboot, I'd try to delete the file.
[131035380020] |With this command you can see if the file is being used by any process; it will print one or more numbers (process IDs, PIDs).
[131035380030] |If it prints no numbers, the file is not in use and you can safely delete it.
[131035380040] |Do the above as a user that has the rights to do it (your user? root?); you can check this with an ls.
[131035380050] |If the file is used by any process, you can do a ps and filter by each PID you found with the fuser run.
[131035380060] |You must substitute $PID with the numbers found above (with fuser).
[131035380070] |At this point you should decide: if you can, identify the application that is using the file and close it; or kill the process (kill $PID); or delete the file anyway (it may be risky).
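Putting the steps together (the file path here is only an example; the real path comes from the SELinux message):

    fuser /tmp/some-file            # prints PIDs of processes using the file, if any
    ls -l /tmp/some-file            # check ownership and whether you may delete it
    ps -p PID -o pid,user,args      # identify the process behind a PID reported by fuser
    kill PID                        # ...or simply: rm /tmp/some-file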
[131035380080] |If you have troubles to decide let us know.
[131035390010] |Turn off SELinux and you won't get these messages anymore - do you really need this feature on?
[131035390020] |To turn it off, log in as root and run:
[131035390030] |echo 0 > /selinux/enforce. Then edit /etc/selinux/config (e.g. vi /etc/selinux/config) and change the SELINUX attribute to SELINUX=disabled.
[131035400010] |How to increase the TTY fontsize?
[131035400020] |I have a Fedora machine. I am used to working on the CLI since I had a CRT monitor; after switching to a TFT the experience is not the same as with the CRT, and the font is smaller now.
[131035400030] |How to change it?
[131035410010] |updated answer
[131035410020] |Since you're using Fedora, the variable that you need to play with is SYSFONT in the file /etc/sysconfig/i18n.
[131035410030] |Play with the font-sizes ( 8, 12, 16, 32, etc ).
[131035410040] |The available fonts are listed in /lib/kbd/consolefonts/.
[131035410050] |You should be able to test the fonts by using setfont from your TTY:
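For example (the font name is just one that commonly ships with Fedora; pick any file from the consolefonts directory):

    setfont latarcyrheb-sun32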
[131035410060] |Note: run setfont without any arguments to restore the default font; you might need to do this "blind" if one of the fonts messes up your display.
[131035410070] |Refer to Change console font in Fedora.
[131035410080] |old answer
[131035410090] |Changing the fontsize of your CLI depends on a lot of things.
[131035410100] |Firstly, as Michael mentioned in his comment, what CLI are we talking about?
[131035410110] |A CLI within Xorg or a TTY?
[131035410120] |If this is an emulator within Xorg, you will need to specify which emulator.
[131035410130] |I will assume that you meant the TTY font size.
[131035410140] |Before we started using KMS in our boot procedures, you could have changed the TTY font by adding vga=blah to grub's boot line and then playing around with the values of blah.
[131035410150] |See this link on that.
[131035410160] |If you are using KMS, things get more tricky.
[131035410170] |You will need to configure things within your distro, and each distro has its own way of configuring things.
[131035410180] |Take a look at these two forum posts:
[131035410190] |KMS & Manual tty modesetting - ArchLinux Forum
[131035410200] |how do I change the console font? - Ubuntu Forum
[131035420010] |Share aliases and PATH setting between zsh and bash
[131035420020] |The shell that I normally use is zsh.
[131035420030] |I have several aliases to enable color in some programs such as ls and grep.
[131035420040] |I've also set my custom path so that I can execute programs in non-standard places (such as in ~/bin/).
[131035420050] |I won't change root's shell to zsh, but I would like to share these settings so that root can have them as well.
[131035420060] |I found out that zsh is not sourcing /etc/profile.
[131035420070] |I can source it in /etc/zsh/zprofile, but I would like some other, more "proper" way.
[131035430010] |What about a simple symlink?
[131035430020] |ln -s /etc/profile /etc/zsh/zprofile
[131035430030] |You can also append something like this if you need some conditional initialization:
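The snippet was lost here; a typical conditional block looks like this (the options inside each branch are only placeholders):

    if [ -n "$ZSH_VERSION" ]; then
        # zsh-specific initialization goes here
        setopt appendhistory
    elif [ -n "$BASH_VERSION" ]; then
        # bash-specific initialization goes here
        shopt -s histappend
    fi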
[131035440010] |I'd create a file /etc/commonprofile and source it in both /etc/profile and /etc/zsh/zprofile.
[131035440020] |This gives you the opportunity to share common settings and still use bash- and zsh-specific settings and syntax in /etc/profile and zprofile respectively.
[131035450010] |Zsh has an sh compatibility mode which will let it execute POSIX sh code and some bash extensions.
[131035450020] |As long as you don't use bash features that zsh doesn't have (with the same syntax), you can have the same file sourced by both shells.
[131035450030] |Use the emulate built-in to put zsh in compatibility mode; with the -L option, the emulation is local to the enclosing function (not the enclosing sourced script).
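The definition of source_sh isn't shown above; based on that description, it was presumably a small wrapper along these lines:

    source_sh () {
      emulate -LR sh   # sh emulation, local to this function
      . "$@"
    }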
[131035450040] |For things like environment variables, you can use source_sh /etc/profile in /etc/zprofile and source_sh ~/.profile in ~/.zprofile, since the profile files aren't likely to use bash-specific features.
[131035450050] |For things like aliases and function definitions, since the shell rc files are likely to contain a lot of things that can't be shared (prompt, key bindings, completion settings, …), use a file like ~/.sh_aliases that is sourced in ~/.bashrc and source_sh'd in ~/.zshrc.