[131069390010] |I have a foo.sh file in my current directory.
[131069390030] |If I try to run ./foo.sh, I get:
[131069390040] |But if I run /bin/sh ./foo.sh then it runs fine.
[131069390050] |How can I fix this so I can just run ./foo.sh and it automatically runs it with /bin/sh?
[131069390060] |Edit: Okay, this is Chrome OS and this particular folder is mounted with noexec.
[131069390070] |Apparently that foils the ability to just run ./foo.sh; but why?
[131069390080] |Why can I still run sh foo.sh to achieve the exact same thing?
[131069390090] |What security, then, does noexec give?
[131069400010] |The noexec flag will appropriately apply to scripts, because that would be the "expected" behavior.
[131069400020] |However, setting noexec only stops people who don't know enough about what they're doing.
[131069400030] |When you run sh foo.sh you're actually running sh from its default location (probably /bin), which is not on a filesystem mounted with noexec.
[131069400040] |You can even get around noexec for regular binary files by invoking the dynamic loader (ld.so) directly.
[131069400050] |This will run bash, regardless of whether or not it's on a filesystem mounted with noexec.
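A quick sketch of the behaviors described above (the loader path is an assumption; it varies by system and architecture):

    ./foo.sh                          # fails on the noexec mount: Permission denied
    /bin/sh ./foo.sh                  # works: /bin/sh itself lives outside that mount
    /lib/ld-linux.so.2 ./some-binary  # a compiled binary run via the dynamic loader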
[131069410010] |You can also get this error (or a very, very similar message) if you try to execute a file with MS-DOS 2-byte (carriage-return linefeed) line endings.
[131069410020] |Vim is so smart these days that it doesn't necessarily show you the carriage returns as '^M'.
[131069410030] |So you can get fooled if you don't check what Vim thinks the "file format" is and just rely on the on-screen appearance.
[131069410040] |In this case the "#!/bin/sh^M" causes the kernel to try to find "/bin/sh^M", which it can't. Bad interpreter, indeed.
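Inside Vim, :set ff? reports the detected file format. Outside Vim, a hedged sketch for spotting and fixing the stray carriage returns:

    cat -v foo.sh | head -n 1    # CRLF endings show up as: #!/bin/sh^M
    sed -i 's/\r$//' foo.sh      # strip the carriage returns in place
    # (dos2unix foo.sh does the same, if installed)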
[131069420010] |I have looked at the output of yum search boost and rpm -qR, and indeed, as with other distributions, the boost library is split into several packages.
[131069420050] |For example there is boost-program-options which only contains the shared libraries of the boost program-options component.
[131069420060] |It seems that the package boost depends on all shared-library sub-packages.
[131069420070] |There is boost-devel, which seems to provide all headers and depends on all shared-library sub-packages (via boost).
[131069420080] |Am I right that it is not possible to just install the boost headers via yum (using the default repositories) without all boost shared library packages?
[131069430010] |Looking at the boost-devel package, it requires the boost package of the same version.
[131069430020] |Here is what the boost-devel package requires:
[131069440010] |Most of Boost is a header-only library.
[131069440020] |But there are a few components that need to be compiled.
[131069440030] |The answer from jsbillings lists them all.
[131069440040] |If you will not be using any of the components that require the compiled libraries, then you don't have to install them.
[131069440050] |Just install the headers only.
[131069440060] |See here for: Getting started on Unix platforms.
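If you really only want the headers, a manual sketch of that route (the version number and destination directory are assumptions):

    tar xzf boost_1_45_0.tar.gz
    cp -r boost_1_45_0/boost /usr/local/include/   # headers only, no libraries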
[131069450010] |You could always ask the maintainer of boost if they'll split the headers which don't need shared libraries out into another package.
[131069450020] |However, I really wonder why the 1¢ of disk space matters here, especially as I wouldn't be surprised to find that some of the headers don't guarantee that they'll never need a shared library (even though they are implemented that way now).
[131069460010] |Sometimes I have to stop a process with kill -9.
[131069470030] |But please note: kill -9 cannot be ignored or trapped.
[131069470040] |If a process sees signal 9, it has no choice but to die.
[131069470050] |It can't do anything else, not even gracefully clean up its files.
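A small demonstration of the difference (a sketch): a trap handler fires for SIGTERM, but never for SIGKILL:

    #!/bin/sh
    trap 'echo "caught SIGTERM, cleaning up"; exit 0' TERM
    sleep 1000 &
    wait
    # from another shell:
    #   kill -TERM <pid>   -> the handler above runs
    #   kill -9 <pid>      -> the process dies immediately, no cleanup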
[131069480010] |Read this, this and this.
[131069480020] |It will help you understand how the kill command works.
[131069480030] |You can search for the java pid with:
[131069480040] |pgrep -l java or pidof java
[131069480050] |Maybe you have zombie processes on the system; in that case, read here.
[131069490010] |I occasionally have to kill -9.
[131069490020] |However, if this is happening regularly, you should fix the issue that is causing it.
[131069490030] |Kill -9 means something is way off.
[131069490040] |In general, I only see this happen when you get yourself into serious memory thrash mode, which means you either need more system memory, or you're giving java too much memory when you start.
[131069490050] |More commonly, though, especially if you're developing stuff, you can see this when you run out of "PermGen" memory.
[131069490060] |http://www.brokenbuild.com/blog/2006/08/04/java-jvm-gc-permgen-and-memory-options/
[131069490070] |In any case, it may be due to OutOfMemory errors of some sort.
[131069500010] |I want a keyboard shortcut that raises my terminal window (xterm or whatever) with these properties:
[131069500030] |I know I can use wmctrl to list open windows, then:
[131069510040] |If wmctrl finds a window with a title of "bash", raise it; otherwise start xterm. Use xbindkeys to call that script when you press your shortcut. zsh is easier here, because it has the preexec hook.
[131069510110] |You can see my shell configs for more details, e.g. the getcommand function which handles commands like fg in a nicer way.
[131069510120] |raising the xterm that has a bash prompt, otherwise starting a new one
[131069510130] |Write a script that uses wmctrl -l to list windows, looking for one with bash in the title.
[131069510140] |If one is found, then run wmctrl -i -a <window id> to raise it, else just call xterm.
[131069510150] |Here is a script that does it:
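The script itself was stripped here; a minimal sketch of what it presumably looked like (the title match and the terminal command are assumptions):

    #!/bin/sh
    # look for a window whose title contains "bash"
    id=$(wmctrl -l | awk '/bash/ {print $1; exit}')
    if [ -n "$id" ]; then
        wmctrl -i -a "$id"    # raise the existing window
    else
        xterm &               # otherwise start a new terminal
    fi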
[131069510160] |Or download it from my scripts repository.
[131069510170] |running the script when you press Win+R
[131069510180] |Assuming your script is called /usr/local/bin/bashprompt, make a file ~/.xbindkeysrc containing:
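The file contents were stripped; presumably something along these lines (Win is usually Mod4, but the exact keysym may differ on your setup):

    "/usr/local/bin/bashprompt"
      Mod4 + r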
[131069510190] |then run xbindkeys.
[131069510200] |Add it to your .Xclients file or similar to make it start up automatically.
[131069520010] |fluxbox can match windows based upon certain patterns on its own.
[131069520020] |With at least fluxbox-1.1.1 this is possible:
[131069520030] |(This translates to: on pressing Windows key + R, check if there is an xterm with its title ending in "bash"; if there is, go to that window; if not, open a new one.)
[131069520040] |With a bleeding edge version (git) you can even go to windows on a different workspace.
[131069520050] |The only thing you have to do is to modify the title (or any other property of the window carrying the bash) depending on what you do: if you are looking at the prompt you have to set the property, and if you launch a command you have to take that property away again. fluxbox is not able to look inside the applications; it only knows about the windows.
[131069530010] |I want to learn about the /proc and /sys virtual filesystems, but I don't know where to begin.
[131069550030] |Can anyone suggest any good sources to learn from?
[131069550040] |Also, since I think /sys gains regular additions, what's the best way to keep my knowledge current when a new kernel is released?
[131069560010] |You can look into the documentation which comes with the kernel source (possibly grepping for proc/sys ...).
[131069560020] |Located at Documentation/filesystems: proc.txt and sysfs.txt.
[131069570010] |The IBM DeveloperWorks library is a good place for articles like this.
[131069570020] |I didn't find anything directly applicable, but the 'Resources' section of a paper led me to this.
[131069570030] |It has some good info...
[131069570040] |http://www.comptechdoc.org/os/linux/howlinuxworks/linux_hlproc.html
[131069580010] |The documentation in the Linux source tree is a good place (usually found in /usr/src/linux/Documentation if the source is installed).
[131069580020] |Some distros make a separate package out of it.
[131069580030] |But, alas, much of it can only be understood by looking at the kernel source code.
[131069580040] |I have tried to encapsulate some of it in Python modules, so you might also be interested in that.
[131069580050] |The procps source code is also a good source of information.
[131069590010] |Read this blog post: Solving problems with proc
[131069590020] |There are a few tips on what you can do with the proc filesystem.
[131069590030] |Among other things, there is a tip on how to get back a deleted disk image and how to stay ahead of the OOM killer.
[131069590040] |Don't forget to read the comments, there are good tips, too.
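As an illustration of the deleted-file tip (the PID and fd number here are hypothetical):

    ls -l /proc/1234/fd | grep deleted     # find the open-but-deleted file
    cp /proc/1234/fd/5 /tmp/recovered.img  # copy its data back out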
[131069600010] |Many installation guides recommend separate /usr, /home, /var and /tmp partitions (on one physical disk).
[131069600030] |What is the practical reason for this?
[131069600040] |I understand that it can be advantageous to put /home on a separate partition, because user files can be encrypted separately, but why for anything else?
[131069610010] |A separate /usr can be useful if you have several machines sharing the same OS.
[131069610020] |They can share a single central /usr instead of duplicating it on every system. /usr can be mounted read-only.
[131069610030] |/var and /tmp can be filled up by user programs or daemons.
[131069610040] |Therefore it is safer to have these on separate partitions: that prevents /, the root partition, from becoming 100% full, a condition that would hit your system badly.
[131069610050] |To avoid having two distinct partitions for these, it is not uncommon to see /tmp being a symlink to /var/tmp.
[131069620010] |Because ordinary users can cause things to be written to /var and /tmp, and thus potentially cause problems for the whole system.
[131069620020] |This way user processes can fill up /var and /tmp, but not the root fs.
[131069620030] |A separate /usr is useful for /usr over NFS, or other remote fs.
[131069620040] |(I hope this is clear, I haven't had any coffee yet)
[131069630010] |The issue is that a full root fs makes the Linux system inoperable, to the extent that even an admin cannot fix it without a recovery CD or similar.
[131069630020] |When /tmp and /var and in particular /home are on a separate partition, the root fs can never fill up unless an admin does it.
[131069630030] |Take /usr into the mix, in which all the usual installs will be placed, and even installing new software cannot cause this problem.
[131069640010] |In general, the arguments for having separate partitions are:
[131069640020] |I keep a separate /usr/local/ so that any software I've built and installed separately from my distro's package manager could possibly be re-used if I change/upgrade my distro, or by another distro installed alongside it.
[131069670020] |It's obviously not guaranteed to work across all possible combinations but it does no harm.
[131069680010] |On packages.ubuntu.com I can easily check which g++ is available in current Ubuntu:
[131069680040] |http://packages.ubuntu.com/search?keywords=g%2B%2B&searchon=names&suite=maverick&section=all
[131069680050] |I can directly see that the default version is 4.4.4; also available is 4.5.1.
[131069680060] |In Natty it is 4.5.1:
[131069680070] |http://packages.ubuntu.com/search?keywords=g%2B%2B&searchon=names&suite=natty&section=all
[131069680080] |Via http://packages.ubuntu.com/natty/g++ you can conveniently browse through the dependencies and directly see which architectures are supported.
[131069680090] |You can also search the contents of the packages.
[131069680100] |For Fedora I've found
[131069680110] |https://admin.fedoraproject.org/pkgdb
[131069680120] |Searching for g++ returns nothing:
[131069680130] |https://admin.fedoraproject.org/pkgdb/acls/list/g++
[131069680140] |Ok, perhaps it is split differently:
[131069680150] |https://admin.fedoraproject.org/pkgdb/acls/list/?searchwords=gcc
[131069680160] |This yields results, and it seems that there is only one big gcc package which includes g++:
[131069680170] |https://admin.fedoraproject.org/pkgdb/acls/name/gcc
[131069680180] |But this is not true.
[131069680190] |Using yum search on a Fedora 14 system yields:
[131069680200] |(which includes g++)
[131069680210] |Without access to an actual Fedora system, do I really have to somehow expect this and browse down into the package git tree to get the same information?
[131069680220] |I mean like this:
[131069680230] |http://pkgs.fedoraproject.org/gitweb/?p=gcc.git;a=blob;f=gcc.spec;h=683faf0cb3d528bd53fe6a4024fda3e84cc986d0;hb=HEAD
[131069680240] |(and then search for '%package' ?)
[131069680250] |The https://admin.fedoraproject.org/pkgdb/acls/name/gcc shows me that a gcc package is available in Fedora 13 and 14 but it does not show:
[131069680260] |Search for file:/usr/bin/g++ and click Builds.
[131069690020] |Click on the blue (i) for more details.
[131069690030] |The GCC package has several sub-packages, as described in the gcc.spec file you showed above, which has added to the confusion.
[131069700010] |I use http://koji.fedoraproject.org
[131069700020] |I hadn't seen PkgDB before, so I can't say much about it.
[131069700030] |Koji works well but the only caveat is that it shows packages that have been built and they aren't necessarily in the repository yet.
[131069710010] |I want to copy part of a file, skipping over some bytes, using dd calls like this:
[131069740010] |This is the kind of situation where it pays to know a scripting language such as Python.
[131069740020] |Instead of wasting time fiddling around with the shell to do this simple task, you would just open the file in binary mode, skip the bytes, and copy.
[131069740030] |The scripting language library is sensible enough to notice that the last block is not full.
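A minimal sketch of what that looks like in Python (the file names and offset are hypothetical):

    # copy src to dst, skipping the first 1000 bytes
    with open('src', 'rb') as src, open('dst', 'wb') as dst:
        src.seek(1000)                  # skip the bytes
        while True:
            block = src.read(64 * 1024)
            if not block:               # empty read: end of file
                break
            dst.write(block)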
[131069750010] |How can I find details about my machine's RAM, perhaps via /proc?
[131069760010] |Here's what lshw -c memory (as root) gives me:
[131069760020] |What you are looking for is "System Memory".
[131069770010] |You could try running (as root) dmidecode -t memory.
[131069770020] |I believe that's what lshw uses (as described in the other Answer), but it provides information in another form, and lshw isn't available on every Linux distro.
[131069770030] |Also, in my case, dmidecode produces the Asset number, useful for plugging into Dell's support web site.
[131069780010] |On Debian and Ubuntu there are the vim-gnome and vim-gtk packages.
[131069820020] |For Mac there is MacVim, for Windows gvim.
[131069820030] |Both are linked from the vim download page.
[131069830010] |On most systems :set mouse=a will enable your mouse inside vim, even on the console.
[131069830020] |I personally prefer using the vim keybindings, but some variant on the mappings sketched below will probably do what you want for Ctrl-C and Ctrl-V.
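A sketch of such mappings (these assume a vim built with X11 clipboard support):

    " yank the visual selection to the system clipboard with Ctrl-C
    vnoremap <C-c> "+y
    " paste from the system clipboard with Ctrl-V
    noremap <C-v> "+p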
[131069840010] |Use yum or repoquery.
Install iptables and run:
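The command itself was stripped; presumably it was a NAT masquerade rule along these lines (the interface name is an assumption):

    sysctl -w net.ipv4.ip_forward=1                        # enable forwarding
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE   # share the uplink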
[131069950030] |Assuming eth0 is the device connected to your router / the internet.
[131069950040] |Also, as mentioned by Zeb, you might want to put the AP somewhere which will offer better coverage than behind a computer under a desk.
[131069950050] |Anyway, if you have any other wired devices, you're going to need a second NIC in the server.
[131069960010] |I don't have the access needed to edit /etc/hosts.
[131069980030] |I need to do this for testing vhosts with Apache, whose DNS hasn't yet been set up.
[131069980040] |I have access to Firefox and Chrome, so a plugin that could facilitate this, or any other options, would be helpful.
[131069980050] |Update: the alternative to overriding DNS is probably modifying the HTTP headers; if the correct ones are sent to Apache, the correct content should be returned.
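For instance (hostname and IP here are hypothetical), you can send the right Host header while hitting the server's IP directly:

    curl -H 'Host: myvhost.example.com' http://10.0.0.5/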
[131069990010] |Check out following question at superuser:
[131069990020] |http://superuser.com/questions/184643/override-dns-in-firefox
[131069990030] |If the discussed options and the SO link are not viable solutions then check out:
[131069990040] |http://superuser.com/questions/100239/hostname-override-in-firefox
[131069990050] |Especially check out:
[131069990060] |https://addons.mozilla.org/en-US/firefox/addon/redirector/
[131069990070] |It sounds like this addon could help - but it depends on its actual implementation.
[131070000010] |Unluckily not, you cannot, unless you write your own web browser.
[131070000020] |If you have to do some tests you need a test machine, whether it is a virtual machine or a real one, so you have to ask your Unix admin (or hosting provider) how you can put a development environment in place.
[131070000030] |You can also install a VM on your PC, install a Linux distribution and Apache, and test your changes (it's not as hard as it sounds).
[131070000040] |Update
[131070000050] |To better explain: each application is written using the standard libraries; this way nobody has to rewrite the low-level libraries and functions like gethostbyname().
[131070000060] |These functions are normally set to use the hosts file (/etc/hosts) and DNS, so, unluckily, if you need your browser to resolve a name differently from what is set in /etc/hosts, you don't have too many alternatives.
[131070000070] |Sometimes the input to my gnome-terminal disappears, forcing me to work in a new tab/window.
[131070040030] |It seems like a random occurrence.
[131070040040] |Does anyone else experience this?
[131070040050] |What about other X terminal emulators?
[131070040060] |How can I fix this (or maybe it's just a bug)?
[131070040070] |[update] A simple work-around is to switch away from the terminal and switch back.
[131070050010] |Could it be that you inadvertently press CTRL+S, sending XOFF to your terminal and thus locking it?
[131070050020] |Next time it happens, try pressing CTRL+Q to unlock it.
[131070060010] |If pressing Ctrl-Q (as described in another Answer) doesn't work, it's possible that your TTY has been mangled by some other program you've run.
[131070060020] |Try running reset and then clear (or Ctrl-L) to initialize your terminal.
[131070070010] |I have written some scripts and would like to document them with man pages.
[131070070030] |What procedure would I have to follow, and is there a particular format that the documentation needs to be written in for me to be able to do this?
[131070080010] |In brief, see man groff_man for the file format (web version).
[131070080020] |Save it in /usr/local/man/man1, or /usr/share/man/man1 if that doesn't work.
[131070080030] |See the Man Page HOWTO for more details.
[131070090010] |I've found that using Perl's POD is much easier than writing man pages directly, and you can create a man page from the POD file with the pod2man utility (part of the base Perl package).
[131070090020] |Since some of your executables are already written in Perl, you can add POD formatting directly to your scripts and they can be turned into POD files directly.
[131070090030] |I've also seen several projects use POD format even though their code is written in other languages, due to POD's simplicity.
[131070090040] |To add an additional directory of man pages, you can set the $MANPATH environment variable.
[131070090050] |Prefix $MANPATH with a : to have it added to the list of already-configured man paths.
[131070090060] |Use the manpath command to see the currently defined man paths.
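A hedged usage sketch (the file names are hypothetical):

    pod2man myscript.pl > myscript.1    # generate the man page from the POD
    man ./myscript.1                    # view it without installing (works with man-db)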
[131070100010] |For a while now I've been using this quick and easy tutorial for creating custom man pages.
[131070100020] |The general process is like this:
[131070100030] |Write the page, use a sed script to format it for nroff, then render it with nroff.
Running sudo gconf-schemas --register-all doesn't help, and gives me the same warning messages.
[131070110080] |[note] I use Debian Squeeze.
[131070120010] |I think this has something to do with gconf2's schema files located at /usr/share/gconf/schemas.
[131070120020] |Try to register the schemas again:
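(Presumably the stripped command was the same one the question already tried:)

    sudo gconf-schemas --register-all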
[131070120030] |gconf-schemas(8) says:
[131070130010] |Use grep gettext /usr/share/gconf/schemas/*.
[131070130020] |Identify the .schemas file containing the issue and then take action on the related package (either reinstall it or update to a later version).
[131070130030] |Anjuta, for example, had an issue just like that.
[131070140010] |I have a bunch of files under /tmp/.
[131070160030] |I do
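(The command itself was stripped; judging from the behavior described below, it was presumably something like this, where the shell processes the redirection once, before xargs ever runs:)

    find /tmp -type f | xargs -I FILES md5sum FILES > FILES.md5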
[131070160040] |And I want to run md5sum on each file, outputting to a file with the same name but with an .md5 extension.
[131070160050] |This is supposed to create an .md5 file for each file found by the find command.
[131070160060] |Instead, it creates a single FILES.md5 file on disk with checksums from all files.
[131070160070] |How do I tell the md5sum command that FILES represents the current filename and not the literal string FILES?
[131070170010] |You need to use a subshell to handle the IO redirection:
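A sketch of that approach, keeping the question's FILES placeholder:

    find /tmp -type f | xargs -I FILES sh -c 'md5sum FILES > FILES.md5'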
[131070180010] |You need some way to say that you want to send the output of md5sum to a file.
[131070180020] |Since find (or xargs) doesn't have this functionality built-in, and md5sum only knows how to print to standard output, a shell redirection is the most straightforward way.
[131070180030] |Note that your command won't work in the general case for another reason: the output format of find is not the input format of xargs; they differ for file names containing whitespace or \"'.
[131070180040] |Use find -exec instead.
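A sketch of the find -exec variant, which also survives odd file names:

    find /tmp -type f -exec sh -c 'md5sum "$1" > "$1".md5' sh {} \;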
[131070190010] |With GNU Parallel you can do:
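(a sketch of the stripped command:)

    find /tmp -type f | parallel 'md5sum {} > {}.md5'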
[131070190020] |You get the added benefit of running md5sum in parallel and that files like:
[131070190030] |will not cause your command to crash.
[131070190040] |Watch the intro video to learn more: http://www.youtube.com/watch?v=OpaiGYxkSuQ
[131070200010] |I have seen chmod commands that use four octal digits instead of three -- what is the optional first digit for?
[131070200030] |For example, chmod 777 is equivalent to chmod a+rwx; what's the same command for chmod 2777?
[131070210010] |Please note that chmod 777 filename is the equivalent of chmod 0777 filename in this example.
[131070210020] |The first octal digit sets the setuid, setgid and sticky bits (see this article for more details on setuid/setgid). Octal 2 means to set the group ID bit on the file.
[131070210030] |So, the equivalent would be to do a chmod a+rwx filename, then chmod g+s filename.
[131070210040] |The chmod info page does explain this in more detail.
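A quick check of the equivalence (on a hypothetical file):

    touch demo
    chmod 2777 demo
    ls -l demo               # -rwxrwsrwx ... the 's' marks the setgid bit
    chmod a+rwx,g+s demo     # the symbolic equivalent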
[131070220010] |I am looking for a per-process bandwidth monitor that I could run like bwmon --pid 1 --log init.log.
[131070220050] |Is there such?
[131070220060] |Can it run without admin privileges?
[131070230010] |try nethogs:
[131070230020] |NetHogs is a small 'net top' tool.
[131070230030] |Instead of breaking the traffic down per protocol or per subnet, like most tools do, it groups bandwidth by process.
[131070230040] |NetHogs does not rely on a special kernel module to be loaded.
[131070230050] |If there's suddenly a lot of network traffic, you can fire up NetHogs and immediately see which PID is causing this.
[131070230060] |This makes it easy to identify programs that have gone wild and are suddenly taking up your bandwidth.
[131070240010] |something to get you started (just in case you want to write it yourself):
[131070240020] |comments:
[131070240030] |stat --printf="%N\n" /proc/PID/exe | cut -d ' ' -f 3
The counters you want are in the /proc/<pid>/io file.
[131070250020] |You want the rchar and wchar fields.
[131070250030] |You might want to subtract read_bytes and write_bytes, since they represent reads and writes to the storage layer.
[131070250040] |See section 3.3 of http://www.kernel.org/doc/Documentation/filesystems/proc.txt.
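For instance (the PID is hypothetical):

    # rchar/wchar count all I/O; read_bytes/write_bytes only what hit storage
    grep -E '^(rchar|wchar|read_bytes|write_bytes)' /proc/1234/io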
[131070250050] |If you need more resolution, you could maybe script this using lsof and strace, though it would be painful to get all the corner cases right.
[131070250060] |The basic idea is to parse the output of strace -p <pid>, grabbing the first parameter (= the file descriptor) and the return value (= number of bytes) from read(), write(), send(), and recv() calls (NOTE: there are several more syscalls to listen for; I haven't tracked them all down).
[131070250070] |Discard negative values; they indicate errors.
[131070250080] |Use lsof -p <pid> to figure out which file descriptors are TCP/UDP sockets, and add up the counts per fd.
[131070250090] |This strategy doesn't require root as long as you own the process you're inspecting, but it would be really hairy to write, let alone write well.
[131070260010] |My current approach first replaces each blank line with the token NEWLINE, and then it gets rid of all the line breaks with awk (I found that trick on some website), and then it replaces the NEWLINEs with the requisite two line breaks.
[131070260080] |This seems like a long winded way to do a pretty simple thing.
[131070260090] |Is there a simpler way?
[131070260100] |Also, if there were a way to replace multiple spaces (which sometimes creep in for some reason) with single spaces, that would be good too.
[131070260110] |I use emacs, so if there's some emacs specific trick that's good, but I'd rather see a pure sed or pure awk version.
[131070270010] |You can use awk like this:
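(The commands were stripped; they were presumably pattern-guarded one-liners in this vein, a sketch:)

    # print non-empty lines joined by spaces; a blank line ends the paragraph
    awk '/./ {printf "%s ", $0} /^$/ {print ""}' file.tex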
[131070270020] |Or if you need an extra newline at the end:
[131070270030] |Or if you want to separate the paragraphs by a newline:
[131070270040] |These awk commands make use of actions that are guarded by patterns:
[131070270050] |or
[131070270060] |A following action is only executed if the pattern matches the current line.
[131070270070] |And the characters ^, $ and . have special meanings in regular expressions: ^ matches the beginning of a line, $ the end, and . an arbitrary character.
[131070280010] |The :a is creating a label, not using the a command.
[131070280030] |To squeeze multiple spaces, use tr: $ tr -s ' '
[131070290010] |If I've understood correctly, an empty line implies two consecutive newlines, \n\n.
[131070290020] |If so, one possible solution would be to eliminate all singular occurrences of newlines.
[131070290030] |In Perl, a lookahead assertion is one way to achieve this:
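(A sketch of such a one-liner; a lookbehind is added here so the newline pairs themselves survive:)

    perl -0777 -pi -e 's/(?<!\n)\n(?!\n)/ /g' file.tex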
[131070290040] |The -0777 flag effectively slurps the whole file into a single string.
[131070290050] |-p tells perl to print the string it's working on by default.
[131070290060] |-i specifies in-place editing.
[131070290070] |Global matching ensures that all single newline occurrences are dealt with
[131070300010] |Use Awk or Perl's paragraph mode to process a file paragraph by paragraph, where paragraphs are separated by blank lines.
[131070300020] |Of course, since this doesn't parse the (La)TeX, it will horribly mutilate comments, verbatim environments and other special-syntax.
[131070300030] |You may want to look into DeTeX or other (La)TeX-to-text converters.
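For instance, a hedged paragraph-mode sketch in awk:

    # RS="" enables paragraph mode; each paragraph becomes a single line
    awk 'BEGIN {RS=""; ORS="\n\n"} {gsub(/\n/, " "); print}' file.tex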
[131070310010] |Installing CentOS 5.5-x86_64: kernel hangs with message NET: Registered Protocol family 2
[131070310020] |The title pretty much says it all.
[131070310030] |I am attempting to install CentOS 5.5 on Oracle VM VirtualBox.
[131070310040] |I press enter for normal installation, things start to happen and then it hangs with the above message.
[131070320010] |See this ticket.
[131070320020] |I had to append the kernel boot parameter nolapic, or enable IO APIC in the settings of the guest.
[131070330010] |Text editor with font-size selection for specific human languages (i.e. Unicode blocks), e.g. Devanagari.
[131070330020] |Pre-Linux, I used Windows... (too many years in the wilderness :) ...however, there was a ray of sunshine in amongst all the general virus/re-install flak, and that was Notepad++, a text editor I really like(d).
[131070330030] |I'd probably still be using it, even now that I've shifted fully across to Linux(Ubuntu), but it doesn't behave 100% in 'wine'... (and its regex is stunted)...
[131070330040] |There is one feature in Notepad++ which I sorely miss, and that is the ability to display different SIZE fonts within a single document (at the same time)...
[131070330050] |At some point, I started learning Hindi, and found that the Devanagari script really needs to be larger than the Latin script (used here)...
[131070330060] |Devanagari is by nature a "taller" script, with frills above, and below the main line, and has more detail.
[131070330070] |Because of this I utilized Notepad++'s Syntax Highlighting to display my learning notes in a manner my eyes could handle...
[131070330080] |Now my dilemma, is to find a Linux Text Editor which can (at least) do what Notepad++ can do (ie. allow me to specify my own mix of font SIZES, and also to specify my own comment-delimiters)...
[131070330090] |Now, the big ask...
[131070330100] |What I would really like is an editor which is "Human-Language" aware, or "Font-Type" aware, or "Unicode-Codeblock" aware... so I don't have to fiddle and twiddle with syntax-highlighting, which is really not intended for what I want..
[131070330110] |(PS... I don't want a word-processor.)
[131070330130] |In October, last year, I asked here about SciTe (Scintilla) specifically (Notepad++ is based on Scintilla), but as per one answer, it is too painful :) ...
[131070330140] |A comment suggested that Emacs could do it, so if that means "at the same time", then I'm interested, but I need some initial pointers on how to go about it...
[131070330150] |Here is an example of the Notepad++ presentation..
[131070340010] |Emacs has the ability to show fonts with different faces, colors, and sizes in the same buffer.
[131070340020] |For instance, the following is produced by the AUCTeX major-mode, a useful mode for those who use LaTeX to create documents:
[131070340030] |The two search terms that will be helpful are "font-locking" and "major mode".
[131070340040] |Essentially, to accomplish this in Emacs you would have to write your own major mode.
[131070340050] |Unfortunately, this basically amounts to you having to "fiddle and twiddle with syntax-highlighting", but on steroids.
[131070340060] |For your particular purpose, the most difficult part will be properly displaying the Devanagari script.
[131070340070] |Everything else is relatively straightforward.
[131070340080] |The best places to get started are the EmacsWiki and the Emacs Manual.
[131070340090] |The following links might be useful:
[131070340100] |Emacs Manual:Major-Modes
[131070340110] |EmacsWiki: Mode Tutorial
[131070340120] |EmacsWiki: Derived Mode
[131070340130] |EmacsWiki: Generic Mode
[131070340140] |EmacsWiki: Hindi Support
[131070340150] |Since you really only need your mode to provide font locking, I would take a look at making a "Derived Mode" (see the relevant link above).
[131070340160] |Creating such a mode basically involves defining regular expressions that will match the various parts of the code you want highlighted in a certain way, and then assigning that either to one of the predefined font-lock faces or a custom face you would define.
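As a very rough sketch of such a derived mode (the face height, mode name, and the Devanagari code-point range used here are all assumptions):

    ;; enlarge text in the Devanagari Unicode block (U+0900..U+097F)
    (defface my-devanagari-face
      '((t :height 1.5))
      "Larger face for Devanagari script.")

    (define-derived-mode hindi-notes-mode text-mode "HindiNotes"
      "Text mode that displays Devanagari in a larger face."
      (font-lock-add-keywords
       nil '(("[\u0900-\u097F]+" 0 'my-devanagari-face))))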
[131070350010] |FreeBSD 8.1 on MacBook 5,2
[131070350020] |Hi,
[131070350030] |I've been trying to dual-boot FreeBSD 8.1 with Mac OS X on my MacBook (5,2), but am having trouble trying to get the live cd to boot.
[131070350040] |I reach the FreeBSD Boot Loader Screen with the options for boot, boot without ACPI etc.
[131070350050] |However, I cannot select an option.
[131070350060] |I have tried with both the built in keyboard and a USB keyboard, but I do not think that that is the problem...
[131070350070] |Previously I have tried dual booting Ubuntu, but had problems with the live CD also.
[131070350080] |During the boot process, the boot seemed to freeze, and the CD stopped spinning.
[131070350090] |I think it might be a similar problem here.
[131070350100] |Soon after arriving at this screen, the CD stops spinning.
[131070350110] |Thanks for any advice!
[131070350120] |I've tried using both the amd64 disc1 iso image and the i386 disc1 image for FreeBSD 8.1
[131070350130] |Further Boot Info
[131070350140] |When booting, Mac's EFI allows me to choose to boot from the CD.
[131070350150] |The screen goes black, then the following appears:
[131070350160] |It then continues to the FreeBSD Boot Loader Screen, and freezes.
[131070350170] |Update
[131070350180] |Having tried both the amd64 and the i386 versions of FreeBSD 8.1, I've tried using the i386 version of FreeBSD 8.2.
[131070350190] |It acts exactly the same as previous attempts, except that instead of freezing at the FreeBSD Boot Loader Screen (linked above), it prints some information to the screen as follows (below the boot loader screen, as it doesn't clear it first):
[131070350200] |Then it freezes, I cannot even enter y/n.
[131070350210] |Again, thanks for any help.
[131070360010] |Problem seems to be related to ACPI.
[131070360020] |Since you cannot use the keyboard while booting, I suggest building a custom kernel without ACPI (or even a few with different configs), installing the system on a flash drive and trying to boot from it.
[131070360030] |If one of those kernels boots correctly, just create partitions, format the needed filesystems and dump|restore the system from the flash drive to the hard disk.
[131070370010] |Hi
[131070370020] |I have a 8.2 Installation running here on a MacBook 5,5 with ZFS root on a GPT partition (next to OSX, grml and Windows 7)
[131070370030] |I had a similar error with older releases of FreeBSD, but that has been resolved, at least for my hardware revision. jkim@ did an awesome job and was very active in the freebsd.org forums: http://forums.freebsd.org/showthread.php?t=12289
[131070370040] |This post is not necessarily very helpful, but could lead in the right direction.
[131070370050] |Feel free to ask me about any details.
[131070380010] |If you install your system on a flash drive, you don't need to build a new kernel.
[131070380020] |Just disable ACPI:
[131070380030] |http://www.freebsd.org/doc/handbook/install-trouble.html#Q2.11.3.1.
[131070380040] |BTW: Did you try option 2 of the CD (to start without ACPI)?
[131070380050] |Cheers
[131070390010] |Copying large files in Linux
[131070390020] |What is the effect of copying a file, say fileA.big (900 MB), from locationB to locationC?
[131070390030] |Suppose that during the cp operation, say 35% of the way through, fileA.big is appended with new information and grows from 900 MB to 930 MB.
[131070390040] |What is the result of the end copy (i.e. fileA.big at locationC)?
[131070390050] |What if the copy is about 70% through, and the original file is updated but this time truncated to 400 MB (i.e. the progress of the copy is beyond the truncation point) -- what is the result of the end copy?
[131070390060] |I am referring to a Linux OS with an ext3/ext4 filesystem.
[131070390070] |No volume shadow magic etc.
[131070390080] |Just plain old cp. Curiosity sparked by copying live CouchDB files for backup, but I am more interested in general scenarios rather than the specific use case.
[131070400010] |If fileA.big is grown during the copy, the copy will include the data that was appended.
[131070400020] |If the file is truncated shorter than where the copy currently is, the copy will abort right there, and the destination file will contain what was copied up to the time it aborted.
[131070410010] |Patrick has it more or less correct, but here's why.
[131070410020] |The way you copy a file under UNIX works like this:
[131070410030] |Try to read some (more) bytes from fileA.
[131070410040] |If we failed to get bytes because we're at (or past) the end of the file, we're done; quit.
[131070410050] |Otherwise, write the bytes to fileB and loop back to step 1.
[131070410060] |Knowing that, and knowing it's as simple as that, lets us see some corner cases.
[131070410070] |As soon as we find the end of the file, the copy is done.
[131070410080] |So let's say our file is growing during the copy, but is growing more slowly than we're copying it.
[131070410090] |The copy program will keep going past the original file size, because by the time it gets there, there is more to the file.
[131070410100] |But at some point, it catches up with the end of the file, and it knows it's at the end because it can't read any more bytes right now.
[131070410110] |So it quits right there, even if the file is about to grow further.
[131070410120] |If the file is truncated, the copy program says "Whoa, I'm past the end of the file!" and quits.
[131070410130] |And if pieces of the file are being updated at random by, say, a database program :-), then your copy is going to be some mix of old and new data, because the data is not all copied at the same time.
[131070410140] |The result will probably be a corrupt copy, which is why it's not generally a good idea to make copies of live databases.
[131070410150] |(That said, I'm not familiar with CouchDB, and it's possible to design a database to be resistant to this sort of corruption.
[131070410160] |But best to be absolutely sure.)
[131070420010] |Are any SCSI modules needed in initrd if only SATA and IDE disks are used?
[131070420020] |I am creating a cloning script to automate a minimalistic installation of CentOS 5.5 on about 100 workstations of various hardware and age (2-10 years).
[131070420030] |The workstations are all either IDE or SATA.
[131070420040] |I am currently developing the script and testing it on VMs (ESXi 4) with virtual IDE disks.
[131070420050] |In the initrd I have commented out scsi_mod.ko, sd_mod.ko and scsi_transport_spi.ko, and it seems to work just fine for booting a VM that uses an IDE disk.
[131070420060] |The problem is that I don't have easy access to the physical workstations and there are no virtual SATA disks for ESXi, so I cannot test with SATA disks.
[131070420070] |Are the above SCSI modules needed on a workstation that only has a SATA disk?
[131070420080] |Are any SCSI modules needed for SATA disks (with a stock CentOS 5.5 kernel)?
[131070420090] |Thanks, Lars
[131070430010] |The SATA driver uses the SCSI kernel modules.
[131070430020] |You'll need scsi_mod and sd_mod at least, I'm not sure about scsi_transport_spi, it's certainly not loaded on any of my SATA-only systems.
[131070440010] |Depending on the chipsets involved, you may need a chipset-specific driver to get at your SATA drives.
[131070440020] |Perhaps you might want to look into kickstart, which lets you define how you want the system to end up looking, then does a fresh install of CentOS to that specification -- including any required kernel modules.
[131070450010] |How is a message queue implemented in the Linux kernel?
[131070450020] |I would like to know how Message Queues are implemented in the Linux Kernel.
[131070460010] |The Linux kernel (2.6) implements two kinds of message queues (rather 'message lists', as the implementation is done using a linked list, not strictly following the FIFO principle):
[131070460020] |System V IPC messages
[131070460030] |The message queue from System V.
[131070460040] |A process can invoke msgsnd() to send a message.
[131070460050] |It needs to pass the IPC identifier of the receiving message queue, the size of the message and a message structure, including the message type and text.
[131070460060] |On the other side, a process invokes msgrcv() to receive a message, passing the IPC identifier of the message queue, where the message should get stored, the size and a value t.
[131070460070] |t specifies the message returned from the queue, a positive value means the first message with its type equal to t is returned, a negative value returns the last message equal to type t and zero returns the first message of the queue.
[131070460080] |Those functions are defined in include/linux/msg.h and implemented in ipc/msg.c
[131070460090] |There are limitations upon the size of a message (max), the total number of messages (mni) and the total size of all messages in the queue (mnb):
[131070460100] |The output above is from a Ubuntu 10.10 system, the defaults are defined in msg.h.
[131070460110] |More incredibly old System V message queue stuff explained here.
[131070460120] |POSIX Message Queue
[131070460130] |The POSIX standard defines a message queue mechanism based on System V IPC's message queue, extending it by some functionalities:
[131070460140] |Simple file-based interface to the application
[131070460150] |Support for message priorities
[131070460160] |Support for asynchronous notification
[131070460170] |Timeouts for blocking operations
[131070460180] |See ipc/mqueue.c
[131070460190] |Example
[131070460200] |util-linux provides some programs for analyzing and modifying message queues, and the POSIX specification gives some C examples:
[131070460210] |Create a message queue with ipcmk; generally you would do this by calling C functions like ftok() and msgget():
[131070460220] |Let's see what happened by using ipcs or with a cat /proc/sysvipc/msg:
[131070460230] |Now fill the queue with some messages:
[131070460240] |Again, you generally do not hardcode the msqid in the code.
[131070460250] |And the other side, which will be receiving the messages:
[131070460260] |See what happens:
[131070460270] |After two receives, the queue is empty again.
[131070460280] |Remove it afterwards by specifying the key (-Q) or msqid (-q):
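(A hedged recreation of the stripped walkthrough; keys and ids will differ on your system:)

    ipcmk -Q        # create a System V message queue
    ipcs -q         # list queues (or: cat /proc/sysvipc/msg)
    # sending and receiving would be done from C via msgsnd()/msgrcv()
    ipcrm -q 0      # remove by msqid (or: ipcrm -Q <key>)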
[131070470010] |What is the difference between a Library call and a System call in Linux?
[131070470020] |I would like to know what is the difference between a Library call and a System call in Linux.
[131070470030] |Any pointers for a good understanding of the concepts behind both will be greatly appreciated.
[131070480010] |There's not really such a thing as a "library call".
[131070480020] |You can call a function that's linked to a shared library.
[131070480030] |And that just means that the library path is looked up at runtime to determine the location of the function to call.
[131070480040] |System calls are low level kernel calls handled by the kernel.
[131070490010] |If you type man man in your shell, you will see the list of the manual sections:
[131070490020] |2 System calls (functions provided by the kernel)
[131070490030] |3 Library calls (functions within program libraries)
[131070490040] |For instance, you will find chmod in section 2 of the manual when typing man 2 chmod.
[131070490050] |And fprintf in section 3.
[131070500010] |System calls are functions that the operating system provides for applications to use.
[131070500020] |Whereas functions like the ones in math.h, string.h, etc. are those that don't have anything to do with interfacing with the operating system.
[131070510010] |System calls are handled directly by the kernel.
[131070510020] |Library calls are handled by a dynamic library.
[131070510030] |The program making the library call must first import that library, before the call will work.
[131070510040] |The library calls themselves may use system calls.
[131070510050] |Sometimes libraries are provided as "front-ends" to system calls, to provide extra functionality or ease of use not provided by the kernel.
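One way to watch the distinction in practice (a sketch): strace shows only the system calls a process makes, regardless of which library functions it called internally:

    strace -e trace=open,openat cat /etc/hostname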
[131070520010] |How does a debugger work in Linux?
[131070520020] |How does a debugger work in Linux?
[131070520030] |How does it gets 'attached' to an already running executable or process.
[131070520040] |I understand that compiler translates code to machine language, but then how does debugger 'know' what it is being attached to?
[131070530010] |There is a system call named ptrace.
[131070530020] |It takes 4 parameters: the operation, the PID of the target process, an address in the target process memory, and a data pointer.
[131070530030] |The way the last 2 parameters are used is dependent on the operation.
[131070530040] |For example you can attach/detach your debugger to a process:
[131070530050] |Single step execution:
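(The stripped snippets were presumably calls along these lines, with pid the target process; a sketch:)

    #include <sys/ptrace.h>
    #include <sys/types.h>

    void debug(pid_t pid) {
        ptrace(PTRACE_ATTACH, pid, NULL, NULL);     /* attach to the process */
        /* (a real debugger would waitpid() for stops between these calls) */
        ptrace(PTRACE_SINGLESTEP, pid, NULL, NULL); /* execute one instruction */
        ptrace(PTRACE_DETACH, pid, NULL, NULL);     /* let it run again */
    }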
[131070530060] |You can also read/write the memory of the target process with PTRACE_PEEKDATA and PTRACE_POKEDATA.
[131070530070] |If you want to see a real example check out gdb.
[131070540010] |Package management strategy with Pacman
[131070540020] |Since I switched my favorite distro to Gentoo I have been very familiar with the following pattern for installing and removing packages:
[131070540030] |Install a bunch of applications that do pretty much the same thing to try them out: emerge <packages>.
[131070540040] |This command installs the package, and marks the package as explicitly installed.
[131070540050] |Do try them out and remove those I don't like (and hopefully keep one or two that satisfy my needs): emerge --deselect.
[131070540060] |This command removes the package from the list of explicitly installed applications, but does not uninstall the package.
[131070540070] |Finally remove everything that is not required on my system: emerge --depclean.
[131070540080] |This command removes all packages that are (1) not a system package, (2) not installed explicitly and (3) not a dependency of those two.
[131070540090] |And optionally check that all package dependencies are OK: revdep-rebuild.
[131070540100] |This command checks all dependencies and reinstall broken packages.
[131070540110] |Once in a while I would look at the entries in /var/lib/portage/world (the list of explicitly installed packages) to review the top-level applications that I use, and remove those that I don't use anymore using the commands in steps 2, 3 and 4.
[131070540120] |Now that I'm trying to learn Arch, I wonder if I could use the same strategy with Pacman?
[131070540130] |Or another strategy that can keep my system clean of unused packages?
[131070540140] |Note: the Pacman Rosetta helps a lot in quickly understanding things, but I could not figure out Arch's equivalent of the /var/lib/portage/world file. pacman -Qe is said to do it, but it contains things that I swear I haven't explicitly installed...
[131070540150] |Anyway please answer this question in terms of strategy (with command examples, of course :)
[131070550010] |If I recall correctly, pacman -S <package> installs a package, and pacman -Rs <package> removes a package and all its dependencies---but only those that wouldn't break other packages and only those that you didn't explicitly install.
[131070550040] |Check out the pacman man page.
[131070550050] |I unfortunately don't know how to check for broken packages.
[131070560010] |The most likely reason you are seeing packages with "pacman -Qe" that you don't remember installing is that they were part of a "group" (like base-devel, etc) that you installed.
[131070560020] |Side note: I have personally also been looking for a while for a way to switch a package from "explicit" to "implicit" (and vice versa) without reinstalling it, or even to take a package I installed explicitly to get another package working and turn it into a dependency of that package (again without reinstalling).
[131070570010] |Thanks to DarwinSurvivor's answer I have been able to better understand how package management works in Arch.
[131070570020] |Now I can apply the same strategy that I use with Gentoo (with small modifications).
[131070570030] |The "equivalents" of the commands in the question are, respectively:
[131070570040] |pacman -S
[131070570050] |pacman -D --asdeps
[131070570060] |pacman -Rs $(pacman -Qqtd)
[131070570070] |Not available / not needed
[131070570080] |The closest thing to /var/lib/portage/world in Gentoo is the result of the command pacman -Qe.
[131070570090] |Differences:
[131070570100] |Arch has package groups, which is basically several packages "grouped" together under a name.
[131070570110] |When a group is installed everything in the group is considered explicitly installed.
[131070570120] |Arch doesn't have "system packages", so removing items from the result of pacman -Qe can actually result in important packages being removed.
[131070580010] |What does 'uni' mean in unistd.h
[131070580020] |What does uni mean in unistd.h
[131070580030] |Does it mean Unix? Or universal?
[131070580040] |What is it?
[131070590010] |The stuff in there is largely Unix idiom (chown, fork, gethostname, nice), so I'm guessing that it originally did mean Unix.
[131070590020] |It's part of the POSIX standard, though, so it's no longer just Unix.
[131070600010] |How do I delete a file named "°" in bash
[131070600020] |I've accidentally created a file named °.
[131070600030] |Now I'm having trouble deleting it with bash.
[131070600040] |Typing rm ° seems to only move the caret to the beginning of the line, i.e. no character is entered.
[131070600050] |(For what it's worth, I'm running bash 3.2.0 on a remote machine connected over SSH using the Mac OS X Terminal.)
[131070600060] |Any ideas?
[131070610010] |How about?
[131070610020] |I think this should work...
[131070620010] |If using a wildcard with rm, like rm -i ?, gives a lot of matches, you can always remove the file by inode number instead:
[131070620020] |ls -i
find . -inum <inode> -ok rm '{}' \;
[131070620030] |Where <inode> is the inode number from ls -i, which lists all the inode numbers of the files in the current directory.
[131070630010] |The rm -i ? answer is fine.
[131070630020] |This would also work:
[131070630030] |as would
[131070630040] |And as to why it's going back to the start of the line, how are you typing the ˚?
[131070630050] |Perhaps the input is being interpreted as Ctrl+A or some other shortcut that is used by the shell to go to the start of the line.
[131070630060] |Is there a setting to set the encoding or character set to utf-8 in the terminal app?
[131070630070] |What does it print if you run locale inside the terminal session?
[131070630080] |And how did you create the file?
[131070630090] |Maybe you can use a similar method to delete it?
[131070640010] |Keeping multiple root directories in a single partition
[131070640020] |I'm working out a partition scheme for a new install.
[131070640030] |I'd like to keep the root filesystem fairly small and static, so that I can use LVM snapshots to do backups without having to allocate a ton of space for the snapshot.
[131070640040] |However, I'd also like to keep the number of total partitions small.
[131070640050] |Even with LVM, there's inevitably some wasted space and it's still annoying and vaguely dangerous to allocate more.
[131070640060] |So there seem to be a couple of different options:
[131070640070] |Have the partition that will contain bulky, variable files, like /srv, /var, and /home, be the root partition, and arrange for the core system state — /etc, /usr, /lib, etc. — to live in a second partition.
[131070640080] |These files can (I think) be backed up using a different backup scheme, and I don't think LVM snapshots will be necessary for them.
[131070640090] |The opposite: putting the big variable directories on the second partition, and having the essential system directories live on the root FS.
[131070640100] |Either of these options requires that certain directories be pointers of some variety to subdirectories of a second partition.
[131070640110] |I'm aware of two different ways to do this: symlinks and bind-mounts.
[131070640120] |Is one better than the other for this purpose?
[131070640130] |Is there another option?
[131070640140] |Do some linux distros support installation using this style of partition layout?
[131070650010] |Well for starters, your root partition MUST contain '/', '/bin', '/sbin', '/lib', and '/etc'.
[131070650020] |You cannot put these on a separate partition, as they are all needed during the boot process before other filesystems are mounted. (Though you can do some messy initrd stuff to get around this, it'll be a pain when you want to perform some simple task like modifying your fstab.)
[131070650030] |After that, if you want to put the other directories on other partitions you can.
[131070650040] |A bind mount is the cleaner method of doing this: if you were symlinking and some task wanted to look at the free space on /usr, it would query it only to get the free space on the root partition instead.
[131070650050] |While I don't know of anything off the top of my head that would do this, the solution is less prone to problems than symlinking.
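A quick sketch of the bind-mount approach (the paths are hypothetical):

    mount --bind /bigdisk/var /var
    # or permanently, via /etc/fstab:
    # /bigdisk/var  /var  none  bind  0 0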
[131070660010] |What makes CentOS "enterprisey" compared to "generic" distributions like Ubuntu?
[131070660020] |What makes CentOS "enterprisey" compared to "generic" distributions like Ubuntu?
[131070660030] |When I say "enterprisey" I actually mean "better for server deployments".
[131070660040] |Just a general question, because I was thinking of hosting a web application on my computer (which runs Ubuntu) and came upon a page that said that CentOS had 30% market share for servers.
[131069660050] |Of course, that doesn't exactly indicate that it's better in any way, so I just wanted to ask.
[131070660060] |Edit
[131069660070] |There's another thing I really fail to understand ... most of these distributions use the same applications, have the same package manager, and all of them are powered by the same kernel.
[131070660080] |Where's the difference, then?
[131070660090] |RHEL's "happy text page" says:
[131070660100] |more secure applications
[131070660110] |protection against commonly exploited security flaws, e.g. buffer overflows integrated in the standard software stack
[131070660120] |highest-grade protection via the SELinux security feature.
[131070660130] |Protects system services from attacks, full transparency and is easy to extend and adopt.
[131070660140] |smartcard authentication support
[131070660150] |Questions
[131070660160] |How?
[131070660170] |Unless RHEL somehow has modified versions of the software stack that you'll be using (in my case, Python and SQLite3), there wouldn't be any difference.
[131070660180] |Doesn't every other distribution claim that?
[131069660190] |I've heard about problems concerning SELinux and would like to know more about it.
[131069670010] |CentOS is a free derivative of Red Hat Enterprise Linux, which is targeted at the "enterprise" market, so it is specifically designed for deployment on a variety of platforms such as servers, etc.
[131070670020] |To target that market, the distribution is probably going to focus more on older, stable versions of packages rather than including anything bleeding-edge.
[131070670030] |Security will also be a focus.
[131070670040] |Check out the RHEL Server Features and Benefits and Desktop Features pages for detailed information.
[131070680010] |One of the things that RHEL/CentOS (and other Enterprise Linux products) provides that other distros don't provide is API/ABI stability.
[131070680020] |This is a frustration to a lot of people who are new to RHEL, because all they see is that the versions available are all older than the latest releases found in the latest release of Ubuntu/Fedora/Gentoo/Whatever.
[131069680030] |But, if you're supporting a product that was deployed on an RHEL box, you don't have to worry about the underlying technology the product uses having its API change (with new versions of apache, php, perl, python, glibc, whatever).
[131070680040] |This even applies to most kernel modules provided for RHEL.
[131070680050] |As an example, if I've developed a web application that runs on RHEL 5.0, I can be fairly certain that it will continue to run on RHEL 5.6 two years later, all the while the RHEL system has been getting security updates and bug fixes the whole time.
[131070680060] |To answer the "more secure" question: Because RHEL backports security fixes to the released version they provide, you can continue to have a stable API to release software on without worrying about the security of the underlying system.
[131070690010] |This really depends on your situation.
[131069690020] |Ubuntu has a server and even an LTS (long term support) version that in a lot of ways is just as good as RHEL/CentOS.
[131070690030] |I work in a mixed environment.
[131070690040] |Generally using Fedora or Ubuntu for desktops, use FreeBSD, Gentoo and such for appliances and for servers I stick mainly to CentOS but manage a lot of Ubuntu servers as well.
[131070690050] |I won't say that either is better or worse than the other, just different goals.
[131069690060] |Both offer paid support and really, CentOS is just RHEL rebuilt to be free, so we're really comparing RHEL to Ubuntu.
[131069690070] |Ubuntu Server is usually more current on new features than RHEL; if you want to do an install and have the latest and greatest version of PHP, MySQL or other programs, you're gonna want Ubuntu.
[131070690080] |You can get them on RHEL, but it's a pain.
[131070690090] |So it really boils down to how you'll be using it.
[131070690100] |If this server is going to sit in a closet, alone and you run mainly off the shelf programs and have plenty of time to work on it, pick Ubuntu.
[131070690110] |In this case updates to this box aren't going to be a problem.
[131070690120] |If an update breaks something, you can have it fixed in a few minutes.
[131070690130] |I have an Ubuntu server sitting in a rack right next to my chair, it is on the non LTS Ubuntu and isn't a problem to do dist upgrades or security updates.
[131070690140] |If however you are going to be managing a lot of servers and using a lot of non standard software or other custom setups on the box, please pick RHEL/CentOS.
[131070690150] |I have never had an update break anything on RHEL/CentOS.
[131070690160] |I have boxes several hundred miles from me with very limited access that happily run automatic security updates and have never caused an issue with my customizations.
[131070690170] |Can't say the same for Ubuntu.
[131070690180] |Spend time with both, see what you like and what fits with your specific needs.
[131070700010] |In the world I work in, the CAD tools used all require RedHat Enterprise be used -- some with specific kernel version and build numbers -- or the vendors won't support their products.
[131070700020] |The reason why they do this is obvious.
[131070700030] |There are just too many distributions and potential kernels and library combinations for them to be able to reproduce every possible environment to either validate their product or reproduce errors that customers are seeing.
[131069700040] |Requiring RedHat means both that they can use their reference platform to reproduce customer errors, and that the customer has a support contract with RedHat to increase the likelihood that any real problem traced to the RedHat reference environment will actually get fixed.
[131070700050] |When you are spending multiple-000 $ per seat per year on some CAD tool, the RedHat support costs are rounding noise.
[131070700060] |That said, what most of my customers do is have only one or two genuine RedHat systems and run most of their compute on CentOS, which is a free rebuild of RedHat.
[131070700070] |If a problem is found, it is reproduced on the RedHat systems, and the vendor will happily support the issue from there.
[131070710010] |gdm graphical login prompt problem (OpenSUSE 11.2)
[131070710020] |I'm trying to figure out why the graphical login prompt won't show up at the login page.
[131070710030] |I see the wallpaper just fine, but the graphical login prompt won't show up no matter how long I wait.
[131070710040] |So, in the console, I've done init 3 to shut down gdm and then restarted it with init 5.
[131070710050] |The problem still persists.
[131070710060] |I downloaded and installed kdm and set it as the default display manager by editing /etc/sysconfig/displaymanager.
[131070710070] |It worked fine, except that the main menu and many other items in the panels are gone.
[131070710080] |So I removed gdm with zypper and then reinstalled it again.
[131070710090] |I set the gdm as the default display manager and restarted gdm.
[131070710100] |The same problem shows up again.
[131070710110] |So I tried to bypass the login page entirely by enabling autologin.
[131070710120] |I put my username in the autologin section of /etc/sysconfig/displaymanager and restarted gdm.
[131070710130] |No go, I still get the same problem.
[131070710140] |I'm thinking it might not be gdm-related and that something else is interfering with gdm startup, but I'm stumped at this point.
[131070710150] |Any ideas?
[131070720010] |Try this: cat /etc/sysconfig/desktop
[131070720020] |That should tell you what your DISPLAYMANAGER and DESKTOP are set to.
[131070720030] |This file is used by /etc/X11/xinit/Xclients to determine which desktop to start.
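A sketch of what that file might contain if GNOME/gdm is in use (the values here are illustrative, not a verified default):

    DISPLAYMANAGER="gdm"
    DESKTOP="gnome"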
[131070730010] |Local Apache Not Recognizing New Folder
[131070730020] |I'm running OpenSuse 11.3 x64.
[131070730030] |I have installed apache2, PHP5, and MySQL in order to do some web design offline, i.e. they are for internal network use only.
[131070730040] |There is also phpMyAdmin installed.
[131070730050] |The default directory for the "server" is /srv/www/htdocs.
[131070730060] |To access a specific site in progress I create a subfolder there, then just navigate via http://10.13.23.201/NAMEOFFOLDER from my internal network.
[131070730070] |At least that is how it should work but it doesn't.
[131070730080] |I created a new folder called wlc, so its directory is /srv/www/htdocs/wlc; however, when I go to the address http://10.13.23.201/wlc I get a Remote Server Or File Not Found error from my browser. There are files in there, index.php, that should load, and apache has been set to recognize *.php files.
[131070730090] |I know the theory should work, as I can access /srv/www/htdocs/phpMyAdmin by going to http://10.13.23.201/phpMyAdmin and it loads just fine.
[131070730100] |Also, the error is different if I go to a folder that doesn't exist; for example, http://10.13.23.201/THISFOLERDOESNTEXISIT will return an Object Not Found error.
[131070740010] |Sorry, is your wlc folder inside a vhost or in httpd.conf?
[131070740020] |If it isn't, just add it (I don't know how it's set up on SUSE, because I'm way more used to FreeBSD), but check httpd.conf or /path/to/apache/conf/extra/httpd-vhosts.conf on that machine; a sketch of the kind of section to look for is below.
[131070740030] |(The configuration may be different on that one, and may have a Debian layout: sites-enabled...) Let me know what you have ;)
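A minimal sketch of such a section (the path comes from the question; the directives are generic Apache 2.2 defaults, not a verified SUSE configuration):

    <Directory "/srv/www/htdocs/wlc">
        # let Apache serve files from this folder
        Options Indexes FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>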
[131070750010] |make sure your application folder is owned by wwwrun:www
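For example, with the folder from the question:

    chown -R wwwrun:www /srv/www/htdocs/wlc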
[131070760010] |Where does Mac OS X come from?
[131070760020] |Discussing with Mac owners, I got several versions of where Mac OS X comes from.
[131070760030] |It is known to have some root in BSD, but how much, and where?
[131070760040] |Some say that Mac OS X has a FreeBSD kernel, with all of the utilities on top of it that make it an OS being Mac-specific.
[131070760050] |(Not speaking about user apps here, only all of the init, ls, cd, and others. binutils?)
[131070760060] |Others say that Mac OS X has a Darwin kernel, which is pure Mac, and that the OS utilities come from BSD.
[131070760070] |Where's the truth?
[131070770010] |On the Unix side, OS X is a descendant of NeXTSTEP, which was derived from 4.3BSD with the core parts of the kernel replaced with Mach.
[131070770020] |The NeXT programming API, which eventually came to be called OpenStep, is the basis of today's Cocoa API for OS X. Obviously there have been 10 years further development on Cocoa, so the two APIs have diverged, though there are ongoing efforts to provide open source API-compatible Cocoa clones.
[131070770030] |Add to that the Classic MacOS compatibility API, called Carbon, and you have OS X.
[131070770040] |As for the FreeBSD kernel idea, it's sorta correct, but it's an unsophisticated way to look at it.
[131070770050] |The original kernel came, as I said, from NeXT, which assembled their first kernel from 4.3BSD and Mach.
[131070770060] |This means that both FreeBSD and NeXTSTEP shared some code via 4.3BSD.
[131070770070] |The meme that OS X has some FreeBSD in it, however, has two more recent sources.
[131070770080] |First, Apple has continued to borrow innovations from the BSD world, usually from FreeBSD.
[131070770090] |Second, Apple hired FreeBSD project co-founder Jordan Hubbard not long after making the first public OS X release.
[131070770100] |As far as I can tell, he still works for Apple.
[131070780010] |The history of MacOS is a little bit more convoluted.
[131070780020] |I was very interested in this in the late 90's as Mach had been pitched around the world as a faster way of building a Unix system.
[131070780030] |The origin of the kernel is a bit more complicated.
[131070780040] |It all starts with AT&T distributing their operating system to some universities for free.
[131070780050] |This Unix was improved extensively at Berkeley and became the foundation for the BSD variations of Unix, incorporating several new innovations like the "Fast File System" (UFS), symlinks, and the sockets API.
[131070780060] |AT&T went on their own way and built System V at the same time.
[131070780070] |Meanwhile, research continued and some folks adopted the work from BSD as a foundation.
[131070780080] |At CMU, the BSD kernel was used as the foundation for prototyping a few new ideas: threads, an API to control the virtual memory system (through pluggable "pagers" - user level mmap), a kernel-level remote procedure call system and most importantly the idea of moving some kernel level operations to user space.
[131070780090] |This became the Mach kernel.
[131070780100] |I am not 100% sure if mmap came from Mach, and later was adopted by BSD, or if Mach merely pioneered the idea and BSD added their own mmap based on the ideas of Mach.
[131070780110] |Although the Mach kernel was described as a micro-kernel, up to version 2.5 it was merely a system that provided the thread, mmap, and message-passing features but remained a monolithic kernel; all the services were running in kernel mode.
[131070780120] |At this time Rick Rashid (now at Microsoft) and Avie Tevanian (now at Apple) had come up with a novel idea that could accelerate Unix.
[131070780130] |The idea was to use the mmap system call to pass data to be copied from user space to the "servers" implementing the file system.
[131070780140] |This idea was essentially a variation of trying to avoid making copies of the same data, but it was pitched as a benefit of micro kernels, even if the feature could be isolated from a micro kernel.
[131070780150] |The benchmarks of this VM-backed faster Unix system are what drove people at Next and at the FSF to pick Mach as the foundation for their kernels.
[131070780160] |Next went with the Mach 2.5 kernel (which was based on either BSD 4.2 or 4.3) and GNU would not actually start on the work for years.
[131070780170] |This is what the Nextstep operating systems were using.
[131070780180] |Meanwhile at CMU, work continued on Mach and they finally realized the vision of having multiple servers running on top of a micro kernel with version 3.0.
[131070780190] |I am not aware of anyone in the wild being able to run Mach 3.0, as all of the interesting user-level servers used AT&T code and were therefore considered encumbered, so it remained a research product.
[131070780200] |Around this time the Jolitz team had done a port of 4.3+ BSD to the 386 architecture and published their porting efforts in Dr. Dobb's Journal.
[131070780210] |386BSD was not actively maintained, and a group emerged to maintain and move it forward: the NetBSD team.
[131070780220] |Internal fights within the NetBSD group caused the first split and FreeBSD was formed out of this.
[131070780230] |NetBSD at the time wanted to focus on having a cross-platform BSD, and FreeBSD wanted to focus on having a Unix that did great on x86 platforms.
[131070780240] |A little bit later, NetBSD split again due to some other disputes, and this led to the creation of OpenBSD.
[131070780250] |A fork of BSD 4.3 for x86 platforms went commercial with a company called BSDi, and various members of the original Berkeley team worked there and kept good relations with the BSD team at the University.
[131070780260] |AT&T was not amused and started the AT&T vs BSDi lawsuit, which was later expanded to sue the University as well.
[131070780270] |The lawsuit was about BSDi using proprietary code from AT&T that had not been rewritten by Berkeley.
[131070780280] |This set BSD back compared to the up-and-coming Linux operating system.
[131070780290] |Although things were not looking good for the defendants, at some point someone realized that SystemV had incorporated large chunks of BSD code under the BSD license and AT&T had not fulfilled their obligations in the license.
[131070780300] |A settlement was reached in which AT&T would not have to pull their product from the market, and the University agreed to rip out any code that could still be based on AT&T code.
[131070780310] |The university then released two versions of BSD 4.4: encumbered and lite.
[131070780320] |The encumbered version would boot and run, but contained AT&T code.
[131070780330] |The lite version did not contain any code from AT&T but did not work.
[131070780340] |The various BSD efforts re-did their work on top of the new 4.4 lite release and had a booting system within months.
[131070780350] |Meanwhile, the Mach 3.0 micro kernel remained not very useful without any of the user-land servers.
[131070780360] |A student from a Scandinavian university (I believe; I might have this wrong) was the first to create a full Mach 3.0 system with a complete OS based on the 4.4 lite release; I believe this was called "Lites".
[131070780370] |The system worked, but was slow.
[131070780380] |During 1992-1996, BSD already had an mmap() system call, as did most other Unix systems.
[131070780390] |The "micro kernel advantage" never really came to fruition; it simply was not there.
[131070780400] |Next still had a monolithic kernel.
[131070780410] |The FSF was still trying to get Mach to build, and, not wanting to touch the BSD code or contribute to any of the open-source BSD efforts, they kept charging away at a poorly specified kernel vision and drowning in RPC protocols for their own kernel.
[131070780420] |The micro kernel looked great on paper, but turned out to be over-engineered and just made everything slower.
[131070780430] |At this point we also had the Linus vs. Andy debate over micro-kernels vs. monolithic kernels, and the world started to realize that it was just impossible to add all of those extra cycles to a micro kernel and still come out ahead of a well-designed monolithic kernel.
[131070780440] |Apple had not yet acquired NextStep, but was also starting to look into Mach as a potential kernel for their future operating systems.
[131070780450] |They hired the Open Software Foundation to port Linux to the Mach kernel; this was done out of their Grenoble offices, and I believe it was called "mklinux".
[131070780460] |When Apple bought Next, what they had on their hands was a relatively old Unix foundation, a 4.2- or 4.3-based Unix, and by then not even free software ran well out of the box on those systems.
[131070780470] |They hired Jordan Hubbard away from FreeBSD to upgrade their Unix stack.
[131070780480] |His team was responsible for upgrading the user land, and it is not a surprise that the MacOS userland was upgraded to the latest versions available on BSD.
[131070780490] |Apple did switch their Mach from 2.5 to 3.0 at some point, but decided to not go with the micro-kernel approach and instead kept everything in-process.
[131070780500] |I have never been able to confirm whether Apple used Lites, hired the Scandinavian hacker, or adopted 4.4 lite as their OS.
[131070780510] |I suspect they did, but I had already moved on to Linux and had stopped tracking the BSD/Mach world.
[131070780520] |There was a rumor in the late 90's that Avie at Apple tried to hire Linus (who was already famous at this point) to work on his baby, but Linus chose to continue working on Linux.
[131070780530] |History aside, this page describes the userland and the Mach/Unix kernel:
[131070780540] |http://developer.apple.com/mac/library/documentation/Darwin/Conceptual/KernelProgramming/Architecture/Architecture.html#//apple_ref/doc/uid/TP30000905-CH1g-CACDAEDC
[131070780550] |I found this graphic of the history of OSX:
[131070790010] |How to use IBus with KDE
[131070790020] |Hi,
[131070790030] |The KDE install on my PC includes IBus and the Japanese Anthy IME.
[131070790040] |In the IBus preference, I have enabled the Anthy IME.
[131070790050] |The IBus daemon is running and there is an IBus icon in the system tray, however, there does not appear to be any way of switching IMEs.
[131070790060] |This is on PC-BSD 8.2.
[131070790070] |What am I doing wrong here?
[131070790080] |Thanks
[131070800010] |First, run ps -ef | grep ibus to check that the daemon is running with the correct option.
[131070800020] |There should be a process like ibus-daemon --xim.
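If it isn't, you can restart the daemon yourself; a sketch, assuming a standard iBus setup (the three environment variables are the usual iBus ones and normally belong in your shell or X startup files):

    export GTK_IM_MODULE=ibus
    export QT_IM_MODULE=ibus
    export XMODIFIERS=@im=ibus
    ibus-daemon --xim -d -r   # -d: run in the background, -r: replace any running daemon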
[131070800030] |Second, beware of the program you use to test iBus.
[131070800040] |For example, Kate (the KDE text editor) doesn't work with iBus (or at least not by default, you'll have to look more if you want that).
[131070800050] |I believe other KDE apps are like Kate as well, though I'm not sure.
[131070800060] |I use Chromium or Firefox to test iBus (click on the address bar and perform the key combination).
[131070800070] |Third, (you may have already discovered that) iBus only functions properly once you have logged out and logged back in.
[131070800080] |Update: a little searching revealed that there is ibus-qt for KDE applications.
[131070810010] |Will a "customized" initrd survive a kernel update via yum?
[131070810020] |I have a CentOS 5.5 installation with the stock CentOS 5.5 kernel.
[131070810030] |I have modified the init script in the initrd, commenting out some unneeded modules, lowering the interval time of the "stabilized" command, etc.
[131070810040] |My question is, what will happen in the future when Yum updates the kernel?
[131070810050] |Will my initrd modifications make it into the initrd of the new kernel?
[131070820010] |No, your changes won't be in the new initrd.
[131070820020] |The CentOS kernel packages have a post-install script that runs /sbin/new-kernel-pkg --package kernel --mkinitrd --depmod --install 2.6.18-238.1.1.el5 (an example from the RHEL5 kernel I have installed).
[131070820030] |That command runs mkinitrd, which builds a new initrd; the changes that you made to the previous initrd won't be recreated there unless you've also changed the mkinitrd script or its files (or you patched nash or something like that).
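So any customizations have to be re-applied after each kernel update. A rough sketch of doing that by hand on CentOS 5, where the initrd is a gzipped cpio archive (the version string is just an example; back up the image first):

    mkdir /tmp/initrd && cd /tmp/initrd
    gunzip -c /boot/initrd-2.6.18-238.1.1.el5.img | cpio -id    # unpack
    vi init                                                     # re-apply your edits
    find . | cpio -o -H newc | gzip -9 > /boot/initrd-2.6.18-238.1.1.el5.img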
[131070830010] |More free blocks than reserved, but still I get "no space on device"
[131070830020] |As you can see in this dumpe2fs -h output (I snipped the end but left the head in case something there is important), I have more 'Free blocks' (about 86000 more, in fact) than are reserved, but I get a "no space on device" error even for a tiny file (echoing something into a file for testing).
[131070830030] |Color me stumped.
[131070840010] |You are probably experiencing disk corruption.
[131070840020] |Boot to single user or recovery mode and run fsck on the affected partition(s).
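A minimal sketch, assuming the affected filesystem is on /dev/sda2 (a placeholder; check mount or /etc/fstab for the real device):

    umount /dev/sda2    # the filesystem must not be mounted during the check
    fsck -f /dev/sda2   # -f forces a full check even if the fs looks clean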
[131070850010] |What functionality do I lose by disabling GDM/KDM/SLIM/CDM and other display managers?
[131070850020] |I've been wondering lately why I need GDM, so I disabled it for the sake of experiment by modifying the upstart file /etc/init/gdm (I run Ubuntu 10.10 desktop).
[131070850030] |So now the computer boots to a command prompt and I just type in startx if I need a GUI.
[131070850040] |So far everything runs just fine.
[131070850050] |But, does anyone know if there are any drawbacks to not using gdm?
[131070850060] |Would I lose any functionality?
[131070860010] |If you are an expert command line user, then I would say no.
[131070860020] |You still have all the programs you have installed.
[131070860030] |All you are not seeing is the graphical representation.
[131070860040] |I have startx run at startup so that I can open up the browser without having to run startx every time.
[131070870010] |If you are starting your X anyway, all the time, then there is no point in repeating startx manually over and over again.
[131070870020] |If you use *DM, you can use a program to lock the screen and go away from the machine.
[131070870030] |If you do not use *DM but just launch your X session via startx, you have to lock your X session AND you have to lock your console; otherwise a person can press Alt+Fn and take over your account.
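For example, a sketch with two common tools (package names and availability vary by distribution; both are illustrations, not something specific to Ubuntu):

    xlock &     # or slock, or xscreensaver-command -lock: locks the X session
    vlock -a    # locks all virtual consoles (from the vlock package)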
[131070870040] |*DM can be configured so that people can attach remotely to that *DM. http://www.faqs.org/docs/Linux-mini/XDM-Xterm.html#XDM
[131070870050] |Other than that: you don't lose anything.
[131070870060] |Regarding "want to become better at CLI": X is (mostly as a joke) considered a great xterm multiplexer, giving the user more space to improve his CLI skills :) I do not consider that a valid argument against an X session, no matter whether it is triggered by *DM or by startx.
[131070880010] |Fluxbox borderWidth on focused maximized windows
[131070880020] |I've been using Fluxbox for a few days now, trying to get things set up as I want them.
[131070880030] |I added window.focus.borderWidth: 2 (and .borderColor: white) to my .fluxbox/overlay so I can easily determine which window is focused without looking down at the toolbar.
[131070880040] |This works perfectly, except for one thing:
[131070880050] |If a window is maximized, such as Firefox, then no focus border appears, so for maximized windows I can't use the same visuals to determine whether they are focused.
[131070880060] |Why the obsession: I use deco: border (I don't need window titles) on a multi-head setup, so I like to know what's in focus.
[131070880070] |In a nutshell: I want a large border for focused maximized windows in Fluxbox. How?
[131070890010] |Submit a feature request to sf.net/projects/fluxbox. What you want is currently blocked by a piece of code in src/WindowState.cc that removes the border when a window is maximized, unless the maximization is vertical-only or horizontal-only.
[131070900010] |Detailed sparse file information on Linux
[131070900020] |I have a sparse file, in which only some blocks are allocated:
[131070900030] |I would like to know which blocks of the file are actually allocated.
[131070900040] |Is there a system call or kernel interface that could be used to get a list of either the allocations, or the holes of file?
[131070900050] |Simply checking for a long enough string of zeros (the approach used by GNU cp, rsync, etc) does not work correctly:
[131070900060] |It detected other sequences of zeros that were actually allocated.
[131070910010] |It depends on the file system.
[131070910020] |I don't believe there is a call, which may be why many tools don't handle copying sparse files well.
[131070910030] |The GNU toolchain uses searching for large blocks of zeros, as that allows it to remove unused allocated blocks.
[131070910040] |Many copy tools will convert a sparse file into a file with all blocks allocated.
[131070910050] |You will likely have to open the inode and parse the result.
[131070910060] |Inode format is file system dependent.
[131070910070] |Some file systems may have part of your data in the inode itself.
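One thing you can do portably is confirm that the file is sparse at all, by comparing its apparent size with what is actually allocated; a sketch with GNU stat (the file name is a placeholder):

    stat -c '%s bytes apparent, %b blocks of %B bytes allocated' sparse.img
    # the file is sparse when %b * %B is noticeably smaller than %s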
[131070920010] |grep multiple times per line
[131070920020] |grep -c is useful for finding how many times a string occurs in a file, but it only counts each occurrence once per line.
[131070920030] |How can I count multiple occurrences per line?
[131070920040] |I'm looking for something more elegant than:
[131070930010] |I think the perl expression you have is as good as you are going to get.
[131070930020] |If you want to use grep to count, you could pipe the input through sed to add a \n after each of your matches.
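A minimal sketch of that approach, assuming the search string is needle and GNU sed (other seds may need a literal newline in the replacement):

    sed 's/needle/needle\n/g' file.txt | grep -c needle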
[131070940010] |Your example only prints out the number of occurrences per line, not the total in the file.
[131070940020] |If the total is what you want, something like this might work:
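A sketch in awk, with needle as a placeholder pattern; gsub returns the number of substitutions it performed, so summing it gives the file-wide total:

    awk '{ total += gsub(/needle/, "&") } END { print total+0 }' file.txt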
[131070950010] |grep's -o will only output the matches, ignoring lines; wc can count them:
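Something along these lines, with needle and file.txt as placeholders:

    grep -o needle file.txt | wc -l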
[131070950020] |This will also match 'needles' or 'multineedle'.
[131070950030] |Only single words:
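Adding -w restricts the match to whole words:

    grep -ow needle file.txt | wc -l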
[131070960010] |How to change/remove blue panel glow in KDE4.6?
[131070960020] |After a system (as well as KDE) update, I got this blue panel glow which appears on hover.
[131070960030] |I browsed through system settings, panel settings, desktop settings, but couldn't find an option to change/remove this blue glow.
[131070960040] |Did anyone else encounter this problem?
[131070960050] |Where should I look?
[131070960060] |I also tried this, but it didn't help.
[131070960070] |I use Arch, but don't think it matters much in this case...
[131070960080] |Thank you.
[131070970010] |The blue glow is not an effect; it's part of the desktop theme.
[131070970020] |Don't use Oxygen or a few other desktop themes.
[131070970030] |Aya should work for you.
[131070970040] |Here's what the resulting panel looks like: no blue glow.