[131052290010] |
Buffers or tabs in vim? What are advantages/disadvantages of each approach?
[131052290020] |I'm used to modeless editors.
[131052290030] |Only the past year I've been using vim/gvim, which has a modal approach.
[131052290050] |I'm used to tabs in all editors, since every editor I used before vim had them.
[131052290050] |In gvim, you don't necessarily need to use tabs: you can just use buffers.
[131052290060] |My question is: what are the advantages/disadvantages between these two approaches (buffers vs tabs)?
[131052290070] |Why do these both options exist?
[131052300010] |Tabs are generally used for views, and Vim tries to maintain your productivity with plugins like BufExplorer and NERDTree for buffer management.
[131052300020] |However, I'm also guilty of using a single buffer open in each tab - it's a bad habit.
[131052310010] |First of all, quick naming correction - anything open in Vim is a "buffer".
[131052310020] |The terminology here is similar to emacs, if you are familiar with that editor.
[131052310030] |Buffers simply refer to open files in the memory of the current Vim process.
[131052310040] |To see a list of your buffers, use
[131052310050] |which shows you a list of the current buffers, numbered in the order that they were opened.
[131052310060] |However, buffers do not have anything to do with the visual representation of files in Vim.
[131052310070] |I think you may be thinking of splits or views.
[131052310080] |As to the difference between splits and tabs - it really is your preference!
[131052310090] |Some people like to be able to see a lot of code at once (or have the screen real estate to do so), and so might prefer using splits.
[131052310100] |Other people, like yourself, are more used to tabs and so might prefer that instead.
[131052310110] |These links contain helpful information about using splits and tabs:
[131052310120] |for splits (they call them viewports in this doc): http://www.linux.com/archive/feed/54157
[131052310130] |for tabs: same site (sorry, can't post more than one hyperlink)
[131052310140] |I hope this answers your question!
[131052310150] |Please clarify, if it did not.
[131052320010] |See http://stackoverflow.com/questions/102384/using-vims-tabs-like-buffers/103590#103590 (or why spliting the vim community among all SE/SO sites is a bad idea)
[131052330010] |How do I make a shell script that sends output to a process?
[131052330020] |I'm currently running a server console program in a screen because I need to both read it and occasionally send commands.
[131052330030] |I'd like to run the app as a daemon in the background (start/stop it with init).
[131052330040] |I could tail -f the log, but that won't let me send input to the process.
[131052330050] |Is there any way to set this up so I can both read and send input, but still have it running in the background?
[131052330060] |I'd also like to be able to send input to the daemon from different processes as well (a shell script that could send a "Stop\n" command, for instance)
[131052340010] |Read from a pipe, write to a file
[131052340020] |If you want the daemon to read input produced by some arbitrary process, you need to connect that process to a pipe.
[131052340030] |Here the arbitrary process is you echoing commands, and it's going to run in a different context.
[131052340040] |So create a named pipe (often called a fifo in unix contexts).
[131052340050] |And just write commands to the pipe:
[131052340060] |This is unlikely to work as is however: there's a good chance that the daemon will exit when it sees an end of file on its standard input, which happens as soon as the first process that writes to the pipe terminates.
[131052340070] |You can use tail -f to avoid that problem.
[131052340080] |With some tail implementations, you may get bitten by buffering: the tail process will wait until it has amassed enough bytes to emit some output.
[131052340090] |I don't think this is solvable in the POSIX toolbox; if that's a problem, use a trivial C or Perl or Python program.
[131052340100] |As far as I can tell the tail from GNU coreutils (as found on Linux and elsewhere) is safe in this respect.
[131052340110] |When you stop the daemon, echo >/var/run/daemon.fifo will kill the tail process.
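The concrete commands for this setup were not preserved in the post; here is a minimal sketch of the pattern the answer describes (the daemon name "mydaemon" is a placeholder, the fifo path matches the one used above):

  # create the fifo once, and keep it fed through tail so the daemon never sees EOF
  mkfifo /var/run/daemon.fifo
  tail -f /var/run/daemon.fifo | mydaemon &

  # later, from any other process or script, send a command
  echo "Stop" > /var/run/daemon.fifo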
[131052340120] |Starting the program inside screen
[131052340130] |Instead of invoking the daemon directly from your service manager (are you really using just SysV init, or something additional like wrapper scripts or Upstart?), invoke
[131052340140] |Since the daemon won't be a child process of the service manager, you need to make sure to send a signal to the right process.
[131052340150] |How to do that depends on exactly how the daemon is started and by what.
[131052340160] |It's technically possible to attach a running process to a terminal, but there's a risk you'll crash the program, so this is definitely out for a production system.
[131052340170] |The -L option makes screen write everything that appears in its window to a file.
[131052340180] |The file name is given in daemon.screenrc with the logfile directive.
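The exact invocation is missing from this post; it was presumably something along these lines (the session name, config path and daemon command are placeholders, not from the original answer):

  # start a detached screen session running the daemon, with logging enabled
  screen -d -m -S daemon -c /etc/daemon.screenrc -L mydaemon

  # in daemon.screenrc, point the log at a known file
  logfile /var/log/daemon.screen.log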
[131052350010] |fetchmail without an mda
[131052350020] |I would like to retrieve mail from a server using fetchmail and have it deposit it directly into an mailbox/repository and not hand it off to a mda.
[131052350030] |Is this possible?
[131052350040] |How?
[131052350050] |Alternatively, is there an MDA which simply receives things from, er, fetchmail and stores them in some sort of repository without doing any processing?
[131052360010] |Use the mda option in your .fetchmailrc to specify maildrop or procmail as your MDA.
[131052360020] |This will deposit the mail in the system mailbox (typically /var/spool/mail/handygandy or /var/mail/handygandy).
[131052360030] |If you want it elsewhere, or if you want to dispatch the mails based on their content, write a ~/.mailfilter file.
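As a hedged sketch (server name, account details and the maildrop path are placeholders, not from the original post), the relevant .fetchmailrc entry would look roughly like:

  poll mail.example.com protocol imap user "handygandy" password "secret"
      mda "/usr/bin/maildrop"

and a ~/.mailfilter file can then deliver everything to one place without further processing:

  # deliver everything to ~/Maildir
  to "$HOME/Maildir/"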
[131052370010] |How can I configure cgroups to fairly share resources between users?
[131052370020] |There used to be a kernel config option called sched_user or similar under cgroups.
[131052370030] |This allowed (to my knowledge) all users to fairly share system resources.
[131052370040] |In 2.6.35 it is not available.
[131052370050] |Is there a way I can configure my system to automatically share io/cpu/memory resources between all users (including root?).
[131052370060] |I have never set up a cgroup before, is there a good tutorial for doing so?
[131052370070] |Thank you very much.
[131052380010] |The kernel documentation provides a general coverage of cgroups with examples.
[131052380020] |The cgroups-bin package (which depends on libcgroup1) already provided by the distribution should be fine.
[131052380030] |Configuration is done by editing the following two files:
[131052380040] |Used by libcgroup to define control groups, their parameters and mount points.
[131052380050] |Used by libcgroup to define the control groups to which a process belongs.
[131052380060] |Those configuration files already have examples in it, so try adjusting them to your requirements.
[131052380070] |The man pages cover their configuration quite well.
[131052380080] |Afterwards, start the workload manager and rules daemon:
[131052380090] |The workload manager (cgconfig) is responsible for allocating the resources.
[131052380100] |Adding a new process to the manager:
[131052380110] |Adding an already running process to the manager:
[131052380120] |Or automatically over the cgrules.conf file and the CGroup Rules Daemon (cgred), which forces every newly spawned process into the specified group.
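The commands referred to above were lost from this answer; with the standard libcgroup tools they would look roughly like the following (the group name "alice" and the PID are placeholders):

  # start the configuration and rules daemons (SysV-style init)
  service cgconfig start
  service cgred start

  # start a new process directly inside a control group
  cgexec -g cpu:alice some_command

  # move an already running process (by PID) into a control group
  cgclassify -g cpu:alice 1234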
[131052380130] |Example /etc/cgconfig.conf :
[131052380140] |Example /etc/cgrules.conf :
[131052380150] |This will share the CPU resources roughly 50-50 between the users 'alice' and 'bob'.
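The example files themselves did not survive in this post; a sketch that would match the described 50-50 CPU split might look like this (mount point and share values are assumptions, not the original author's files):

  # /etc/cgconfig.conf
  mount { cpu = /cgroup/cpu; }
  group alice { cpu { cpu.shares = 1024; } }
  group bob   { cpu { cpu.shares = 1024; } }

  # /etc/cgrules.conf
  # user   controller   destination group
  alice    cpu          alice
  bob      cpu          bob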
[131052390010] |Why are spin locks good choices in Linux Kernel Design instead of something more common in userland code, such as semaphore or mutex?
[131052390020] |I understand that spinlocks waste CPU time by busy-waiting.
[131052390030] |I would like to know why spin locks are nonetheless good choices in Linux kernel design, instead of something more common in userland code, such as a semaphore or mutex.
[131052400010] |The choice between a spinlock and another construct which causes the caller to block and relinquish control of a cpu is to a large extent governed by the time it takes to perform a context switch (save registers/state in the locking thread and restore registers/state in another thread).
[131052400020] |The time it takes and also the cache cost of doing this can be significant.
[131052400030] |If a spinlock is being used to protect access to hardware registers or similar, where any other thread holding the lock is only going to keep it for a matter of milliseconds or less before releasing it, then it is a much better use of CPU time to spin waiting rather than to context switch and carry on.
[131052410010] |Operate on command output in tcsh
[131052410020] |I think the correct operator for executing a subshell command in tcsh is the backtick `, but when I attempt to do a diff on the results of two commands I get an error.
[131052410030] |When I direct the information to files first and do a diff on the two files I get exactly what I want but I'd rather just use a single command like:
[131052410040] |What is the correct syntax in tcsh?
[131052420010] |You have the correct syntax for a command substitution.
[131052420020] |But what you need to pass to diff are two file names, not two file contents, which is what you're trying to pass.
[131052420030] |(What you're actually passing is in fact more complicated, but if you'd written diff "`jar -tvf org.jar`" "`jar -tvf new.jar`", you'd be passing two file contents.)
[131052420040] |I don't think tcsh has a way to do what you're trying to do, without creating a temporary file.
[131052420050] |In ksh, bash or zsh, you can do it this way:
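The concrete command was not preserved here; the usual idiom in those shells is process substitution, e.g. (using the jar commands from the question):

  diff <(jar -tvf org.jar) <(jar -tvf new.jar)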
[131052430010] |Problems moving home partition
[131052430020] |I was running low in disk space on my Linux box, Ubuntu 10.10 Desktop, (specifically on my /home partition) so I added another disk to it and I am trying to move the home partition into it.
[131052430030] |I am trying to follow the steps on this guide here: https://help.ubuntu.com/community/Partitioning/Home/Moving
[131052430040] |However, when copying the files over to the new mounted partition rsync seems to fail silently.
[131052430050] |When I compare the folders, the new one is still empty.
[131052430060] |This is the command that I am executing:
[131052430070] |Has it anything to do with the encryption on my home folder?
[131052440010] |Try adding the "verbose" flag to rsync, and see what that says:
[131052440020] |Edit your question to include the results, and we'll be able to help you further.
[131052440030] |Also, I'm not sure why you're going with /home/., try just /home/:
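For example (the destination mount point /media/newhome is a placeholder for wherever the new partition is mounted; the flags are a sketch, not the exact command from the guide):

  rsync -av /home/ /media/newhome/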
[131052440040] |EDIT: Also, please make sure you're not logged in to any user with an encrypted home directory.
[131052440050] |If all users have encrypted home directories, add a user who does not first, and use that user to copy.
[131052450010] |How is new hardware support added to the linux kernel?
[131052450020] |Imagine there's a company A that releases a new graphics adapter.
[131052450030] |Who manages the process that results in this new graphics adapter being supported by the Linux kernel in the future?
[131052450040] |How does that proceed?
[131052450050] |I'm curious how kernel support for any new hardware is handled; on Windows companies develop drivers on their own, but how does Linux get specific hardware support?
[131052460010] |Driver support works the same way as with all of open source: someone decides to scratch their own itch.
[131052460020] |Sometimes the driver is supplied by the company providing the hardware, just as on Windows.
[131052460030] |Intel does this for their network chips, 3ware does this for their RAID controllers, etc.
[131052460040] |These companies have decided that it is in their best interest to provide the driver: their "itch" is to sell product to Linux users, and that means ensuring that there is a driver.
[131052460050] |In the best case, the company works hard to get their driver into the appropriate source base that ships with Linux distros.
[131052460060] |For most drivers, that means the Linux kernel.
[131052460070] |For graphics drivers, it means X.org. There's also CUPS for printer drivers, NUT for UPS drivers, SANE for scanner drivers, etc.
[131052460080] |The obvious benefit of doing this is that Linux distros made after the driver gets accepted will have support for the hardware out of the box.
[131052460090] |The biggest downside is that it's more work for the company to coordinate with the open source project to get their driver in, for the same basic reasons it's difficult for two separate groups to coordinate anything.
[131052460100] |Then there are those companies that choose to offer their driver source code directly, only.
[131052460110] |You typically have to download the driver source code from their web site, build it on your system, and install it by hand.
[131052460120] |Such companies are usually smaller or specialty manufacturers without enough employees that they can spare the effort to coordinate with the appropriate open source project to get their driver into that project's source base.
[131052460130] |A rare few companies provide binary-only drivers instead of source code.
[131052460140] |An example is the more advanced 3D drivers from companies like NVIDIA.
[131052460150] |Typically the reason for this is that the company doesn't want to give away information they feel proprietary about.
[131052460160] |Such drivers often don't work with as many Linux distros as with the previous cases, because the company providing the hardware doesn't bother to rebuild their driver to track API and ABI changes.
[131052460170] |It's possible for the end user or the Linux distro provider to tweak a driver provided as source code to track such changes, so in the previous two cases, the driver can usually be made to work with more systems than a binary driver will.
[131052460180] |When the company doesn't provide Linux drivers, someone in the community simply decides to do it.
[131052460190] |There are some large classes of hardware where this is common, like with UPSes and printers.
[131052460200] |It takes a rare user who a) has the hardware; b) has the time; c) has the skill; and d) has the inclination to spend the time to develop the driver.
[131052460210] |For popular hardware, this usually isn't a problem because with millions of Linux users, these few people do exist.
[131052460220] |You get into trouble with uncommon hardware.
[131052470010] |Fedora yum update error
[131052470020] |When I run yum update, I receive the following error:
[131052470030] |Please let me know what to do.
[131052480010] |I'm not a regular Fedora user but the following is a generic solution I've used in the past.
[131052480020] |Try running
[131052480030] |and then rerunning the yum update command.
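The command itself is missing from this answer; the usual generic cleanup that fits this advice (an assumption, since the original text was lost) is:

  yum clean all
  yum update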
[131052490010] |Make samba follow symlink outside share
[131052490020] |This is Ubuntu Server 10.04 64-bit and Samba 3.4.7.
[131052490030] |I have a shared directory /home/mit/share and another one /home/temp that I link into the shared one:
[131052490040] |ln -s /home/temp /home/mit/share/temp
[131052490050] |But on Windows, after net use'ing, I cannot open S:/temp; on Linux it is possible to access /home/mit/share/temp as expected.
[131052490060] |This works if I link directories inside /home/mit/share/temp, so I guess samba restricts following a link that points outside/above the shared directory.
[131052490070] |I thought I could override this restriction with
[131052490080] |This is my smb.conf:
[131052500010] |Greetings, I've tried putting this into my configuration to fix symlinks for Windows in my setup, but I am not sure if it will affect Windows clients; otherwise it follows symlinks when I connect to this box.
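The configuration snippets themselves were not preserved in this thread; the smb.conf options usually suggested for letting Samba follow symlinks that point outside the share are, as an untested sketch:

  [global]
      unix extensions = no

  [share]
      follow symlinks = yes
      wide links = yes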
[131052510010] |Linux webcam software
[131052510020] |What are some good full-featured (whatever that means) webcam applications in linux.
[131052510030] |I know about cheese, but found some of its features lacking.
[131052510040] |Are there any others?
[131052520010] |ZoneMinder
[131052520020] |Motion
[131052520030] |ekiga
[131052520040] |peercast + geekcast
[131052520050] |or even hascii
[131052530010] |Have a look here : http://guvcview.berlios.de/
[131052540010] |How to de-unzip, de-tar -xvf -- de-unarchive in a messy folder?
[131052540020] |Usually, I unarchive things with $ mkdir newFolder; $ mv *.zip newFolder; $ cd newFolder; $ unzip *.zip, but sometimes I get lazy and just run $ unzip *.zip in an arbitrary folder, from time to time messing up the other content.
[131052540030] |Before you say that my brains are missing, I will list some methods here -- some archiver versions surely have crappy flags while others are more spartan; I am more interested in the latter, but feel free to share any approach.
[131052540040] |Some ways to de-unarchive, are there others?
[131052540050] |$ find . -anewer fileThatExistedBeforeUnarchiving -ok rm '{}' \; Weaknesses are that it lists the *.zip dirs, so you need to use the slow -ok, it is slow with many *.zip matches, and, for some reason, it does not seem to match everything extracted.
[131052540060] |If there is a small amount of extracted files, remove them one by one -- slow, cumbersome and error-prone.
[131052540070] |When I want to make sure whether the content of the archive is actually a folder, I sometimes check it with $ unzip -l *.bsd; this works at least in OpenBSD's unzip version.
[131052540080] |If you are referring to certain archiving tools, please, state them when appropriate.
[131052540090] |Keep it simple though -- I am more interested about the WAYS how you do it, rather than a single tool.
[131052550010] |How about feeding the list of files in the archive to xargs rm?
[131052550020] |That would be tar -tf tarbomb.tar | xargs rm or unzip --list zipbomb.zip | xargs rm.
[131052560010] |Not really what you asked for, but how about using an "unzip all" script instead?
[131052560020] |That way the output from each file goes into its own directory.
[131052570010] |By name
[131052570020] |You can generate the list of files in the archive and delete them, though this is annoyingly fiddly with archivers such as unzip or 7z that don't have an option to generate a plain list of file names.
[131052570030] |Even with tar, this assumes there are no newlines in file names.
[131052570040] |Instead of removing the files, you could move them to their intended destination.
[131052570050] |Using FUSE
[131052570060] |Instead of depending on external tools, you can (on most unices) use FUSE to manipulate archives using ordinary filesystem commands.
[131052570070] |You can use Fuse-zip to peek into a zip, extract it with cp, list its contents with find, etc.
[131052570080] |AVFS creates a view of your entire directory hierarchy where all archives have an associated directory (same name with # tacked on at the end) that appears to hold the archive content.
[131052570090] |By date
[131052570100] |Assuming there hasn't been any other activity in the same hierarchy besides your extraction, you can tell the extracted files by their recent ctime.
[131052570110] |If you just created or moved the zip file, you can use it as a cutoff; otherwise use ls -lctr to determine a suitable cutoff time.
[131052570120] |If you want to make sure not to remove the zips, there's no reason to do any manual approval: find is perfectly capable of excluding them.
[131052570130] |Here are example commands using zsh or find; note that the -cmin and -cnewer primaries are not in POSIX but exist on Linux (and other systems with GNU find), *BSD and OSX.
[131052570140] |With GNU find, FreeBSD and OSX, another way to specify the cutoff time is to create a file and use touch to set its mtime to the cutoff time.
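The example commands did not survive in this answer; one possible shape, using a timestamp file as the cutoff and excluding the zips (the date is a placeholder; review the output with echo before switching to rm, as advised below):

  touch -t 201101011200 /tmp/cutoff     # set the cutoff time on a reference file
  find . -type f -cnewer /tmp/cutoff ! -name '*.zip' -exec echo rm {} +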
[131052570150] |Instead of removing the files, you could move them to their intended destination.
[131052570160] |Here's a way with GNU/*BSD/OSX find, creating directories in the destination as needed.
[131052570170] |Zsh equivalent (almost: this one reproduces the entire directory hierarchy, not just the directories that will contain files):
[131052570180] |Warning, I haven't tested most of the commands in this answer.
[131052570190] |Always review the list of files before removing (run echo first, then rm if it's ok).
[131052580010] |I use the following function in zsh:
[131052580020] |I.e. command substitution to remove all files in the cleaned up output of unzip -l.
[131052580030] |tar tvf could be used in a similar way.
[131052590010] |I'm feeling stupid, anyway I scratched my head to write up this script when I had a similar problem.
[131052590020] |I used cpio with the -it flag to get a list of files; you can use equivalent commands for other archivers.
[131052590030] |The tricky part is, the cpio archive is from an initrd and I extracted into /, so many folders and files have the same name as in a working system.
[131052590040] |Luckily cpio didn't overwrite any of my existing files.
[131052590050] |I use a time check to ensure not to delete anything that existed before the wrong command.
[131052600010] |Would anyone like to help me start a Wikipedia page "List of rolling release Linux Distribustions"?
[131052600020] |I can't find a definitive list of rolling distros any where on the web.
[131052600030] |DistroWatch doesn't have one, though I posted a comment suggesting they add one, or at least label rolling releases and the type of release/development cycle generally.
[131052600040] |Wikipedia has only a very small page for "Rolling Release", which lists about five or so distributions merely as examples of rolling releases.
[131052600050] |I would like to start a Wikipedia page "List of Rolling Release Linux Distributions".
[131052600060] |I would be happy to help with such as I have been trying to find a good guide to Linux rolling release distros.
[131052600070] |If you would be willing to help me with the page please post below or email my screen name at gmail if you like the idea or have a better one.
[131052600080] |I still need to set up a Wikipedia account but thought I'd see if any one was interested in the project first.
[131052600090] |I would like to collaborate with others in creating the page, as I've only used Linux for a year or so & I'm still something of a newbie.
[131052600100] |I look forward to hearing your thoughts on the proposal.
[131052600110] |** Edit in reply to the comments below **
[131052600120] |First of all I'd like to apologize for posting the question.
[131052600130] |Sorry: I should have read the guidelines first on what types of question are allowed.
[131052600140] |Feel free to erase this question entirely as I'm not sure how this is done.
[131052600150] |My question really should have been: "Does anyone know whether a comprehensive list of rolling-release Linux distros exists anywhere on the Web [before I bother to create a Wikipedia article for such]?". -- Would this be a suitable question to post on this site?
[131052600160] |If not I'll just leave things as they stand.
[131052600170] |BTW, many thanks to those who commented below (and Stefan Lasiewski &Maciej Piechotka for closing the question).
[131052600180] |In future, I'll endeavour to make sure I don't make the same mistake of asking an ambiguous question (or one that's inappropriate given the site's purpose).
[131052600190] |Many thanks for the clarification.
[131052600200] |Tuxalot.
[131052610010] |Setting processor fan to 100%
[131052610020] |How can I set the fan speed to 100% or more in Linux?
[131052620010] |Be aware that fiddling around with the fan speed can overheat your machine and kill components!
[131052620020] |Anyway, the ArchLinux wiki has a page describing how to set up lm-sensors and fancontrol to achieve speed control.
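On most distributions the sequence is roughly the following (package, service names and paths may differ; treat this as a sketch rather than the wiki's exact steps):

  sensors-detect                   # probe for sensor chips and load the right modules
  pwmconfig                        # interactively test which PWM outputs drive which fans
  /etc/init.d/fancontrol start     # start the fan control daemon with the generated config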
[131052630010] |Grub 2 installed on-partition - how to not embed it
[131052630020] |It's partially not my question but well - that's a live of free technical supports for friends and family.
[131052630030] |My friend installed ArchLinux on Mac Book Pro alongside Mac OS X and Windows to try it.
[131052630040] |He's poweruser so I needed only to help with bootloader - described installation of GRUB did not worked (grub did not detect 4th GUID partion it was installed on).
[131052630050] |I heard that with EFI works GRUB 2 - and it did.
[131052630060] |The problem is that it required embedding as ext4 partition had too little space for it.
[131052630070] |Is it possible to install GRUB 2 on ext4 partition without embedding?
[131052640010] |Your best bet on Mac hardware is to use rEFIt.
[131052640020] |I've used that and it works well.
[131052650010] |Can I install Linux on Ankya 7802L 266MHz 128MB 2GB laptop?
[131052650020] |I recently bought this cute little laptop computer (not much memory/HD/CPU):
[131052650030] |http://www.dinodirect.com/netbook-anyka-7802l-266mhz-128mb-2gb-nand-flash.html
[131052650040] |Has anyone successfully installed any Linux distro on it?
[131052650050] |I know Linux works well w/ low-end specs, so my main worry is drivers/etc.
[131052650060] |The laptop doesn't come w/ a recovery disk: how can I backup the OS it comes with (modified version of Windows?), just in case my Linux install fails.
[131052650070] |[I am not affiliated w/ DinoDirect, this is just a cool toy I bought myself]
[131052660010] |On a different computer with a high-speed connection, download and use unetbootin.
[131052660020] |Unetbootin will help you to download Salix, Puppy or Zenwalk Linux.
[131052660030] |Use unetbootin to move it to a USB drive. If you can get the laptop to boot from the USB drive then you can install any one of these.
[131052660040] |All three will work on the specs you have mentioned.
[131052670010] |For this machine I would use the Lupq511 Windows installer; it will make a "frugal install" using Grub4dos on your Windows partition and chain it into your Windows bootloader.
[131052670020] |(OS is <120 MB)
[131052670030] |So basically you can install it with a few clicks from inside Windows.
[131052670040] |You can download an exe file from here.
[131052670050] |I used this several times to quickly rescue some old windows computers.
[131052670060] |Maybe before you start, use chkdsk /f on your Windows drive.
[131052670070] |There is a boot option (pfix=ram) which will make it run totally in RAM, so eventually you can also repartition the disk to give it its own Linux filesystem or install another distro.
[131052670080] |(Recommend: Fluppy, small distro specialised in Netbooks/Laptops)
[131052670090] |Original thread here.
[131052670100] |Actually any Puppy Linux can be turned into a Windows exe installer file with this technology.
[131052670110] |It should work on Win 9x, WinXP, Vista, Win7 (32 and 64 bit). It is still beta, but nonetheless I find this pretty cool!
[131052680010] |I've had good luck with Slackware on really old systems.
[131052690010] |Gentoo supports ARMv4 or later with at least 32 MB.
[131052690020] |Open your netbook and make sure that your 2Gb NAND SSD is not just a chip but something with IDE or SATA.
[131052690030] |Connect ssd to computer with normal OS and make image of your Windows CE (using dd ;) so you can play with it later.
[131052690040] |Install gentoo first inside Qemu (qemu-system-arm).
[131052690050] |Make image of installed gentoo.
[131052690060] |Expand gentoo image on ssd.
[131052690070] |...
[131052690080] |Profit!
[131052690090] |I wish you good luck.
[131052700010] |You could try installing MINIX v3 on it.
[131052700020] |It is far less demanding than Linux and has some good developers working on it.
[131052710010] |Light up a LED through USB
[131052710020] |Hey, so I'm just playing around with a usb cable and an LED.
[131052710030] |I plugged in the usb to my computer and connected ground with the LED ground and the last usb pin (+) to the LED.
[131052710040] |It stays lit bright.
[131052710050] |I moved the wire from the usb power pin to the D+ pin.
[131052710060] |Is it possible that I could send a bit stream through usb that would in turn light up this LED?
[131052710070] |I'm not even a beginner with usb, drivers, etc.
[131052710080] |I just had the idea that hit me and wanted to see if it was possible as a sort of show off to friends.
[131052720010] |Not directly, and even if you could, it wouldn't be very useful since the usb protocol constantly sends pings over the wire; the led would probably appear continuously dimly lit.
[131052720020] |If you wanted, you could make a low-pass amplifier to get it done.
[131052720030] |If you go this route, check out USB In A Nutshell to learn more about the USB protocol.
[131052730010] |If you have an old-style parallel or serial port, this is much easier.
[131052740010] |Kernel Panic because of RAM stick?
[131052740020] |Hello, one of my RAM sticks causes a kernel panic on my Ubuntu 10.10 (something like "not syncing" with a lot of memory addresses shown on screen).
[131052740030] |It's definitely this one RAM stick and not its socket because when I put one of the other sticks into the slot of the one RAM stick, everything is ok.
[131052740040] |How come memtest doesn't find any errors after several cycles, but Ubuntu is not able to boot while using this one particular RAM stick?
[131052740050] |Does anybody have an explanation for that?
[131052750010] |What is "several" passes?
[131052750020] |What memtest tests have you run?
[131052750030] |I know I have seen memtest86+ take up to 6 or 7 passes to find an error with RAM sticks.
[131052750040] |Also, make sure you run the full battery of tests.
[131052750050] |It certainly does sound like the RAM is bad.
[131052750060] |I too have had not syncing panics because of bad RAM.
[131052760010] |Are you running memtest with only the 1 (possible) faulty memory module (or a pair if they have to be paired)?
[131052760020] |You could probably get a copy of the error report by using the kexec/kdump service, particularly if you can get a copy of the crashdump kernel someplace where the memory error doesn't occur.
[131052760030] |You could also use the mem=128M kernel parameter to boot a system only using the first 128 megabytes of memory, to see if that gets you a working system.
[131052770010] |How to repair an ext3 partition after broken resize operation?
[131052770020] |I was using gparted to resize a near-terabyte ext3 partition to about 40 GB to the left.
[131052770030] |After nearly 12 hours of moving data (with an estimated 23 hours left), the system hung.
[131052770040] |Now fsck reports too many illegal data in every inode.
[131052770050] |How to fix the FS in this case?
[131052780010] |Unfortunately, I think you are quite screwed.
[131052780020] |If you had only messed with the partition table then TestDisk would be your best shot, but since you have been resizing (which actually means copying and maybe even deleting), your data is, more or less, corrupted.
[131052780030] |If you have a backup before performing the resize operation, this is a good time to use it.
[131052780040] |Otherwise, I don't know what you can do; I would ditch the partition and create a new one, saying goodbye to the data inside.
[131052780050] |A lesson that has to be learned is, always be careful with your data.
[131052780060] |(Of course you should try waiting to see if there is any super great answer that can do better, but don't hold too much hope.)
[131052780070] |Now if you really have a backup, an easier way to "resize" is to delete the old partition, create a new one, then restore the data there.
[131052790010] |Loading 3rd-Party Drivers before Fedora 14 Installation
[131052790020] |Hi all, my server is equipped with a MegaRAID controller that cannot be identified by most Linux installers.
[131052790030] |I have to load its driver before installation can proceed.
[131052790040] |I know how to do it with CentOS: simply type "linux dd" at the "boot:" prompt, and I will be able to load drivers from a USB flash disk.
[131052790050] |But when it comes to Fedora 14, it seems there is no way for me to load drivers before installation, so it can't find the hard disk :( Anyone here has some advice?
[131052800010] |dd stands for driverdisk; in fact, on Fedora the option is driverdisk instead.
[131052800020] |http://docs.fedoraproject.org/en-US/Fedora/14/html/Installation_Guide/s1-kickstart2-options.html
[131052800030] |You can provide even a network resource:
[131052810010] |If I remember correctly you get the boot: prompt only if you use the DVD install for Fedora.
[131052810020] |The LiveCD doesn't give you the option.
[131052820010] |Could not connect to host 127.0.0.1: Connection refused.
[131052820020] |Could not connect to host 127.0.0.1: Connection refused. - I am getting this error message on ktorrent on slackware 13.1 and the torrents cannot start.
[131052820030] |I tried to reconfigure the ktorrent preferences, I flushed the iptables, but still no success.
[131052820040] |The odd thing is that when I log in as root and start ktorrent, the torrents start and no error message is displayed, but when I am logged in as a regular user the torrents don't work.
[131052820050] |Can you help me and give me at least some basic ideas about what the problem might be ?
[131052820060] |Thank you in advance.
[131052830010] |After you started it with root, it changed permissions of some files and temporary folders.
[131052830020] |After that, it's not possible to start it with normal user anymore.
[131052830030] |You can try reinstalling it, or running for example
[131052830040] |to get error messages.
[131052830050] |Also, you should check ktorrent logs.
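The example command was lost from this answer; the general idea is to run ktorrent from a terminal as the regular user to see its error output, and to give that user back ownership of its configuration files (the paths and user name below are typical KDE locations and placeholders, and may differ on Slackware 13.1):

  ktorrent      # run from a terminal to see the errors it prints
  chown -R youruser: ~/.kde/share/apps/ktorrent ~/.kde/share/config/ktorrentrc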
[131052840010] |ZONE_NORMAL and its association with kernel/user pages?
[131052840020] |Above is presented a case where I have only 512 MB of physical memory.
[131052840030] |What I have read up so far, is that ZONE_NORMAL is mapped to the kernel virtual address space as shown.
[131052840040] |Essentially I have a 512 MB physical memory, out of which 496 MB worth of ZONE_NORMAL is mapped to the kernel virtual space.
[131052840050] |Based on this understanding, following are my question:
[131052840060] |Does ZONE_NORMAL consist of only kernel-space pages?
[131052840070] |If ZONE_NORMAL consists only of kernel pages and is mapped completely to the kernel-space virtual address range, where do the user-space pages get located?
[131052840080] |There does not seem to be any room for user space pages in physical memory.
[131052840090] |I am totally mixed up about the case where the physical memory is less than 4GB, as in the example I have put forth.
[131052840100] |Would really appreciate if someone can throw light on this.
[131052850010] |The same physical page can be mapped to more than one virtual address.
[131052850020] |ZONE_NORMAL consists of pages that can be mapped by the kernel.
[131052850030] |Most of that memory doesn't belong to the kernel, but the kernel needs to map all memory at some point (not necessarily all at the same time).
[131052850040] |For example, when the kernel is processing a write system call, it needs to copy data from the user-supplied buffer, which means the buffer must be mapped in the kernel's virtual address space.
[131052850050] |The diagram describes the (relatively) simple situation with no high memory.
[131052850060] |(If you work with high-end ARM devices, now is the time to start learning about high memory.)
[131052850070] |Then the kernel can map all process memory and all physical memory at the same time.
[131052850080] |Here's an example of virtual memory repartition as seen by kernel code (I'm not sure if the exact figures are possible, but the basic idea should be right).
[131052850090] |That is, I'm describing the meaning of a pointer used by kernel code.
[131052850100] |0x00000000..0x00000fff: unallocated.
[131052850110] |A pointer in this range is invalid.
[131052850120] |0x00001000..0xbfffffff: process memory.
[131052850130] |This is a pointer into the virtual address space of the process that the kernel code under consideration is processing a system call for.
[131052850140] |A page in that range could be unallocated, or it could be allocated and swapped in (in which case it also has a physical address), or it could be allocated and swapped out (in which case it doesn't have a physical address in RAM, but it has a location in swap).
[131052850150] |0xc0000000..0xdfffffff: physical memory.
[131052850160] |A pointer in this range represents the physical address p-0xc0000000.
[131052850170] |The interpretation of this pointer does not actually depend on the MMU.
[131052850180] |0xe0000000..0xffefffff: unallocated.
[131052850190] |A pointer in this range is invalid.
[131052850200] |0xff000000..0xffffffff: kernel memory.
[131052850210] |This is a pointer into kernel code or data.
[131052850220] |A page in this range has an associated physical address, found by the MMU.
[131052850230] |I've found Linux Device Drivers to be a good introduction to the innards of the Linux kernel.
[131052850240] |Ultimately, you may want to turn to the source.
[131052860010] |On a 32-bit architecture you have 0xffffffff (4'294'967'295, or 4 GB) linear addresses (not physical space) to refer to a physical address.
[131052860020] |Even with only 512 MB of physical storage (the real RAM stick connected to the bus), the kernel will still use 4'294'967'295 (4 GB) linear addresses to calculate the physical ones.
[131052860030] |The linux kernel divides these 4 GB (of addresses) into the user space (high memory) and the kernel space (low memory) by 3/1, so the kernel space has 1'073'741'823 (1 GB) of linear addresses to use.
[131052860040] |These 1 GB of linear addresses, are only accessible by the kernel and are getting divided up even further.
[131052860050] |ZONE_DMA: Contains page frames of memory below 16 MB.
[131052860060] |This is used for old ISA buses, they are able to address only the first 16 MB of RAM.
[131052860070] |ZONE_NORMAL: Contains page frames of memory at and above 16 MB and below 896 MB, these are the addresses, which the kernel can map/access directly.
[131052860080] |ZONE_HIGHMEM: Contains page frames of memory at and above 896 MB, page frames above this border are not generally mapped to the kernel space and therefore not directly accessible by the kernel.
[131052860090] |Page frames from the user space can be temporarily or permanently mapped here.
[131052860100] |How much real, physical RAM space is occupied by the different zones depends on the form and number of processes you run.
[131052860110] |If you enter free -ml in your console, you can see the usage including low and high memory:
[131052870010] |Syncing files comparing only file names and not extensions
[131052870020] |Hi guys, I'm trying to sync 2 folders containing audio files of multiple types (WMA, MP3, M4V,...).
[131052870030] |I want to sync these folders but the sync process should only take into account the file names, not the extensions.
[131052870040] |So, if folder A contains "the suburbs.mp3" and folder B contains "the suburbs.m4v", the sync program should consider these 2 files the same (and not sync them).
[131052870050] |I was looking into the documentation of rsync but I can't seem to find a way to do this.
[131052870060] |Does anyone have a suggestion, or maybe suggestions for other software that can do this?
[131052870070] |thanks, Thomas
[131052880010] |You could create empty files with all possible extensions and call rsync with --ignore-existing.
[131052880020] |You may be interested in mp3fs, a FUSE stackable filesystem that provides a view of a directory tree where all audio files appear as MP3. I don't think it would particularly help with your question, but it may be an alternate way to solve your problem or otherwise prove useful.
[131052890010] |X.org radeon driver brightness/contrast adjusting
[131052890020] |Is there a way I can adjust contrast/brightness of the card (VGA) output when using the open source xorg radeon driver?
[131052890030] |I have a monitor that has poor image quality when it's tweaked using its knobs, and the only way I can get a good image is to tweak it from the graphics card.
[131052890040] |I was using the proprietary fglrx until now, but I want to switch to radeon.
[131052900010] |Using xcalib:
[131052910010] |Bootable image with LAN drivers
[131052910020] |I'm looking for an image which will boot quickly (I assume it'll be linux) and have LAN drivers- for sending TCP / UDP packets to another system in the same network.
[131052910030] |Ideally, if possible, I would like an image that takes a parameter - the destination address for sending the packets - but otherwise I'll just broadcast. What tools/types/OS would suit this situation?
[131052920010] |A linux kernel image with a custom initramfs that suits your needs (i.e. includes said program to send said packets).
[131052930010] |You will have to download the needed linux source from the kernel.org. Install the development tools.
[131052930020] |For eg. in Fedora
[131052930030] |yum groupinstall "Development Tools"; yum install ncurses-devel; yum install qt-devel
[131052930040] |Then untar the source code and place it in /usr/src/kernels/ Then go inside the source and do a
[131052930050] |make menuconfig
[131052930060] |After that add the necessary modules needed for your kernel.
[131052930070] |If you are concentrating on Network, do the needful inside the Network options.
[131052930080] |After adding the necessary options save the profile and exit.
[131052930090] |Then do the following
[131052930100] |make && make modules && make modules_install && make install
[131052930110] |Now check your grub.conf under /boot/grub/grub.conf and make sure you have the configuration for your kernel in it.
[131052930120] |Now you can add the program which does the work to the initrd image.
[131052930130] |The initrd has an init script... modify the init script to include your custom program.
[131052940010] |Any live Linux distribution will work just fine for this.
[131052940020] |Ubuntu has a fancy GUI.
[131052940030] |SystemRescueCD comes with many system repair tools, including networking tools.
[131052940030] |BackTrack is targeted at penetration testing, so it comes with a lot of networking tools, especially for network inspection and packet injection.
[131052950010] |The best answer to this question will depend on (1) how fast is "fast", (2) how exotic the hardware you need to support is and (3) how robust of a system you want after boot.
[131052950020] |There are a good number of Linux distribution aimed at being small and a good number that provide live images.
[131052950030] |The intersection of these two sets is also fairly large.
[131052950040] |If the lan drivers you need are for a basic wired ethernet card, then many "generic" distributions will likely work for you.
[131052950050] |Beyond those that Gilles mentioned, here are a few options you may want to look into:
[131052950060] |grml: This is a Debian-based live-CD intended for system administrators.
[131052950070] |You might want to get the "small" image since you are worried about boot speed.
[131052950080] |The system you are left with is an incredibly functional system with a wide range of command-line tools.
[131052950090] |Debian Live: Debian Live provides a set of tools that allows you to customize your own live image of a Debian system.
[131052950100] |You can create a pretty lean system through customization.
[131052950110] |Linux distributions that focus on being small: Puppy Linux, DSL, Tiny Core Linux
[131052960010] |Disk space disappearing
[131052960020] |Possible Duplicate: How to understand what's taking up space?
[131052960030] |Hi, I am using Ubuntu 10.10.
[131052960040] |I have 40G of disk space in my partition /home and have used up 10 G of it, so I should have 30G left.
[131052960050] |But somehow disk space disappeared almost completely.
[131052960060] |I don't know what ate up the disk space.
[131052960070] |I checked with "ps ax" and then "du" and couldn't find strange activities.
[131052960080] |I would need to reboot the system to recover the free space!
[131052960090] |Any ideas?
[131052960100] |Thanks!
[131052970010] |What is the difference between KDE and GNOME , for embedded development host decision
[131052970020] |What are all the differences between the KDE and GNOME projects?
[131052970030] |This is because I want to select one (either KDE or GNOME) as my host OS for my embedded development project.
[131052970040] |I would like to get all the details (both code level and GUI level), like C (GTK for GNOME) and C++ (Qt for KDE).
[131052970050] |Also, why this kind of classification?
[131052970060] |Which one is more efficient or stable from a developer's perspective?
[131052970070] |__Kanu
[131052980010] |The main difference is that KDE is C++ with Qt, while Gnome is C based on top of GTK.
[131052980020] |Then you have the windowmanager that has a different philosophy on what to show the user.
[131052980030] |And then you have the applications, which are just different...
[131052980040] |Update: They are both nice and stable (if you select the right versions).
[131052980050] |They can do approximately the same but in different ways, so it is hard to compare.
[131052980060] |My personal view is that Qt is a nice framework to write software with, but I also know that there are a lot of people who would say exactly the same about GTK.
[131052980070] |Since this is the topic for a classic flamewar, there is a lot written all over the internet on this topic.
[131052980080] |But when it comes to embedded the answer is simpler, since Nokia (the phone maker) is now the owner of Trolltech (the maker of Qt), they have pushed Qt to become a valid choice for embedded.
[131052980090] |And there is now a version of Qt called "Qt for embedded linux".
[131052980100] |And they created a new distribution with Intel and the Linux Foundation called MeeGo.
[131052980110] |So Qt has a lot of support in the Embedded realm, and it has a quite good documentation so it is easy to get started.
[131052980120] |A example that you can look at/use is MeeGo on the BeagleBoard.
[131052990010] |How to install GRUB to a whole ext4 disk without partition table?
[131052990020] |Currently I have the entire disk /dev/sda formatted as ext4, and installed Gentoo.
[131052990030] |(There is no MBR, no partition at all. )
[131052990040] |But finally, I can't install GRUB on it, because it seems like GRUB needs to write to MBR.
[131052990050] |neither does grub work,
[131052990060] |Any way can I install GRUB into the /dev/sda without MBR?
[131052990070] |P.S.
[131052990080] |The /boot directory and grub.conf files:
[131053000010] |It's not mandatory that GRUB be written to the MBR.
[131053000020] |You can install it on a partition boot sector and let another boot loader in the MBR load it, such as the Windows 7 loader.
[131053000030] |http://www.linuxselfhelp.com/gnu/grub/html_chapter/grub_3.html
[131053010010] |Just about everything expects a partition table.
[131053010020] |I think you will have to re-install, and follow the suggested guidelines of having at least /boot, swap, and / (root) partitions.
[131053010030] |Where's your swap?
[131053020010] |The BIOS reads the first sector (512 bytes) of the disk and branches into it.
[131053020020] |If your disk contains PC-style partitions, the first sector also contains the partition table.
[131053020030] |If your disk contains a single filesystem, the first sector contains whatever the filesystem decides to put there.
[131053020040] |In the case of ext[234] (and many other filesystems), the first sector¹ is reserved for the bootloader (and is initially zeroed out).
[131053020050] |You can install Grub on /dev/sda.
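For example (assuming the BIOS cooperates and /boot lives on that same whole-disk filesystem; a sketch, not the exact command from the original answer):

  grub-install /dev/sda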
[131053020060] |That being said, there are occasional BIOSes that refuse to boot from a device that doesn't contain a partition table.
[131053020070] |(But there are also BIOSes that refuse to boot from some external devices if they do contain a partition table!)
[131053020080] |If you have one of these BIOSes, you'll have to create a partition table.
[131053020090] |Even if a partition table is not necessary, it's recommended.
[131053020100] |You only waste a few kilobytes, and gain readability under many non-Linux OSes and less surprise for any co-sysadmin.
[131053020110] |If you accidentally plug your disk into a machine running Windows, it might suggest you to reformat the disk if it doesn't see a partition table, whereas it'll just complain it can't read the data if it sees a partition table with a partition type it doesn't recognize.
[131053020120] |¹ In fact, the first block, I think, where a block is 1kB, 2kB or 4kB depending on the options passed to mkfs.
[131053030010] |What is the architecture of OSX's Windowing system?
[131053030020] |I am familiar with how the X11 system works where clients connect via a socket to the server process and send operations to the window server to perform certain operations on their behalf.
[131053030030] |But I do not understand (and I could not find good documents) describing how a GUI application interacts with the window system on OSX.
[131053030040] |Some of the questions that I have include:
[131053030050] |How does an app receive events from the windowing system?
[131053030060] |Does the app need to register with the kernel, or some windowing system server?
[131053030070] |How does the windowing system request that an app update its display?
[131053030080] |How does an app trigger a re-display operation?
[131053030090] |Is there a socket-based protocol, or some other RPC system to the windowing system?
[131053030100] |Does the windowing system have direct hardware access?
[131053030110] |Does the application?
[131053030120] |What are the operations available between client apps and the windowing system?
[131053040010] |XQuartz is the open-source project of the X.org X Window System that runs on Mac OS X. Their developer wiki and mailing list should help you with your questions, seems like you need someone deeply involved in the project to answer those.
[131053050010] |In prior MacOS versions, the lifting was done by QuickDraw; in OS X, that has been superseded by Cocoa...
[131053050020] |It doesn't simply parallel X11 though.
[131053050030] |For example, X11 doesn't include audio, but Cocoa does.
[131053060010] |The "What is Cocoa?" section of the Cocoa Fundamentals Guide has a bunch of great illustrations of the architecture from the bottom up.
[131053070010] |The best resource for OS X internals is Amit Singh's Mac OS X Internals.
[131053070020] |It is astonishingly detailed, but unfortunately only covers OS X up to 10.4.
[131053070030] |Google books has a preview.
[131053070040] |Apple's documentation for OS X is also a nice resource, and is obviously more up-to-date.
[131053080010] |@Kevin (sorry, can't post notes yet): Quartz or Core Graphics is the drawing and windowing system in OS X (replacing QuickDraw).
[131053080020] |Core Graphics is one part of Cocoa (Cocoa as a whole being more comparable to the entire Win32 API, rather than GDI, Direct2D or X)
[131053090010] |This is what I have been able to gather so far:
[131053090020] |Applications communicate over some sort of private API with the WindowServer process; the WindowServer process is the one that actually gets hardware events (mouse, keyboard) and dispatches those to the client applications. (This is still an open question: what protocol do they use, if any - do they use Mach ports and MIG, or some socket-based API? Not sure.)
[131053090030] |Some information is here:
[131053090040] |https://developer.apple.com/mac/library/documentation/MacOSX/Conceptual/OSX_Technology_Overview/GraphicsTechnologies/GraphicsTechnologies.html#//apple_ref/doc/uid/TP40001067-CH273-SW1
[131053090050] |The WindowServer is the Quartz Compositor.
[131053090060] |Typically applications use the Quartz2D API, which is exposed in the CoreGraphics API (CGXXX functions).
[131053090070] |Applications create CoreGraphics "Contexts" (CGContext) and draw there.
[131053090080] |Whether the context is pushed as a big bitmap when it is done, or whether the operations are sent to the server like they are on X11, is still an open question.
[131053090090] |There is a limited API exposed to control certain aspects of the WindowServer process, the sort of configuration settings that are typically done from the Settings application, but there is no documentation on how apps actually communicate graphic requests or pump messages from the server, other than the Carbon/Cocoa APIs exposed.
[131053100010] |Linux Live CDs that are able to save configuration on the boot disk?
[131053100020] |Puppy Linux has a great feature:
[131053100030] |as is mentioned in Wikipedia:
[131053100040] |However, it is possible to save files upon shutdown.
[131053100050] |This feature allows the user to either save the file to disk (USB, HDD etc.) or even write the file system to the same CD puppy is booted from if "multisession" was used to create the booted CD (on CD-Rs as well as CD-RW) where a CD burner is present.
[131053100060] |I would be interested in getting to know if there are any other livecds that also offer this feature?
[131053110010] |Instead of LiveCD, you can create LiveUSB.
[131053110020] |It functions just like LiveCD but can store the information persistently in a file system called Casper-rw.
[131053110030] |This file can reside on hardrive or USB drive itself.
[131053110040] |https://wiki.ubuntu.com/LiveUsbPendrivePersistent
[131053110050] |http://en.wikipedia.org/wiki/Live_USB
[131053110060] |http://www.debuntu.org/how-to-install-ubuntu-linux-on-usb-bar
[131053120010] |How to install Mono in AIX?
[131053120020] |I don't have root access to an AIX 5.2 machine and want to run Mono programs in it.
[131053130010] |Just compile from sources and install it into your home directory with ./configure --prefix=$HOME; make; make install.
[131053130020] |This way you don't need root access at any step.
[131053130030] |To run .net assemblies with your compiled version of mono run ~/bin/mono program.exe, or add ~/bin to your PATH and just use mono program.exe.
[131053140010] |Edit: my answer is about "how to install Mono without root access".
[131053140020] |Clearly Miguel's answer about Mono not working on AIX makes the rest moot.
[131053140030] |Alex is right, you can install in your home directory.
[131053140040] |Full instructions for installing Mono outside of /usr are available here:
[131053140050] |http://www.mono-project.com/Parallel_Mono_Environments
[131053140060] |Following these instructions is helpful if, for example, somebody installed Mono in /usr later on but you wanted to keep using your version.
[131053150010] |Mono does not support AIX.
[131053150020] |If you want to try to port Mono to AIX, you would probably want to:
[131053150030] |Turn on the manual checking of dereferences in Mono, as AIX keeps the page at address zero mapped, preventing a whole class of errors from being caught.
[131053150040] |I forget the name of the define, but it was introduced some six months ago.
[131053150050] |You would have to make sure that your signal handlers work, and that exception unwinding works on your platform.
[131053150060] |The rest is probably replacing a few Posix functions with some AIX equivalents, but if you get the two above working, you would likely have a working Mono installation.
[131053150070] |But neither one of those tasks is easy.
[131053160010] |Loading different Linux Distribution each time computer starts automatically?
[131053160020] |I have two Linux distributions (OpenSuSE, Ubuntu) installed on two different partitions.
[131053160030] |Each time I start my machine, GRUB loads up allowing me to select one of the two distributions.
[131053160040] |I don't want GRUB to show up, so I set the timeout to zero in /boot/grub/menu.lst, which will most probably make my machine load OpenSuSE each time I start my PC because it's the first option in menu.lst.
[131053160050] |Is it possible that the second time I restart my PC, Ubuntu gets loaded automatically?
[131053160060] |The third time I restart, OpenSuSE may get booted, while the fourth time I restart, Ubuntu may load up, and so on.
[131053160070] |In other words, how can I make my machine boot the next OS in menu.lst the next time it is restarted?
[131053160080] |It's a weird problem :) but I need to test something, actually.
[131053160090] |Suggestions needed from you guys.
[131053160100] |Thanks a lot.
[131053170010] |Put something in the startup scripts to rewrite menu.lst.
[131053170020] |So have Ubuntu write a version of menu.lst that loads OpenSuSE, and have OpenSuSE write a version that loads Ubuntu.
[131053170030] |A relatively safe way to do this would be to have 3 files, menu.lst
, menu.lst.ubuntu
and menu.lst.SuSE
and have the scripts do:
[131053170040] |on SuSE and:
[131053170050] |on Ubuntu.
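The original snippets were not preserved here, but they would presumably be little more than a cp in each distribution's local startup script, along these lines (paths assumed):

    # openSUSE, e.g. at the end of /etc/init.d/boot.local:
    cp /boot/grub/menu.lst.ubuntu /boot/grub/menu.lst

    # Ubuntu, e.g. at the end of /etc/rc.local:
    cp /boot/grub/menu.lst.SuSE /boot/grub/menu.lst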
[131053180010] |Lilo can do this.
[131053180020] |But you might consider a simple script in each OS that sets the other OS as the grub default.
[131053180030] |For example, a script along the lines sketched below would modify a default 1 setting to default 0.
[131053180040] |(ed is much like Vi.
[131053180050] |Run just the first command to see what it's doing.)
[131053180060] |On the other OS, you could run the counterpart that flips it back.
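The exact scripts weren't preserved here; a minimal reconstruction with ed might be (assuming the menu.lst line reads exactly "default 1" or "default 0"):

    # on one OS: make entry 0 the default for the next boot
    printf '%s\n' ',s/^default 1$/default 0/' w q | ed -s /boot/grub/menu.lst

    # on the other OS: switch it back to entry 1
    printf '%s\n' ',s/^default 0$/default 1/' w q | ed -s /boot/grub/menu.lst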
[131053190010] |What is your reason/objective to do this?
[131053190020] |Have you considered just running two different virtual machines?
[131053190030] |If VMs can be considered, there are a number of different ways to accomplish this from within the host machine itself, without tampering with the guests.
[131053200010] |I attained the functionality I was looking for by using the 'savedefault' option of GRUB.
[131053200020] |I set its value to the other operating system at the end of each OS's entry in menu.lst.
[131053200030] |Thanks a lot to everyone who tried to help. :)
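For reference, the relevant pieces of a GRUB legacy menu.lst using savedefault this way might look like the following (the device and kernel lines are only placeholders):

    default saved
    timeout 0

    title openSUSE
        root (hd0,0)
        kernel /boot/vmlinuz root=/dev/sda1
        # boot entry 1 (Ubuntu) on the next restart
        savedefault 1

    title Ubuntu
        root (hd0,1)
        kernel /boot/vmlinuz root=/dev/sda2
        # boot entry 0 (openSUSE) on the next restart
        savedefault 0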
[131053210010] |The question seems really unique. baltusaj, it would be great if you could share your purpose for doing this.
[131053220010] |Accessing a remote OSX system from OSX, Linux, Windows.
[131053220020] |Is it possible to connect to a remote OSX machine using OSX, Linux or Windows in a way similar to Windows' remote desktop?
[131053230010] |One simple way is to turn on vnc screen sharing by going to System Preferences -> Sharing -> Screen Sharing on the machine you want to share.
[131053230020] |For client compatibility reasons you may need to select both "Anyone may request permission to control this screen" and the "VNC viewers may control this screen with a password" checkboxes.
[131053230030] |Once you've set up the machine for sharing, you can connect to the screen from OS X using the Finder sidebar's SHARED section, or from Linux using one of the many VNC clients (vinagre, vncviewer, etc.).
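For example, to connect from the command line (the hostname is a placeholder):

    open vnc://remote-mac.local         # from another Mac: opens the built-in Screen Sharing client
    vncviewer remote-mac.local:5900     # from Linux, with a VNC viewer such as TightVNC installed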
[131053240010] |What is the difference between Non-preemptive, Preemptive and Selective Preemptive Kernel?
[131053240020] |What is the difference between a "Non-preemptive", "Preemptive" and "Selective Preemptive" Kernel?
[131053240030] |Hope someone can shed some light into this.
[131053250010] |Preemption is the ability of the operating system to preempt or stop a currently scheduled task in favour of a higher-priority task.
[131053250020] |The scheduling may be one of, but not limited to, process or I/O scheduling etc.
[131053250030] |Under Linux, user-space programs have always been preemptible: the kernel interrupts user-space programs to switch to other threads, using the regular clock tick.
[131053250040] |So, the kernel doesn't wait for user-space programs to explicitly release the processor (which is the case in cooperative multitasking).
[131053250050] |This means that an infinite loop in a user-space program cannot block the system.
[131053250060] |However, until the 2.6 kernels, the kernel itself was not preemptible: as soon as a thread had entered the kernel, it could not be preempted to execute another thread.
[131053250070] |However, this absence of preemption in the kernel caused several problems with regard to latency and scalability.
[131053250080] |So, kernel preemption has been introduced in 2.6 kernels, and one can enable or disable it using the CONFIG_PREEMPT option.
[131053250090] |If CONFIG_PREEMPT is enabled, then kernel code can be preempted everywhere, except when the code has disabled local interrupts.
[131053250100] |An infinite loop in the code can no longer block the entire system.
[131053250110] |If CONFIG_PREEMPT is disabled, then the 2.4 behaviour is restored.
[131053250120] |ReQuoted and formatted from: http://www.linuxquestions.org/questions/linux-general-1/pre-emptive-vs-non-pre-emptive-kernel-582437/
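If your distribution installs the kernel configuration, you can check how your running kernel was built, e.g.:

    grep PREEMPT /boot/config-$(uname -r)
    # CONFIG_PREEMPT=y            - fully preemptible kernel
    # CONFIG_PREEMPT_VOLUNTARY=y  - only voluntary preemption points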
[131053260010] |On a preemptive kernel, a process running in kernel mode can be replaced by another process while in the middle of a kernel function.
[131053260020] |This only applies to processes running in kernel mode; a CPU executing processes in user mode is considered "idle".
[131053260030] |If a user mode process wants to request a service from the kernel, it has to issue an exception which the kernel can handle.
[131053260040] |As an example:
[131053260050] |Process A executes an exception handler, Process B gets awakened by an IRQ request, and the kernel replaces process A with B (a forced process switch).
[131053260060] |Process A is left unfinished.
[131053260070] |The scheduler decides afterwards whether process A gets CPU time or not.
[131053260080] |On a nonpreemptive kernel, process A would just have used all the processor time until it finished or voluntarily decided to allow other processes to interrupt it (a planned process switch).
[131053260090] |Today's Linux-based operating systems generally do not include a fully preemptive kernel; there are still critical functions which have to run without interruption.
[131053260100] |So I think you could call this a "selective preemptive kernel".
[131053260110] |Apart from that, there are approaches to make the Linux kernel (nearly) fully preemptive.
[131053260120] |Real Time Linux Wiki
[131053260130] |LWN article
[131053270010] |What happens after loading the linux kernel image into RAM
[131053270020] |I just want to know the flow of activities that happen after the Linux kernel image is loaded into RAM during the boot process.
[131053280010] |As of Linux 2.6:
[131053280020] |Kernel
[131053280030] |After loaded into RAM, the kernel executes the following functions.
[131053280040] |setup():
[131053280050] |Build a table in RAM describing the layout of the physical memory.
[131053280060] |Set keyboard repeat delay and rate.
[131053280070] |Initialize the video adapter card.
[131053280080] |Initialize the disk controller with hard disk parameters.
[131053280090] |Check for IBM Micro Channel bus.
[131053280100] |Check for PS/2 pointing devices (bus mouse).
[131053280110] |Check for Advanced Power Management (APM) support.
[131053280120] |If supported, build a table in RAM describing the hard disks available.
[131053280130] |If the kernel image was loaded low in RAM, move it to high.
[131053280140] |Set the A20 pin (a compatibility hack for ancient 8088 microprocessors).
[131053280150] |Setup a provisional Interrupt Descriptor Table (IDT) and a provisional Global Descriptor Table (GDT).
[131053280160] |Reset the floating-point unit (FPU).
[131053280170] |Reprogram the Programmable Interrupt Controllers (PIC).
[131053280180] |Switch from Real to Protected Mode.
[131053280190] |startup_32():
[131053280200] |Initialize segmentation registers and a provisional stack.
[131053280210] |Clear all bits in the eflags register.
[131053280220] |Fill the area of uninitialized data with zeros.
[131053280230] |Invokes decompress_kernel() to decompress the kernel image.
[131053280240] |startup_32() (a different function with the same name):
[131053280250] |Initialize final segmentation registers.
[131053280260] |Fill the bss segment with zeros.
[131053280270] |Initialize provisional kernel Page Tables.
[131053280280] |Enable paging.
[131053280290] |Setup Kernel Mode stack for process 0.
[131053280300] |Again, clear all bits in the eflags register.
[131053280310] |Fill the IDT with null interrupt handlers.
[131053280320] |Initialize the first page frame with system parameters.
[131053280330] |Identify the model of the processor.
[131053280340] |Initialize registers with the addresses of the GDT and IDT.
[131053280350] |start_kernel(): Nearly every kernel component gets initialized by this function; these are only a few.
[131053280360] |Scheduler
[131053280370] |Memory zones
[131053280380] |Buddy system allocator
[131053280390] |IDT
[131053280400] |SoftIRQs
[131053280410] |Date and Time
[131053280420] |Slab allocator
[131053280430] |Create process 1 (/sbin/init)
[131053280440] |The complete "list" is available in the sources at linux/init/main.c
[131053280450] |Init
[131053280460] |Init starts all the necessary user processes to bring the system into the desired state; this routine depends highly on the distribution and the runlevel invoked.
[131053280470] |Type runlevel into the console; this gives you the current runlevel of your system.
[131053280480] |Take a look into /etc/rcX.d/ (or /etc/rc.d/rcX.d/), replacing the X with your runlevel.
[131053280490] |These are symlinks ordered by execution priority. S01.... means these scripts get started very early, while S99.... runs at the very end of the boot process.
[131053280500] |The KXX.... symlinks do the same but for the shutdown sequence.
[131053280510] |Generally, these scripts handle disks, networking, logging, device control, special drivers, environment and many other required sequences.
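For example, on a System-V-style init you can see this for yourself with a couple of commands (runlevel 2 is just an example):

    runlevel          # prints something like "N 2" (previous and current runlevel)
    ls /etc/rc2.d/    # S* links start services at boot, K* links stop them at shutdown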
[131053290010] |The boot loader jumps to the image entry point passing kernel command line (if any), and the kernel handles the rest.
[131053300010] |The kernel takes control of the system hardware as soon as you see "Uncompressing Linux...".
[131053300020] |The kernel checks and sets the BIOS registers of graphics cards and the screen output format.
[131053300030] |The kernel then reads BIOS settings and initializes basic hardware interfaces.
[131053300040] |Next, the drivers in the kernel initialize the hardware.
[131053300050] |Then the kernel checks for the partitions.
[131053300060] |Then it mounts the root file system.
[131053300070] |Then the kernel starts init, which boots the main system with all its programs and configurations.
[131053310010] |How to find the BSP sections in the Linux source code?
[131053310020] |I would like to know how I can search for the BSP (Board Support Package) sections in the Linux source code.
[131053310030] |All comments are welcomed.
[131053320010] |A board support package may have pieces spread out in the kernel, but the typical parts are in arch/, and if your board requires drivers that aren't already part of the kernel, there may be some pieces in drivers/.
[131053320020] |Each arch/ is set up a bit differently.
[131053320030] |ARM is an interesting one: look in arch/arm/ and you'll see several CPU types and platforms there.
[131053320040] |If you look inside a CPU type, like arch/arm/mach-at91/, you'll see lots of files for the various specific CPUs as well as board-*.c files, where board-specific peripherals are set up.
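For example, from the top of a kernel source tree you could locate those board files with something like:

    ls arch/arm/mach-at91/board-*.c      # board setup files for one ARM platform
    find arch/arm -name 'board-*.c'      # or search the whole ARM tree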
[131053330010] |Where to start to learn OpenGL
[131053330020] |As OpenGL evolves, it seems that there are three camps:
[131053330030] |OpenGL legacy, packed with "deprecated APIs"
[131053330040] |OpenGL ES, for embedded systems
[131053330050] |OpenGL "new stuff" which comes out every couple of months.
[131053330060] |If I wanted to learn OpenGL for modern systems, where should I start?
[131053330070] |And most importantly, is there a reason to go beyond OpenGL ES for someone that has never done OpenGL before?
[131053340010] |I would start with the NeHe opengl tutorials: http://nehe.gamedev.net/
[131053350010] |What's up with this 'gnome' package?
[131053350020] |I have recently grown tired of the way Rhythmbox starts up every time I plug in my MP3 player.
[131053350030] |I know I could simply disable this, but I've decided to uninstall Rhythmbox instead.
[131053350040] |It's such a memory-hungry application anyhow.
[131053350050] |However, there's a mysterious package called simply gnome that apt-get lists as being dependent on Rhythmbox.
[131053350060] |I guess it's the GNOME environment itself.
[131053350070] |But why does aptitude recommend uninstalling it?
[131053350080] |Wouldn't that break my system?
[131053350090] |Here is the output of dpkg -L gnome:
[131053350100] |Here is the output of aptitude remove rhythmbox:
[131053360010] |To reveal this mystery, just type apt-cache show gnome and behold:
[131053360020] |Description: The GNOME Desktop Environment, with extra components
This is the GNOME Desktop environment, an intuitive and attractive desktop, with extra components.
[131053360030] |This package depends on the standard distribution of the GNOME desktop environment, plus a complete range of plugins and other applications integrating with GNOME and Debian, providing the best possible environment to date.
[131053370010] |Debian (and derivatives) break up large pieces of software into many small packages.
[131053370020] |This way, if you only want, say, a specific Gnome application, you can just install its package and not waste download time, disk space or other resources installing the whole of Gnome.
[131053370030] |But for the people who do want the whole thing, there are a number of metapackages that exist solely for their dependencies.
[131053370040] |For example, if you want all of Gnome, you can install the gnome package, and through its dependencies it will pull in all the Gnome applications.
[131053370050] |The metapackage itself doesn't contain any files, so removing it won't have any effect outside the package manager.
[131053370060] |gnome depends on rhythmbox because Rhythmbox is part of Gnome.
[131053370070] |If you remove the gnome package, just make sure that apt doesn't also remove applications that were installed solely because they were dependencies of gnome and that you want to keep.
[131053370080] |In aptitude, press m to mark a package as manually installed, so it won't be removed if the packages that depend on it are removed.
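For example, using the aptitude already mentioned in the answer, you could keep Totem (just an example package) from being swept away:

    sudo aptitude unmarkauto totem   # clear the "automatically installed" flag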
[131053380010] |A small challenge to familiarize myself with Linux
[131053380020] |I would like to learn more about Linux.
[131053380030] |I briefly went through a few books and quite a few articles online, but the only way to learn something is to actually start using it.
[131053380040] |I would like to jump in the deep end and configure a Linux server.
[131053380050] |So far I have downloaded Ubuntu Server.
[131053380060] |I'm looking for a goal, or a challenge if you like, something that will familiarize me with Linux servers.
[131053380070] |Ideally, I would like to be able to configure secure mail, file and web servers.
[131053380080] |I have a strong programming background so I hope that it will help me out.
[131053380090] |I understand that this is not a specific question, I'm just looking for a milestone or a goal, otherwise I can spend weeks reading books and online articles.
[131053380100] |Edit 1: Thank you all for replies.
[131053380110] |Based on what you have said so far, I think that there are few different areas that I need to learn about:
[131053380120] |Kernels.
[131053380130] |Am I correct to say that this is the first thing I should concentrate on?
[131053380140] |Virtualisation.
[131053380150] |Once I'm happy with my knowledge about kernels I'd like to concentrate on KVM.
[131053380160] |I've read briefly about hypervisors and I believe that they also fall under virtualisation.
[131053380170] |Please correct me if I'm wrong.
[131053380180] |Security.
[131053380190] |Ideally I would like to leave this till last, but I guess that the majority of packages that I will require are online.
[131053380200] |So I'm not sure whether I should give this a higher priority.
[131053380210] |SSH, Linux as a firewall, and remote access through the shell fall under this category.
[131053380220] |Finally I will have a look at backup routines (using Linux as a file-server) and I'll configure web and mail servers.
[131053380230] |I guess that mail server might be a pain.
[131053380240] |I'm tempted to start a blog and see where it takes me after two weeks.
[131053380250] |In regard to distributions, I have seen that there are hundreds of different Linux distributions.
[131053380260] |To be perfectly honest I don't want anything simple, but, at the same time, I don't want to spend hours on a very basic operation to start with.
[131053380270] |Ideally I would like to work only from the command prompt; once I can do that I'll be able to work with most of the pretty GUIs (I hope so anyway).
[131053380280] |Once again, thank you for your help; I will really appreciate any further advice.
[131053380290] |Edit 2: This leaves me with a final question on what distribution of Linux I should be using?
[131053390010] |I challenge you to configure secure mail, file and web servers.
[131053390020] |Does that help?
[131053390030] |Sounds like you've done a good job of coming up with your own challenges.
[131053390040] |Do those first, then think of something new.
[131053390050] |Rinse, repeat.
[131053400010] |Here's a couple:
[131053400020] |run Linux as your primary operating system, on both your desktop and your laptop, if any
[131053400030] |install KVM and virt-manager and build a couple of virtual machines
[131053400040] |build a package for your distro of choice (a .deb or .rpm file); it helps in understanding a lot of things
[131053400050] |build your own kernel
[131053400060] |These might not seem directly related to your own personal goals of learning to build web servers, but I assure you, if you understand Linux, you will build all kinds of servers easily.
[131053410010] |I'm not sure how "on topic" this question is but I think that it is fun.
[131053410020] |The more of your computing that you move into Linux, the faster you will start to pick things up.
[131053410030] |Here is something I did shortly after moving to using Linux exclusively.
[131053410040] |It requires having a spare computer.
[131053410050] |Set up a server with Ubuntu Server.
[131053410060] |Set up SSH access to the server.
[131053410070] |Remove the Keyboard and Monitor and do all further configuration and administration remotely.
[131053410080] |For me, this was a serious learning experience since it forces you to (1) do everything via the shell and (2) be very careful about configuration changes.
[131053410090] |Get to work configuring the services you want.
[131053410100] |You might consider doing some of the following
[131053410110] |Focus on security from the start.
[131053410120] |Configure a firewall.
[131053410130] |Secure your ssh settings.
[131053410140] |Ensure you understand what services are running on the machine and why.
[131053410150] |Set up client machines to back up regularly to your server or to some external media mounted on the server or create some other backup solution that requires interaction between your clients and the server.
[131053410160] |On any Debian-based system, a file-server and web server will be relatively easy to set up and configure.
[131053410170] |An email server will be more difficult, at least if you attempt to get the type of reliable mail delivery that a service like gmail can provide.
[131053420010] |How do I install Chromium on Linux Mint using Software Manager?
[131053420020] |I installed Linux Mint 10 and would like to use it instead of Windows as a desktop OS.
[131053420030] |The first thing I would like to do is to install the Chromium web browser.
[131053420040] |So I start Software Manager, then go to Chromium, and there is an Install button that I click on.
[131053420050] |Then the progress bar at the bottom of the Software Manager goes up to 5% before it goes down to 0 again, and nothing happens.
[131053420060] |How do I install Chromium using Software Manager or any other graphical tool in Linux Mint 10?
[131053420070] |The reason to do it using only graphical tools was to compare the usability with Windows.
[131053420080] |Now I tested the command that NES suggested, sudo apt-get install chromium-browser, but it failed with these messages:
[131053430010] |Give the installation via the terminal a try:
[131053430020] |open terminal
[131053430030] |run the command: sudo apt-get install chromium-browser
[131053430040] |Alternatively, you can add the Chromium repository to your sources via the command line like this and install it from there:
[131053430050] |open terminal
[131053430060] |sudo add-apt-repository ppa:chromium-daily/stable
[131053430070] |sudo apt-get update
[131053430080] |sudo apt-get install chromium-browser
[131053440010] |Invoking pushd in a shell script
[131053440020] |I have a simple script in which I want to invoke 'pushd' followed by another command.
[131053440030] |But the 'pushd' command inside the script doesn't seem to stick past the script.
[131053440040] |What's a way to make this execute in the shell terminal?
[131053440050] |What I really want to accomplish is to invoke pushd followed by other-command with one command.
[131053450010] |A shell script normally executes in a separate instance of the shell program, /bin/sh in this case.
[131053450020] |Your pushd command affects that sub-shell's working directory only.
[131053450030] |If it were otherwise, any program you ran from the shell could mess with your shell's working directory.
[131053450040] |To execute that script within the current shell, "source" it with the . (dot) command instead, or, more verbosely, with the source builtin.
[131053450060] |To make it appear that your program works like any other, you can use an alias (all three are sketched below).
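The stripped commands would have been along these lines (the script path and alias name are placeholders):

    . /path/to/myscript.sh                     # POSIX: run the script in the current shell
    source /path/to/myscript.sh                # bash/zsh synonym for the dot command
    alias myscript='. /path/to/myscript.sh'    # make sourcing it look like a normal command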
[131053460010] |Scripts cannot alter their parent process's environment.
[131053460020] |Because of this, any environment changes made in the script are lost.
[131053460030] |To run the script in the same process, you can 'source' the script instead of executing it.
[131053470010] |Where on the Web can I find a comprehensive list of rolling-release Linux distros?
[131053470020] |I'm asking as I've looked for such a list & can't find a comprehensive one.
[131053470030] |Feel free to post any less-well-known rolling distros, but I'm really looking for a comprehensive list.
[131053470040] |I'm thinking of creating a Wikipedia page "List of rolling-release Linux distributions".
[131053470050] |If there's already a good list somewhere then would help produce a draft of the Wikipedia page.
[131053470060] |If there isn't such a list on the Web it highlights the need of gathering that information on a site where most people would look for it.
[131053470070] |Wikipedia seemed to me like the obvious choice but suggestions of a better site [preferably, but not necessarily, a wiki] are welcome.
[131053470080] |Please don't post Dev.-branches: eg Fedora-Rawhide, Mandriva-Cooker, OpenSuSE-Factory etc.
[131053470090] |Also, please don't post the following rolling-distros as I'm already aware of them: Aptosid, LMDE, antiX; OpenSuSE-Tumbleweed; Yoper; Foresight; PCLinuxOS; Unity; Arch, ArchBang, Chakra, Kahel; Gentoo & Sabayon; Lunar, Sorcerer, SourceMage.
[131053470100] |If any I've mentioned aren't rolling do correct me.
[131053470110] |I know some might not call antiX & LMDE rolling as Debian-Testing "cycles".
[131053470120] |I also know PCLOS & Sabayon can need to be reinstalled (e.g. when PCLOS re-forks the Mandriva base).
[131053470130] |BTW I know there's a similar question asking "What distributions have rolling releases?" that was asked in August -- however it seems "inactive" and I'm really after a comprehensive list not the usual suspects.
[131053470140] |My thanks to everyone who posts a comment or an answer.
[131053470150] |Also, thank you to anyone who up-votes (and even those who down-vote) this question.
[131053470160] |[Please leave a comment if you want to help create such a list or email my user-name at gmail.]
[131053470170] |PS If anyone objects to the question please post a comment so the moderator can choose whether to close it.
[131053470180] |I'm happy to re-edit the question if anyone wants so please let me know. :-)
[131053480010] |Here's what I use to get information about Linux and BSD distributions: http://distrowatch.com/
[131053480020] |Their search, while great, doesn't have a "rolling-release" option.
[131053480030] |I would suggest searching by the most recent Linux kernel version by selecting "linux" under their package search.
[131053480040] |The most recent version of any package is shown in parenthesis.
[131053480050] |Searching for linux 2.6.36.2 gave the following:
[131053480060] |Arch Linux: current
[131053480070] |Chakra GNU/Linux: 0.4-alpha2
[131053480080] |Gentoo Linux: unstable, stable
[131053480090] |Linux From Scratch: unstable
[131053480100] |Lunar Linux: moonbase
[131053480110] |Mandriva Linux: cooker
[131053480120] |Parted Magic: 5.8
[131053480130] |PLD Linux Distribution: 3.0
[131053480140] |Sorcerer: grimoire
[131053480150] |T2 SDE: snapshot
[131053480160] |Zenwalk Linux: 7.0-alpha
[131053490010] |kdialog --getsavefilename target/directory?
[131053490020] |When I use kdialog --getsavefilename /path/to/specific/folder/, it opens folder/'s parent directory, not folder/ itself.
[131053490030] |How do I get it to start where I want it to?
[131053490040] |Thanks!
[131053500010] |One way is to provide a generic filename that the user can then replace, such as "output" (see the example below).
[131053500020] |This will place the dialog in the correct folder with the "Name" field filled with "output". "output" will be selected, and thus the user can quickly change it.
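So the call would look something like this (the path and filename are placeholders):

    kdialog --getsavefilename /path/to/specific/folder/output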
[131053510010] |Prevent a USB external hard drive from sleeping
[131053510020] |Does anyone know if there is an elegant way to tell an external usb drive not to spin down after a period of inactivity?
[131053510030] |I've seen cron based solutions that write a file every minute, but nothing that smells of nice unixey elegance.
[131053510040] |There must be an hdparm or SCSI command that I can issue to the drive (USB drives are accessed via the sd driver in OpenBSD) to tell it not to sleep.
[131053510050] |I'm afraid that this is probably a feature built into the controller in the enclosure, and as such not much can change it aside from ripping the drive out of its enclosure and plopping it directly into the machine, but I figured I would ask, on the off chance.
[131053510060] |Ideally, I'm looking for an OpenBSD solution, but I know there are others out there w/the same problem so any solutions will be considered for the answer.
[131053520010] |Yeah, it's generally built into the firmware.
[131053520020] |Some drive manufacturers provide an MS Windows based management tool that will allow you to modify various parameters, including disabling the "sleep" or spin down timer.
[131053520030] |If you have access to a Windows box it might be worth it to pursue that angle.
[131053530010] |The manual for the Linux hdparm describes the relevant power-management options (sketched below).
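A rough sketch of those invocations (the device name is a placeholder; whether they reach the drive depends on the enclosure):

    hdparm -S 0 /dev/sdX     # disable the standby (spindown) timeout
    hdparm -B 255 /dev/sdX   # disable Advanced Power Management, where supported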
[131053530020] |That being said, it's still not guaranteed that your drive enclosure will support relaying these instructions to the drive.
[131053530030] |The same reference mentions that being able to use hdparm with an enclosure at all is only possible with certain "newer (2008 and later)" models that support the SCSI-ATA Command Translation system, aka "SAT".
[131053530040] |I've yet to try doing this on anything but a recent cheap backup drive, an HP SimpleSave model.
[131053530050] |It seems to provide some limited power management functionality.
[131053530060] |And of course this also assumes that OpenBSD's hdparm works in the same way.
[131053530070] |I am not at all knowledgeable in the ways of OpenBSD, so I can't help you there.
[131053540010] |what is the difference between spin locks and semaphores?
[131053540020] |What are the basic differences between spin locks and semaphores in action?
[131053550010] |Here's my quick shot at an answer: a spin lock and a binary semaphore (which manages a resource that can only be used by one thing) are almost identical.
[131053550020] |Their distinction is that spin locks manage code to be run, while binary semaphores manage some kind of singular resource (e.g. CPU time, display output).
[131053550030] |A regular semaphore, however, is able to manage several threads accessing a resource that can be split among several users but is limited (e.g. memory, network bandwidth).
[131053550040] |In short, a spin-lock is likely to keep asking a semaphore if it can use a resource.
[131053550050] |(Imagine a child having to use the bathroom and waiting for someone else to finish.)
[131053550060] |Sources: Introduction to Systems Programming, Operating Systems, and wikipedia
[131053560010] |Both manage a limited resource.
[131053560020] |I'll first describe the difference between a binary semaphore (mutex) and a spin lock.
[131053560030] |Spin locks perform a busy wait, i.e. the waiting thread keeps running a loop, repeatedly checking whether the lock has been released.
[131053560040] |Locking and unlocking are very lightweight, but if the thread holding the lock is preempted by another thread that tries to access the same resource, the second thread will simply keep trying to acquire the lock until it runs out of its CPU quantum.
[131053560050] |On the other hand, a mutex behaves more like "if the lock is taken, put me to sleep and wake me up when it is released".
[131053560060] |Hence, if a thread tries to acquire a locked resource, it will be suspended until the resource becomes available to it.
[131053560070] |Locking/unlocking is much heavier, but the waiting is 'free' and 'fair'.
[131053560080] |A semaphore is a lock that is allowed to be held a number of times (known from initialization); for example, 3 threads may simultaneously hold the resource, but no more.
[131053560090] |It is used, for example, in the producer/consumer problem or in general with queues.
[131053570010] |How to debug the input from an input-device (/dev/input/event*)
[131053570020] |I have an IR receiver that uses the imon driver and I would like to get it working with the kernel.
[131053570030] |Right now half of the keys on the remote (image) work, but all-important things like the numeric keys don't!
[131053570040] |The weird thing is that the kernel keymap module (rc-imon-pad) seems to be correct, but it seems that it is not really used, since exactly the same keys work without that module.
[131053570050] |EDIT: It seems that the rc-imon-pad module always gets loaded when I load imon, and then I suspect that the keycodes are cached, so it doesn't make a difference if I unload rc-imon-pad.
[131053570060] |Now I am lost: if I do cat /dev/input/event5 or ir-keytable -t there is data no matter what key I press, so the driver registers the buttons, but it seems that they are translated to the wrong keycodes.
[131053570070] |My kernel is an Ubuntu stock kernel from Natty (Linux xbmc 2.6.37-11-generic #25-Ubuntu SMP Tue Dec 21 23:42:56 UTC 2010 x86_64 GNU/Linux)
[131053580010] |You may find xinput list and xinput test useful.
[131053580020] |For example, xinput list shows the IDs of all input devices,
[131053580030] |and I can monitor my keyboard (xinput test 10) or touchpad (xinput test 11, or even xinput test "SynPS/2 Synaptics TouchPad") for all kinds of input events; they get pretty-printed to the console, and parameters get extracted and printed too.
[131053580040] |This won't solve your problem, but it will at least help a bit by deciphering the clutter which e.g. cat /dev/input/event1 produces.
[131053590010] |What are ConsoleKit and PolicyKit? How do they work?
[131053590020] |I've seen that recent GNU/Linux are using ConsoleKit and PolicyKit.
[131053590030] |What are they for?
[131053590040] |How do they work?
[131053590050] |The best answer should explain what kind of problem each one tries to solve, and how they manage to solve it.
[131053590060] |I'm a long-time GNU/Linux user, from a time when such things didn't exist.
[131053590070] |I've been using Slackware and recently Gentoo.
[131053590080] |I'm an advanced user/admin/developer, so the answer can (and should!) be as detailed and as accurate as possible.
[131053590090] |I want to understand how these things work, so I can use them (as an user or as a developer) the best possible way.
[131053600010] |In short, ConsoleKit is a service which tracks user sessions (i.e. where a user is logged in).
[131053600020] |It allows switching users without logging out [many users can be logged in on the same hardware at the same time, with one user active].
[131053600030] |It is also used to check whether a session is "local", i.e. whether the user has direct access to the hardware (which may be considered more secure than remote access).
[131053600040] |ConsoleKit documentation.
[131053600050] |PolicyKit allows fine-grained capabilities in a desktop environment.
[131053600060] |Traditionally, only the privileged user (root) was allowed to configure the network.
[131053600070] |However, while in a server environment that is a reasonable assumption, it would be too limiting on a laptop not to be allowed to connect to a hotspot.
[131053600080] |Still, you may not want to give full privileges to this person (like installing programs), or you may want to limit options for some people (for example, on your children's laptops only 'trusted' networks with parental filters can be used).
[131053600090] |As far as I remember, it works like this:
[131053600100] |The program sends a message about the action to a daemon via D-Bus.
[131053600110] |The daemon uses the PolicyKit libraries/configuration (in fact the PolicyKit daemon) to determine whether the user is allowed to perform the action.
[131053600120] |It may happen that a certain condition must be fulfilled (like entering a password or having hardware access).
[131053600130] |The daemon acts accordingly (returns an authorization error or performs the action).
[131053600140] |PolicyKit documentation.
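Two quick ways to poke at these from a shell, if the corresponding tools are installed:

    ck-list-sessions     # ConsoleKit: show current sessions and whether they are local and active
    pkaction             # PolicyKit: list the registered actions (add --verbose for details)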
[131053610010] |saving data from a failing drive
[131053610020] |An external 3½" HDD seems to be in danger of failing — it's making ticking sounds when idle.
[131053610030] |I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible.
[131053610040] |There are some directories that are more important than others.
[131053610050] |However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing.
[131053610060] |I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories.
[131053610070] |Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach.
[131053610080] |I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many that they endanger other parts of the drive from being saved.
[131053610090] |I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors.
[131053610100] |Is it possible that copying more slowly — e.g. pausing every x MB/GB — would be better than just running the operation full tilt, for example to avoid any overheating issues?
[131053610110] |For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that isn't a backup of anything, i.e. isn't backed up anywhere else.
[131053610120] |The drive has not exhibited any clear signs of failure other than this somewhat ominous sound.
[131053610130] |I did have to fsck a few errors recently — orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all.
[131053610140] |The filesystem of the partition is ext3.
[131053620010] |There's no way of knowing the best of your options without knowing exactly what is going wrong with the drive.
[131053620020] |If it's a mechanical failure, avoiding heating it up can help, but if it's due to errors in the servo data, heat isn't likely to matter.
[131053620030] |I would immediately start copying the unique data to the new drive with rsync. rsync will let you pause, resume, and restart as necessary until you get all the data off.
[131053620040] |Then I would run a data scrub on the drive.
[131053620050] |I assume from the ext3 filesystem that you're running Linux, so try this:
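Presumably the command meant here is badblocks in its non-destructive read-write mode, something like (device name assumed):

    badblocks -nsv /dev/sdX    # -n: non-destructive read-write test, -s: show progress, -v: verbose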
[131053620060] |(Unmounting the drive first is important.)
[131053620070] |This will read every sector from the disk and write it back without change.
[131053620080] |That will force the drive firmware to check every sector for errors and to remap any bad sectors it finds.
[131053620090] |This is the most important part of what the expensive SpinRite program does.
[131053620100] |Step up to that only if badblocks fails and you still haven't gotten all the unique data off the drive: SpinRite tries harder than badblocks does.
[131053630010] |You can use ddrescue, dd_rescue or myrescue to clone the failing disk without aborting on any unreadable sector.
[131053630020] |(Myrescue is less configurable but has a better default strategy as it tries to skip over unreadable regions.)
[131053630030] |This will copy everything including blank space and won't let you set priorities.
[131053630040] |However, such a low-level approach has an advantage over filesystem-level tools: if a directory is unreadable, you might still recover the files it contains by searching the raw image with tools such as foremost, magicrescue, testdisk, …
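With GNU ddrescue, a typical two-pass rescue onto the new drive might look like this (device names are placeholders; the map file is what lets you stop, resume and retry later):

    ddrescue -f -n /dev/sdX /dev/sdY rescue.map      # pass 1: copy what reads easily, skip problem areas
    ddrescue -f -d -r3 /dev/sdX /dev/sdY rescue.map   # pass 2: retry the bad areas a few times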
[131053640010] |Add arguments from previous command to zsh completion
[131053640020] |In zsh (as well as bash) you can use some history word expansions to denote arguments from previous commands.
[131053640030] |This example shows getting the 2nd parameter from the previous command in history with the !:# expansion, e.g. !:2 for the second argument.
[131053640040] |I often forget exactly what # parameter a particular argument is, and typing !:# isn't always that quick when I do remember which arg it is.
[131053640050] |I know about meta-. to replace the last argument, but sometimes it isn't the last arg that I want.
[131053640060] |I'd like to add the arguments from the previous command as suggestions to complete any command that I'm typing in zsh.
[131053640070] |I was able to figure out how to create a shell function that can create an array of arguments (0..N) from the last command and bind it to a particular command.
[131053640080] |This is what it looks like for completing just foo where I hit the tab key:
[131053640090] |This works great for completing the command "foo", but I'd like these to be options on any zsh expansion that I do.
[131053640100] |I think it's got something to do with the zstyle completer stuff, but after a few hours of hacking around I realized I'm out of my depth.
[131053640110] |How can I get the arguments from my previous command as suggested completions for any command in zsh?
[131053640120] |I've got my full zshrc compinstall file shared out on bitbucket if that helps.
[131053640130] |Lots of it is cribbed from a variety of sources and some of it I've hacked together myself.
[131053640140] |UPDATE:
[131053640150] |@Julien Nicoulaud's answer got me close; I'm marking it as accepted as it got me where I needed to go.
[131053640160] |With my particular config, the suggested completer setting didn't quite work for me, as it was causing tab completion to only display the list of arguments from the last command (though it'd actually complete with filenames as well, just not display them).
[131053640180] |Changing the order to _complete _last_command_args did the reverse: it'd display the normal filenames, but not last_command_args.
[131053640200] |I'm guessing this has something to do with the way completer works.
[131053640210] |I think it only displays the output from the first method that returns successfully, but I'm having trouble parsing the zsh source to understand fully what's going on.
[131053640220] |I was able to tweak my method to include a call to _complete so that it showed both the last command's arguments and the regular autocomplete stuff.
[131053640230] |Not quite as separated, but works well enough for me.
[131053640240] |Here's the full function I used along with the other zstyle stuff I have:
[131053640250] |Other zstyle lines I have, not necessary for this to work, but could affect why this works for me:
[131053640260] |Now, if I'm in a directory with file1.txt and file2.txt, and my last command was echo foo bar baz, I get both the filenames and the previous arguments offered for completion, which is just what I wanted.
[131053650010] |You can add your completer to the list of completers used by default:
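Assuming the _last_command_args function from the question, that would be something like:

    zstyle ':completion:*' completer _last_command_args _complete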
[131053660010] |How to have separate command history for different sessions for the same user?
[131053660020] |I face this issue on some of the Unix servers I use.
[131053660030] |When I open different sessions for the same user, the command history is shared by all the sessions.
[131053660040] |This creates issues if by mistake I press Ctrl-P or the up arrow and just press Enter.
[131053660050] |On one occasion I ended up running rm -rf *; thankfully I was in a directory where I didn't have permission to remove files.
[131053660060] |How to have separate command history for different sessions for the same user?
[131053660070] |Most of the time I am using ksh and tcsh shells.
[131053670010] |You could use screen.
[131053670020] |I also made a variable that was defined by my Konsole profile and gave each session a different history file; those are the only ways I can think of.
[131053680010] |I'm assuming you are talking about simultaneous sessions; separating sessions that come after one another isn't very useful: you would never be able to use their history anymore, because all sessions would be unique.
[131053680020] |If that were the case, you would probably be better off just disabling command history.
[131053680030] |If we are talking about separating simultaneous session, I'd recommend you use Bash.
[131053680040] |I'm pretty sure I don't get the command history from two simultaneous sessions mixed together in it.
[131053680050] |Bash only writes history to .bash_history at logout, so command history won't get mixed until after logout.
[131053680060] |Provided that using Bash is acceptable, would that solve your issue?
[131053690010] |From the ksh faq:
[131053690020] |Q1.
[131053690030] |How do I get separate history files for shell?
[131053690040] |A1. ksh uses a shared history file for all shells that use the same history file name.
[131053690050] |This means that commands entered in one window will be seen by shells in other windows.
[131053690060] |To get separate windows, the HISTFILE variable needs to be set to different name before the first history command is created.
[131053700010] |You can add HISTFILE=~/.hist$$ to your .profile.
[131053700020] |This should generate a unique file per session.
[131053700030] |You will end up with a large number of .hist* files, so I suggest you remove them occasionally.
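The question also mentions tcsh; the equivalent there would be something like (the file name is just an example):

    # in ~/.tcshrc
    set histfile = ~/.tcsh_history.$$
    set savehist = 1000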
[131053710010] |How to give HTTP traffic higher priority?
[131053710020] |When I surf the web, I find that I have to pause my BitTorrent client, to help improve the painfully slow speed (I'm sadly on a 384kbps line).
[131053710030] |It's not too nice to have to do this manually every time.
[131053710040] |Please show me the magic button, the one which I only need to press once in order to be blessed with speedier, higher-priority surfing, where the torrents' speed takes a backseat, only to resume full speed once my web surfing is over.
[131053710050] |[FYI] NetworkManager manages my network, and Transmission is my BitTorrent client.
[131053720010] |As already said, there is no button "Give me fast surfing" somewhere on your desktop.
[131053720020] |What you want is traffic shaping which is possible with Linux.
[131053720030] |For the complete introduction, you can read these tutorials:
[131053720040] |Linux Advanced Routing &Traffic Control
[131053720050] |Traffic Control HOWTO
[131053720060] |tc: Linux HTTP Outgoing Traffic Shaping (Port 80 Traffic Shaping)
[131053720070] |But I think you are searching for something more like these:
[131053720080] |The Wonder Shaper
[131053720090] |MasterShaper
[131053720100] |These are scripts which will do the work for you.
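For instance, the classic Wonder Shaper script is invoked with the interface and the downlink/uplink rates in kbit/s; for a 384 kbit/s line the numbers would be set a little below the real speeds, e.g.:

    wondershaper eth0 350 100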
[131053730010] |If you don't want to spend too much time configuring a traffic shaper, try Transmission's built-in temporary speed limit feature (which can also be scheduled).
[131053730020] |You can activate or deactivate it via the indicator applet.
[131053740010] |It's easy: don't saturate your upload; limit your torrent client's upload to about 50% of your total upload bandwidth (a command-line example follows below).
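If Transmission's RPC/web interface is enabled, this can even be scripted with transmission-remote (the limit value is just an example, in KB/s):

    transmission-remote -u 20    # cap upload at 20 KB/s while you browse
    transmission-remote -U       # remove the cap again afterwards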
[131053750010] |An alternative, simple solution could be to use your router's QoS (Quality of Service) features; this may allow you to give higher priority to certain protocols (e.g. HTTP/HTTPS).
[131053750020] |If you don't have QoS on your router, the only ways are the ones explained above.
[131053750030] |One more link:
[131053750040] |http://www.andybev.com/index.php/Fair_traffic_shaping_an_ADSL_line_for_a_local_network_using_Linux
[131053750050] |Be aware that if you give high priority to HTTP or HTTPS, then quite probably your torrents won't work well anymore; this is because many applications use the HTTP protocol to exchange data over the network, so there will always be something matching the iptables rule.
[131053750060] |I'd rather advise using a command-line version of BitTorrent, like rtorrent; this way you can write a simple shell script that changes the torrent download throttle and then executes Firefox (or whatever).
[131053750070] |You can also make rtorrent stop downloading when certain conditions are met.
[131053750080] |https://wiki.archlinux.org/index.php/RTorrent http://superuser.com/questions/180866/configure-rtorrent-to-stop-downloading-after-a-certain-file-size
[131053750090] |http://libtorrent.rakshasa.no/
[131053750100] |Hope it helps...