Posted: 2007-04-26 11:30pm
You should be fine with /tmp as a normal directory.
Get your fill of sci-fi, science, and mockery of stupid ideas
http://stardestroyer.dyndns-home.com/
http://stardestroyer.dyndns-home.com/viewtopic.php?f=24&t=107931
Destructionator XIII wrote:
I am poking at my system right now, and it seems my /tmp directory isn't anything special; it is just a regular directory on my mounted / partition. Does anyone know if that is how it is supposed to be, or should there be some special way I should be mounting it? Google was rather useless here...

There's no reason to do anything special with it. The /tmp directory is exactly what it seems: a directory for temporary files. The operating system will clean it out regularly (usually daily).
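A quick way to check what the original poster observed, i.e. whether /tmp is its own mount or just a directory on the root partition, is (a minimal sketch; output varies by system):

```shell
# Show which filesystem backs /tmp; if the "Mounted on" column says "/",
# /tmp is just a directory on the root partition.
df /tmp
# Grep the mount table; no match means /tmp is not a separate mount.
mount | grep ' /tmp ' || echo "/tmp is not a separate mount"
```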
Destructionator XIII wrote:
What I was actually thinking is whether it should be mounted as a RAM drive (or something similar: a flag to keep it in memory if possible, since it is temporary anyway). I have 2 gigs of RAM, and rarely use more than half that, so that would seem to make sense, but I imagined the OS would do that automatically.

Why would the OS make /tmp a RAM disk? There's no telling how many things will write to it. There's no reason for it to suck up memory. If you need persistent storage, just write out to a file inside /tmp.
Destructionator XIII wrote:
There are files in my /tmp that haven't been accessed, changed, or deleted since I first turned the computer on; it doesn't seem to be autodeleting at all. I am half considering rm -R /tmp/*'ing it next time I need to reboot to clean up.

Check your crontab. It should be automatic.
Durandal wrote:
Why would the OS make /tmp a RAM disk? There's no telling how many things will write to it. There's no reason for it to suck up memory. If you need persistent storage, just write out to a file inside /tmp.

Files in tmpfs will go to swap, and files in a regular /tmp can use up RAM (due to caching), so they both come out equal if you use the disk you would have used for /tmp as swap and set tmpfs's maximum size to that amount.
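For concreteness, a tmpfs /tmp with a size cap is a single fstab line; the 1 GB figure here is an illustrative assumption, not something from the thread:

```shell
# /etc/fstab -- mount /tmp as tmpfs, capped at 1 GB. Pages spill to swap
# under memory pressure instead of pinning RAM; mode=1777 keeps the usual
# sticky, world-writable /tmp permissions.
tmpfs  /tmp  tmpfs  size=1G,mode=1777  0  0
```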
Destructionator XIII wrote:
There are files in my /tmp that haven't been accessed, changed, or deleted since I first turned the computer on; it doesn't seem to be autodeleting at all. I am half considering rm -R /tmp/*'ing it next time I need to reboot to clean up.

Check your crontab. It should be automatic.

Debian's default (at least in Sarge) is to not install tmpreaper (formerly tmpwatch) because of some subtle security-related race conditions. Ubuntu may be similar. Installing it should start clearing /tmp, if you want to take the risk. Otherwise, boot-up cleaning is probably fine; edit the TMPTIME variable in /etc/default/rcS to adjust its cleaning age.
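The TMPTIME knob mentioned above is just a variable in /etc/default/rcS; a sketch (the 7-day value is an arbitrary example):

```shell
# /etc/default/rcS (Debian/Ubuntu of that era)
# TMPTIME is the age, in days, beyond which files in /tmp are removed at
# boot. 0 wipes /tmp on every boot; a negative value disables boot-time
# cleaning entirely.
TMPTIME=7
```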
Destructionator XIII wrote:
If it is eating memory and memory runs short, it can just be swapped out to the page file, by the same usage algorithms as any other chunk of memory, meaning if it is useful to remain, it will, and if not, it gets sent to disk and the memory is used for something else (thus, in both cases, freeing up the RAM that would otherwise be used caching the file from disk). Of course, this assumes the page file is large enough, but if it were allowed to grow (which it isn't by default on Linux for some reason), this wouldn't be a problem. Alternatively, have a regular /tmp directory instead of using the pagefile, but still keep it in RAM unless the algorithm says those pages are better used elsewhere, at which time it writes the temporary file to disk and frees the RAM for something else.

Sure, but suppose that a program has a nice, big, juicy file in /tmp. Why let the kernel decide when that file should be in memory? Your program knows when that file should be in memory, not the kernel. Also, it knows what parts of that file should be in memory and when.
Destructionator XIII wrote:
In this sense, the only operational difference to the programmer between malloc()'ing it and /tmp'ing it would be that the malloc is freed when the program terminates, whereas the /tmp is freed at some other time. And, of course, temp files can be used in scripts, or just randomly by the user, whereas malloc can't.

You're deferring a decision to the kernel when there's really no need to. /tmp just isn't generally used by programs that are I/O-bound. It's a simple directory for temporary storage. If it's not important enough for persistent storage, it's probably not so critical that it needs to be read ultra-fast.
Destructionator XIII wrote:
Is cron really the best way of going about it? Cron just runs at a specified time (as I understand it), regardless of whether the files are in use. A bootup or shutdown script would make sense to me, but cron seems dangerous.

I believe the algorithm takes into account the last-opened date of the file. And it shouldn't be able to remove the file if it's in use anyway.
Destructionator XIII wrote:
Anyway, I checked it, and it isn't mentioned, so if you are sure that is the best way of going about it, I'll go ahead and add it there.

As I said, take into account the last opened/modified date of the file.
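The kind of cron job being discussed can be sketched with find, keyed on access time as suggested; the 7-day cutoff and the DIR override are illustrative choices, and this lacks the race-condition safeguards something like tmpreaper adds:

```shell
#!/bin/sh
# Remove regular files not accessed in over 7 days. Keying on atime (not
# mtime) spares files a running program still reads regularly, though a
# race remains between the age check and the delete. DIR defaults to /tmp.
DIR="${DIR:-/tmp}"
find "$DIR" -xdev -type f -atime +7 -delete
```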
Durandal wrote:
Sure, but suppose that a program has a nice, big, juicy file in /tmp. Why let the kernel decide when that file should be in memory? Your program knows when that file should be in memory, not the kernel. Also, it knows what parts of that file should be in memory and when.

First, because the program has no choice in the matter. The notion that mallocing data instead of writing it to /tmp keeps the decision from the kernel is wrong. In high-memory situations, both stay in RAM, and in low-memory situations, both get written out and flushed from RAM.
Destructionator XIII wrote:
In this sense, the only operational difference to the programmer between malloc()'ing it and /tmp'ing it would be that the malloc is freed when the program terminates, whereas the /tmp is freed at some other time. And, of course, temp files can be used in scripts, or just randomly by the user, whereas malloc can't.

Durandal wrote:
You're deferring a decision to the kernel when there's really no need to. /tmp just isn't generally used by programs that are I/O-bound. It's a simple directory for temporary storage. If it's not important enough for persistent storage, it's probably not so critical that it needs to be read ultra-fast.

That doesn't follow at all. Needing to be kept around for long periods of time and needing to be read quickly have little to do with each other. In fact, the short-lived stuff is generally what gets processed the most frequently in shortest order, which makes it the most time-critical. This is fortunate, because the longer-lived the storage, the slower it tends to be, from registers to cache to RAM to disk.
Destructionator XIII wrote:
Is cron really the best way of going about it? Cron just runs at a specified time (as I understand it), regardless of whether the files are in use. A bootup or shutdown script would make sense to me, but cron seems dangerous.

I believe the algorithm takes into account the last-opened date of the file. And it shouldn't be able to remove the file if it's in use anyway.

Files that are in use can be deleted on UNIX-like systems. In fact, a file can continue to be used after it is deleted, through its file handle, so long as it remains open. Checking access time in the auto-delete algorithm normally avoids deleting a file that is in use, but there is a potential race condition.
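This delete-while-open behaviour is easy to demonstrate from a shell (a minimal sketch using file descriptor 3):

```shell
#!/bin/sh
# Unlink a file while a descriptor to it is still open: the name goes
# away immediately, but the data survives until the descriptor closes.
f=$(mktemp)
printf 'still here\n' > "$f"
exec 3< "$f"   # hold a read descriptor open
rm "$f"        # unlink succeeds without root; the name is gone
cat <&3        # prints "still here" -- the inode is still alive
exec 3<&-      # last reference closed; now the space is reclaimed
```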
Darth Holbytlan wrote:
First, because the program has no choice in the matter. The notion that mallocing data instead of writing it to /tmp keeps the decision from the kernel is wrong. In high-memory situations, both stay in RAM, and in low-memory situations, both get written out and flushed from RAM.

Of course the program has a choice. It has a choice just by running or exiting. When the program runs, the file will be in memory (whether it's paged out or not), and when the program exits, that memory will be put back on the heap. I guess if it's really performance-critical, sure, but in that case, why not just have the program maintain its own RAM disk instead of using /tmp?
Darth Holbytlan wrote:
Files that are in use can be deleted on UNIX-like systems. In fact, a file can continue to be used after it is deleted, through its file handle, so long as it remains open. Checking access time in the auto-delete algorithm normally avoids deleting a file that is in use, but there is a potential race condition.

Huh, you're right. I thought you needed root permissions to delete a file that was in use.
Darth Holbytlan wrote:
Also, there is no "last-opened date". There is a last-accessed date, but that isn't updated when the file is opened, only when it's read.

That, I believe, is file system-dependent.
Durandal wrote:
Of course the program has a choice. It has a choice just by running or exiting. When the program runs, the file will be in memory (whether it's paged out or not), and when the program exits, that memory will be put back on the heap.

The point is that the program doesn't have control over whether the data stays in RAM or has to be retrieved from disk. Measuring RAM and disk together, a program actually has more control over the space used by a file, since it can delete the file even without exiting; malloc'd memory usually isn't returned to the system until exit (although that varies with the malloc library).
Durandal wrote:
I guess if it's really performance-critical, sure, but in that case, why not just have the program maintain its own RAM disk instead of using /tmp?

Because that requires root privileges and administrator configuration. Also, this is more about general system performance improvements than about tuning particular programs. Mounting /tmp as tmpfs has the same effect for everyone that uses it, automatically. The only real disadvantage is that VM pigs can hog space that would otherwise have been exclusive to /tmp, but that's trouble in any case.
Darth Holbytlan wrote:
Also, there is no "last-opened date". There is a last-accessed date, but that isn't updated when the file is opened, only when it's read.

That, I believe, is file system-dependent.

Probably, but only in the sense that some file systems don't do it right. The UNIX standard is to set atime on a successful read of bytes, not on file open, and this is definitely the behaviour of Linux with ext2/3, which is probably what Destructionator is using.