
This is a project for network communications. It will include a multi-threaded server and API interfaces so clients can easily implement proprietary network communication protocols.

cplusplus network multi-threaded


Zipmt is a command-line utility that speeds up bzip2 compression by dividing the file into multiple parts, compressing them independently in separate threads, and then combining them back into a single .bz2 file. It depends on glib and libbz2 and is written in C.

Features:
- Compresses files much faster than bzip2 with similar compression rates.
- Uses multiple threads for multi-CPU efficiency gains.
- Handy -v (verbose) mode lets you see progress per thread.
- Can compress large (> 2 GB) files.
- Can compress from an input stream for pipeline processing.

Limitations:
- Cannot decompress (use bunzip2 for that).

Performance: See for yourself. It's easy to see the difference on a large file:

/home/drusifer> ls -lh bigfile.txt
-rw-r--r-- 1 drusifer drusifer 783M Mar 23 14:09 bigfile.txt

First I'll use bzip2 to compress it:

/home/drusifer> time bzip2 bigfile.txt
477.820u 1.080s 8:06.11 98.5% 0+0k 0+0io 102pf+0w
/home/drusifer> ls -lh bigfile.txt.bz2
-rw-r--r-- 1 drusifer drusifer 59M Mar 23 14:09 bigfile.txt.bz2

That took just over eight minutes and compressed my file to 59M. Now I'll try zipmt. My machine has four CPUs, so I'll tell it to use four threads via the -t option:

/home/drusifer> time zipmt -t 4 bigfile.txt
0.000u 0.400s 1:57.27 0.3% 0+0k 0+0io 152pf+0w
/home/drusifer> ls -lh bigfile.txt.bz2
-rw-r--r-- 1 drusifer drusifer 59M Mar 23 14:26 bigfile.txt.bz2

Zipmt took only two minutes and achieved the same compression ratio as bzip2! It's four times faster than regular bzip2 because it's using four CPUs instead of just one!
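The split-compress-concatenate trick works because concatenated bzip2 streams form a valid .bz2 file. Here is a minimal Python sketch of the same idea (an illustration, not zipmt's actual C code; the function name and chunk size are made up for the example):

```python
import bz2
from concurrent.futures import ThreadPoolExecutor

def compress_multipart(data: bytes, chunk_size: int, workers: int = 4) -> bytes:
    """Compress each chunk as an independent bzip2 stream, in parallel.

    Concatenating the resulting streams yields a valid .bz2 file that
    standard bunzip2 can decompress in one pass.
    """
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(bz2.compress, chunks)  # one stream per chunk
    return b"".join(parts)

original = b"hello zipmt " * 100_000
compressed = compress_multipart(original, chunk_size=256 * 1024)
# A single decompress call reads all the concatenated streams back.
assert bz2.decompress(compressed) == original
```

In CPython this actually parallelizes, because bz2 compression releases the GIL while it runs.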

multi-threaded compression zip c bzip


A Java agent that uses runtime analysis to detect multi-threaded access to objects. It is very useful when you need to inspect code written by others that is too complex for static concurrency analysis. One of the big problems with static analysis is figuring out which objects are accessed by multiple threads, because objects are often used in all kinds of frameworks, and a few lines of configuration or code can change everything. Every few seconds a report is written to file, and this report can be used to start manual inspection. Manual inspection is still a very daunting task, but with the report it is much easier to find problem areas. The Concurrency Detector can only be used on Java 5 and higher; Java 4 and older don't have support for Java agents. The initial version used AspectJ as the Java agent, but custom configuration of AspectJ in combination with Spring projects that also use AspectJ proved problematic, and it didn't provide the control needed. That is why it was replaced by ASM. Example configuration:

#the base packages whose fields are instrumented.
#Multiple packages can be listed and need to be separated with a ;
basepackages=org.apache.catalina
#if normal instance variables need to be instrumented (True/False)
instance.normal=True
#if instance volatiles need to be instrumented (True/False)
instance.volatile=True
#if instance finals need to be instrumented (True/False)
instance.final=True
#if normal static variables need to be instrumented (True/False)
static.normal=True
#if static volatiles need to be instrumented (True/False)
static.volatile=True
#if static finals need to be instrumented (True/False)
static.final=True
#For debugging purposes, if you want to have a look at
#the instrumented classes (True/False)
write.class=True
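The core idea, recording which threads touch which fields at runtime, can be sketched in a few lines. This toy Python proxy is only an analogue of the concept (the real tool instruments field access at the bytecode level with ASM); the `Config` class and `timeout` field are hypothetical:

```python
import threading
from collections import defaultdict

class AccessRecorder:
    """Proxy that records which threads touch each attribute of a target."""
    def __init__(self, target):
        object.__setattr__(self, "_target", target)
        object.__setattr__(self, "_threads", defaultdict(set))

    def __getattr__(self, name):                  # reads on the proxy
        self._threads[name].add(threading.get_ident())
        return getattr(self._target, name)

    def __setattr__(self, name, value):           # writes on the proxy
        self._threads[name].add(threading.get_ident())
        setattr(self._target, name, value)

    def shared_fields(self):
        """Fields accessed by more than one thread."""
        return [f for f, tids in self._threads.items() if len(tids) > 1]

class Config:                                     # hypothetical target class
    timeout = 30

rec = AccessRecorder(Config())
_ = rec.timeout                                   # read from the main thread
t = threading.Thread(target=lambda: setattr(rec, "timeout", 60))
t.start(); t.join()                               # write from a second thread
print(rec.shared_fields())                        # → ['timeout']
```

A periodic dump of `shared_fields()` would correspond to the report the detector writes every few seconds.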


An experimental library to make multi-threaded code more testable.


Coming soon. This Java project is an effort to create a lightweight multi-threaded framework for client applications.

multi-threaded java


Ender Lib creates an abstraction that emulates threads in Flex and Flash. While true threads are not possible in the Flash Player, much to our dismay, we can still roughly approximate them with a framework that provides yielding and scheduling. In many cases, such as socket processing, events replace the need for threads in AS3. However, where long-running processing needs to take place, Ender Lib can emulate threading so that apps don't appear to "freeze". There are just a few rules for how to write your threaded routines, but if you follow them, all the special code needed to approximate threading will be done for you in a flexible framework!
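The yield-and-reschedule approach can be sketched with generators: each "thread" runs until it yields, and a scheduler rotates through the ready tasks. This is a rough Python analogue of the idea, not Ender Lib's AS3 API:

```python
from collections import deque

class Scheduler:
    """Round-robin scheduler for generator-based cooperative "threads"."""
    def __init__(self):
        self.ready = deque()

    def spawn(self, gen):
        self.ready.append(gen)

    def run(self):
        while self.ready:
            task = self.ready.popleft()
            try:
                next(task)              # run the task until it yields
            except StopIteration:
                continue                # task finished; drop it
            self.ready.append(task)     # reschedule after each yield

results = []
def worker(name, steps):
    for i in range(steps):
        results.append((name, i))
        yield                           # give the other "threads" a turn

s = Scheduler()
s.spawn(worker("a", 2))
s.spawn(worker("b", 2))
s.run()
print(results)  # → [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

Because the tasks interleave at each yield, a long computation split into yielding steps never starves the event loop, which is exactly why the UI stops appearing to freeze.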

flex as3 flash parallel threads multi-threading emulation threading


This add-on is now included in the Io main repository. MySQL is a fast, multi-threaded, multi-user SQL database server. IoMySQL is a MySQL binding for Io, by Hong, MinHee.

my := MySQL establish("localhost", "user", "password", "database")

# Get rows by Map
my queryThenMap("SELECT * FROM rel") foreach(at("col") println)

# Get rows by List
my query("SELECT * FROM rel") foreach(at(0) println)

my close

binding db database mysql io

Simple Shutdown Scheduler

Simple Shutdown Scheduler (also named SSS) is a light and useful tool to schedule shutdowns/reboots/... for any computer in your network.
- It is light, because a background application like this should not take 50 MB of memory or 1% of CPU!
- It is simple to use, because the best applications are the easiest and most intuitive to use :)
- It is secure (all passwords are saved encrypted, and passwords are never readable as text in memory).
- It is multi-threaded, so the application will never hang.


Massively Multiplayer Online Server Engine

MMOSE (Massively Multiplayer Online Server Engine) is an MMORPG server engine. It is based on the .NET Framework 2.0/3.0/3.5 and is multi-threaded.


A complete web news management system written in PHP. All content control is configurable through a web-based administration section. Features include story moderation, threaded comments, templating/themes, polls, multi-language translations, RDF imp


This is a sample project showing how to go from single-threaded code to multi-threaded code and then to tasks (Parallel Extensions Framework). View my blog: http://blog.decarufel.net
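The same progression can be sketched in Python, where `concurrent.futures` plays the role the Parallel Extensions task model plays in .NET (the sample project itself is .NET; this is just an illustration of the three stages):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def work(n):
    return n * n

nums = list(range(8))

# Stage 1. Single-threaded: one call after another.
serial = [work(n) for n in nums]

# Stage 2. Hand-rolled threads: you manage threads and result slots yourself.
threaded = [None] * len(nums)
def run(i, n):
    threaded[i] = work(n)
threads = [threading.Thread(target=run, args=(i, n)) for i, n in enumerate(nums)]
for t in threads: t.start()
for t in threads: t.join()

# Stage 3. Task-based: describe the work and let a pool schedule it,
# which is the step the Parallel Extensions Framework corresponds to.
with ThreadPoolExecutor(max_workers=4) as pool:
    tasks = list(pool.map(work, nums))

assert serial == threaded == tasks
```

The point of the last stage is that the scheduling and result collection disappear from your code, which is what makes the task model easier to get right than raw threads.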


Description
This project was created to perform source code analysis on the various Linux kernels. The project was written fairly quickly and can be expanded upon to include new language support and new metrics to report on.

Key Features
- Language-independent analysis
- Code normalization based on the file's language
- Source code analysis based on the file's language
- Capability to expand the metrics reported on
- Source folder structure is maintained
- Multi-threaded operations for increased speed

How to Run
The Java program is in an executable JAR file, which means it can be run by executing it. The only problem is that currently there is no user interface to display the data after the analysis. The current implementation prints to console a report on the root folder and the first level of subfolders. This means that to actually get any results, the JAR must be run through a terminal/console. Example running the JAR in Linux's terminal:

java -jar static-source-anlaysis-0.1.jar >> output-file.txt

This will print the output into the output-file.txt file, which contains the results. In cases where you are running out of memory (which should only happen on extremely big folders), increase the JVM's memory by using an alternative run command. Example running the JAR with increased memory in Linux's terminal:

java -Xms256m -Xmx1024m -XX:-UseGCOverheadLimit -jar static-source-anlaysis-0.1.jar >> output-file.txt

The -Xms flag specifies the starting memory and -Xmx specifies the maximum amount of memory; the latter is what you need to alter depending on the hardware you have.

static staticanalysis analyzer sourceanalyzer code analysis source sourcecode


What is rrssdl
This is just a simple yet powerful Ruby system that downloads the link tag in an RSS feed item. The main trunk source is actually the TV show variant (which is what I built first); however, it would be easy to branch the code to handle other types of RSS feeds.

Why rrssdl
rrssdl was created with the intention of using it in combination with the rTorrent client (however, this would work equally well with Transmission or any other client that can watch a directory). The configuration is very simple, and at the same time offers a lot of versatility. The system was designed with the idea that it should be easy to implement new features. I will always be open to receiving new feature requests. rrssdl is designed to be very lightweight and use minimal resources, and at the same time be very easy to configure to specific needs.

Current Features
- Logging (now with log4r!)
- Download to a specific path
- Download to a different path if there is trouble parsing the title (season/episode validation)
- Specify your own collection of regexes for title validation (and season and episode parsing)
- Choose which shows you want enabled
- Choose which feeds you want enabled
- Set a timeout for feeds (abort if a feed is not responding or responding too slowly)
- Multi-threaded, but thread-safe (feed refreshing)
- Daemon mode (run in the background, Linux/Unix only)
- Config reload via SIGHUP
- Exit cleanly via SIGINT (Ctrl-C)
- Persist state in a state file (location is configurable)
- Advanced configuration file (easily extensible for new features)
- Post-download commands for shows, feeds, and globally

Todo
- TTL for feeds
- Download link referrer for feeds
- PID file when in daemon mode (plus config key for location)
- Attempt to create dirs if they don't exist
- Config option to keep the original filename
- Append .torrent if not terminated by .torrent
- Try to download to the review dir if the download dir raises an exception
- Save state file on show update (thread safe)
- Switch to YAML config file format
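The core loop, parse a feed, validate titles against configurable regexes, and collect the link of each matching item, can be sketched briefly. This Python stand-in only illustrates the idea (rrssdl itself is Ruby); the regex and the `matching_links` helper are made up for the example:

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical pattern in the spirit of the configurable title regexes:
# capture show name, season, and episode from "Show S01E02" style titles.
TITLE_RE = re.compile(r"(?P<show>.+?)\s+S(?P<season>\d+)E(?P<ep>\d+)", re.I)

def matching_links(rss_xml, enabled_shows):
    """Return (title, link) for items whose title parses and whose show
    name is in the enabled set."""
    out = []
    for item in ET.fromstring(rss_xml).iter("item"):
        title = item.findtext("title", "")
        link = item.findtext("link", "")
        m = TITLE_RE.match(title)
        if m and m.group("show").lower() in enabled_shows:
            out.append((title, link))
    return out

feed = """<rss version="2.0"><channel>
  <item><title>Good Show S01E02</title><link>http://example.com/a.torrent</link></item>
  <item><title>Other Show S03E04</title><link>http://example.com/b.torrent</link></item>
</channel></rss>"""

print(matching_links(feed, {"good show"}))
# → [('Good Show S01E02', 'http://example.com/a.torrent')]
```

Dropping each matched link into a watch directory is then all a client like rTorrent needs to pick the download up.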
News
2009-Jan-14: Starting the conversion to log4r; it should take a day or two. Going to try and knock off a bunch of the todo items in the next few days. rrssdl trunk is currently broken, so don't update yet!
2008-Oct-20: Version 0.2 BETA released (SAME DAY RELEASES YEA!!!). I was bored so I fixed a ton of bugs tonight; things should be pretty solid now. No official download yet; as soon as I get some feedback on this release I will look into creating an official download.
2008-Oct-20: Version 0.1 BETA released. Initial release; functionality works, but don't expect it to be bug free.

Status
rrssdl is an active project, and I intend to keep it fully up to date. I have a lot of other features I want to add to this project, and they will slowly get added as I find time (and more quickly if someone requests them).

Dependencies
- Ruby (not sure which version, but anything recent will probably work)
- Some Ruby libs (all should come standard with your Ruby package): rss/1.0, rss/2.0, open-uri, optparse, timeout, thread
- log4r (NEW!!!): this may not come with your Ruby package; Ubuntu people can apt-get install liblog4r-ruby

Beta Testing
Feel like beta testing rrssdl? You are more than welcome to do so. There are no official downloads at the moment, so the only way to get things running is through SVN. This means you'll need a Subversion client and one of the following command lines:

Tester (trunk): svn checkout http://rrssdl.googlecode.com/svn/trunk/ rrssdl
Tester (revision 30): svn -r 30 checkout http://rrssdl.googlecode.com/svn/trunk/ rrssdl
Developer: svn checkout https://rrssdl.googlecode.com/svn/trunk/ rrssdl --username USER_NAME

Right now I am the only one in the developer class, but if you have interest in helping me with rrssdl I will be happy to add you.

torrent bittorrent ruby downloader tv rtorrent rss


This project contains the following components:
- AVR109bootloader class library: a multi-threaded class for serial port communication with the AVR109 bootloader.
- AVR109Gui: a rudimentary but functional GUI to demonstrate the class library.
- Bootloader: a slightly modified version of Atmel's basic AVR109 bootloader.
I have a few more projects at red79.net

bootloader avr net c atmel

membase's WorkloadGenerator

A simple Java sample of a multi-threaded workload generator.


Introduction
lucene-log4j solves a recurrent problem that production support teams face whenever a live incident happens: filtering production log statements to match a session/transaction/user ID.

Motivation
In production we often find distributed systems collaborating with each other in order to provide services. As messages travel through these systems, they usually carry a unique ID that identifies the main transaction (which makes sense when a message results in several child messages being fired, such as in a distributed search). You can log this ID, which you associated with the current thread, in your server logs along with other information that you consider useful, so you can later come back to the logs to find out what actually happened.

The problem
As you are most probably aware, in a busy multi-threaded application server the log statements written by one thread quickly entangle with the ones written by other threads. So in order to filter the log to show only the statements related to a certain ID, you will need some tools. Some of them are:
- Sequential grep: This is probably the first thing that comes up. It turns out to be a non-trivial task, since you have to consider multi-line statements. All the same, this is a sequential operation, which is slow and puts unnecessary I/O on your servers.
- Replicating logs to a central location and indexing them: Log4j provides a JMS appender which allows you to send your log over the wire. Then you can store the logs in a central repository and index them by the ID. The problem with this approach is that you need spare space on this repository for ALL your production systems, which in big clusters means lots of space and network traffic.

My approach
To solve the problems stated above (sequential operation and space/network requirements), I came up with this in-site solution, which consists of building a searchable Lucene index inside the application deployed on the application server.
It works by extending Log4j's RollingFileAppender with Lucene indexing routines. Then, with a LuceneLogSearchServlet, you get access to your log using a web frontend. This solves the former problems and has the benefit of distributing the load of a search. Combined with a messaging middleware, e.g. Mule ESB, it's possible to combine the search results and present them all together.

Limitations
This approach is not perfect: in corner cases where the logs rotate at the moment of the search, the results will be messed up. This is reported in the results, though, with the message: "WARNING: log file has been rolled over! Don't trust the search results and re-run the query."
FilePosTrackingRollingFileAppender writes the index to disk every indexFlushInterval. If your JVM crashes before the next write, there won't be entries in the index for the log statements written after the last checkpoint.
If you have changed your concrete implementation of FilePosTrackingRollingFileAppender#populateDocument(long, LoggingEvent, Document), then you should delete your existing Lucene index (which on Windows also means you should stop your application server to release the file locks). This might be changed in the future so as to support multiple versions of the index. This implies renaming the old index and using a new directory to store the newer version. Your index searcher application, e.g. a servlet, should support this as well.

Tips
The LuceneLogSearchServlet output can be gzipped to reduce network traffic. The sample lucene_log4j_sample_webapp project includes the setup to use pjl-comp-filter.
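The essential structure, append log lines while maintaining an index from transaction ID to file positions, can be sketched briefly. In this toy Python analogue a plain dict stands in for the Lucene index and a list for the log file; the `txn=` pattern and class name are invented for the example:

```python
import re
from collections import defaultdict

TXN_RE = re.compile(r"txn=(\S+)")   # hypothetical ID pattern in a log line

class IndexedLog:
    """Append-only log with an inverted index from transaction ID to
    line positions, in the spirit of the Lucene-backed appender."""
    def __init__(self):
        self.lines = []                 # stands in for the log file
        self.index = defaultdict(list)  # txn id -> positions in the log

    def append(self, line):
        m = TXN_RE.search(line)
        if m:
            self.index[m.group(1)].append(len(self.lines))
        self.lines.append(line)

    def search(self, txn_id):
        # What the search servlet does: jump straight to the matching
        # statements instead of grepping the whole file sequentially.
        return [self.lines[p] for p in self.index.get(txn_id, [])]

log = IndexedLog()
log.append("10:00:01 INFO  txn=abc123 request received")
log.append("10:00:01 DEBUG txn=zzz999 cache miss")
log.append("10:00:02 INFO  txn=abc123 request done")
print(log.search("abc123"))
```

The lookup cost no longer depends on the size of the log, which is the whole advantage over sequential grep.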

log lucene log4j


A simple multi-threaded, multi-protocol server and client.


This secure multi-threaded chat server & P2P file transfer client supports the following feature set:
-> Multi-user server
-> Chat client
-> P2P file transfer
-> Secure communication {Diffie-Hellman / AES (Rijndael)}
-> Password-based authentication
-> Access DB for users
-> Passwords are saved in hashed form in the database
-> A utility to hash the password
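The Diffie-Hellman handshake at the heart of the secure channel can be sketched in a few lines. This is a toy Python illustration of the math only, not this project's C# code: real deployments use a standardized large-prime group (e.g. the RFC 3526 groups) and run the shared secret through a KDF before using it as an AES key.

```python
import secrets

P = 4294967291          # a small prime, for illustration only
G = 5                   # generator, also chosen just for the demo

def dh_keypair():
    private = secrets.randbelow(P - 2) + 2   # random secret exponent
    public = pow(G, private, P)              # value safe to send in the clear
    return private, public

# Each side generates a key pair and exchanges only the public half.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# Both sides derive the same shared secret from the peer's public value:
# (g^b)^a = (g^a)^b mod p.
a_secret = pow(b_pub, a_priv, P)
b_secret = pow(a_pub, b_priv, P)
assert a_secret == b_secret
```

An eavesdropper sees only the public values, and recovering the shared secret from them is the discrete logarithm problem, which is what makes the subsequent AES session key safe to derive.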

pakistan lahore csharp basit dotnet11 fast chat openchatserver multithreaded tanveer p2pfiletransfer chatserver secure c11

Managed Task Framework .NET Runtime

The MTF.NET Runtime is a multi-threaded scheduler designed to execute high-performance .NET applications efficiently across multiple CPU cores.


The Pentest Power Console project will provide penetration testers with a tool that offers an intuitive interface from which to control the manifold aspects of a penetration test. The tool currently exists as Neet, the Network Enumeration and Exploitation Tool, but since the acronym NEET has been used by the British Government to refer to people who are Not in Education, Employment or Training, the project owner has decided to rename the tool, give it a substantial upgrade, and move it to a reliable hosting platform. The migration is not yet complete, but it is a work in progress and the aim is to have PPC up and working on Google Code by December 1st 2009. PPC provides a command-line-based interface to a comprehensive penetration testing control centre, underpinned by industry-standard tools such as Nmap, OpenVAS, Samba and the Metasploit Toolkit. Each penetration test can be finely tuned by means of settings in the rich configuration file, many of which can be overridden on the command line. The many facilities of PPC will be listed on the Wiki, as they are too numerous to mention here. The main aims of the Pentest Power Console are as follows:

Power
- One terminal window provides real-time updates on vulnerabilities or informational issues as they are discovered.
- The Power Console provides a customised Bourne Again shell view of the results, so shell command pipelines can easily be used to sort data as required. However, the customisations mean that data is presented and accessed in a novel way, minimising the need to construct complex shell commands or scripts.
- Multiple instances of the Power Console can be opened at any time, allowing operations to be carried out on multiple hosts simultaneously.
- Exploits are launched in separate xterms (or aterms or eterms or whichever is the user's preference), making it easy to control a number of exploited hosts at once.
- Built-in hash dumping and on-host auditing for compromised hosts.
- All exploitation and post-exploitation activities are controlled by the penetration tester, and auto-exploitation is available for explicitly-configured attack vectors only.

Flexibility
- Highly configurable via the configuration files and the command line.
- Scan multiple networks (from multiple NICs) simultaneously.
- Multi-threaded and easily expanded via modules.

Traceability
- A rich audit trail is maintained, which details every network command executed.
- Raw results of all test tools are preserved in a text format.

Ease of use
- Includes inline help, comprehensive man pages, and aggregated, rapidly human-parsed results.
- Colour-coded output makes it easy to quickly assess the security of a host or service.

pentesting penetration console testing