Showing posts with label thread.

Friday, January 24, 2014

threading - I/O^CPU intensive

See also -- [[linux sys programming]] has a one-pager on this topic.

 This is a common topic in IV and in the literature.

IO-intensive – may scale up to hundreds of threads, even with just 4 cores.
eg(?): network server
eg: GUI – multiple threads could be waiting for disk or user input

CPU-intensive on multi-core machines – don't need many threads (roughly one per core), but a single thread is sub-optimal because it leaves the other cores idle. Each core is effectively dedicated to one compute thread.
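A minimal Java sketch of this sizing rule (the pool sizes and the 50x multiplier are illustrative assumptions, not recommendations):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-bound work: roughly one thread per core is enough;
        // extra threads only add context-switching overhead.
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores);

        // IO-bound work: threads spend most of their time blocked on
        // sockets or disk, so the pool can be far larger than the core count.
        ExecutorService ioPool = Executors.newFixedThreadPool(cores * 50);

        cpuPool.shutdown();
        ioPool.shutdown();
    }
}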

Thursday, August 15, 2013

dotnet thread pool, tips from Pravin

http://www.howdoicode.net/2013/04/net-thread-pool.html has some good tips

Pool threads are always background threads; this is not configurable.

Consider creating your own thread if synchronization is needed -- if we rely on synchronization to start, wait on and stop the work done by the thread, the task duration is very likely to increase and could soon lead to starvation. If we have to use WaitHandles, locks or other synchronization techniques, creating our own threads is the better option.

Consider creating your own thread for longer-duration work -- the thread pool should be used only for short-lived work. The faster a thread completes its work and returns to the pool, the better the performance. If pool threads are assigned very long tasks, starvation can set in after a while: no new threads are created once the pool's limit is reached, and if all threads are busy the queued work may never be performed.
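The same trade-off exists on the JVM. A minimal Java sketch of the idea (class and thread names are my own, purely illustrative): short tasks go to a shared pool, while a long-running or blocking task gets a dedicated thread so it cannot starve the pool.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OwnThreadVsPool {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Short-lived task: fine for the shared pool.
        pool.execute(() -> System.out.println("quick task done"));

        // Long-running / blocking task: give it its own thread so it
        // cannot tie up a pool thread indefinitely.
        Thread longRunner = new Thread(() -> {
            try {
                Thread.sleep(5_000); // stands in for a long blocking operation
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "long-runner");
        longRunner.start();

        longRunner.join();
        pool.shutdown();
    }
}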

Friday, July 5, 2013

## most popular base threading libraries

win32 - on windows
Pthreads - on *nix

Traditionally, these are the 2 most important among the base thread libraries. Unsurprisingly, all the base thread libraries are written in C. As explained elsewhere on my blog, there's no reason to implement these base libraries in any other language. C is the clear, overwhelming favorite.

Wednesday, June 12, 2013

How many is too many threads in a process?

By default the dotnet thread pool allows up to 250 worker threads per core [1]. In dotnet, I believe all threads are native OS threads, not "green" threads manufactured by the thread library.

I read in other places that 400 threads per core is considered a soft upper limit.

Remember each core, like a car, can have only one driver thread at any time. So all the other threads must be off the core -- either blocked (context-switched out, waiting for something) or runnable but preempted.

[1] My friend told me it's just 25 in older versions.

"blocked" means preempted and context switched

Whenever we say a thread is blocked, or an operation is a blocking operation, we mean the thread is immediately context-switched out by the scheduler and will not run again until the event it is waiting for occurs. (Strictly speaking, "preempted" usually describes a runnable thread forced off the core at the end of its time slice; a blocked thread gives up the core voluntarily.)
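A small Java sketch that makes this visible (class name is illustrative): a thread blocked in sleep() is reported as TIMED_WAITING and burns no CPU, because the scheduler has switched it off the core.

public class BlockedDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(5_000); // a blocking call: the thread is descheduled
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.start();
        Thread.sleep(100); // give the worker time to enter sleep()
        System.out.println(t.getState()); // prints TIMED_WAITING: off the core, waiting
        t.join();
    }
}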

Friday, May 17, 2013

pthread-based thread pool - full source code

On my Ubuntu laptop, I tested the source code bundled with [[pthreads programming]].

* uses malloc/free, not new/delete
* uses structs, not classes
* I had problems compiling it with g++, but gcc worked.

Which files have "substance"? Only tpool.c. The header file contains no implementation, the test code is like a unit test, and the bugtest is optional.

Sunday, May 12, 2013

this.setDaemon ^ this.IsBackground

No difference between java and c#.

Non-daemon threads (the default) will run their course and can block host process exit after the main thread exits.

Daemon threads will NOT block host process exit after the main thread exits.
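A minimal Java sketch (class name is illustrative): with setDaemon(true) the JVM exits as soon as main returns; comment that line out and the process stays alive until the worker finishes. The C# equivalent is setting Thread.IsBackground = true.

public class DaemonDemo {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(10_000); // simulate long-running background work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true); // must be set before start(); remove it and the JVM waits ~10s
        worker.start();
        System.out.println("main: exiting immediately");
        // With a daemon worker the process ends here; a non-daemon worker
        // would keep the process alive until it finishes sleeping.
    }
}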

Wednesday, December 19, 2012

mutex in boost IPC library - briefly

Shared mutex is supported in many operating systems such as ....?

A mutex in boost::thread is effective only within a single process [1]. To share a mutex between 2 processes, you can use either

A) an anonymous mutex in shm (i.e. shared mem) managed by your processes
B) a named mutex managed by the kernel ("hotel service desk"). They don't live in shm. In this case, I guess the mutex is treated like a modem or any other hardware device, whereby any access is regulated by the kernel.

In B, the mutex is a boost::interprocess::named_mutex
In A, the mutex is a boost::interprocess::interprocess_mutex

If you want a recursive mutex,
In B, you can use a boost::interprocess::named_recursive_mutex
in A, you can use a boost::interprocess::interprocess_recursive_mutex

But why bother with a shared mutex in the first place? Usually to guard a shared resource such as an object (say object K) in shared mem. By definition K is accessible from 2 (or more) processes P1 and P2. They can both ignore and bypass the mutex and access K directly. Therefore the guardian mutex lock is advisory, not mandatory.

[1] because the mutex object lives in the private memory of the process.

Friday, December 14, 2012

c# get unique thread id

I find the thread id number typically too small and not "realistic" enough. I am looking for a unique-looking identifier, one that I can search for in log files.

Here's my attempt to get it.


            /*According to MSDN:
             * An operating-system ThreadId has no fixed relationship to a managed thread, because an unmanaged host
             * can control the relationship between managed and unmanaged threads. Specifically, a sophisticated host
             * can use the CLR Hosting API to schedule many managed threads against the same operating system thread,
             * or to move a managed thread between different operating system threads.
             */

            // Note: AppDomain.GetCurrentThreadId() returns the OS thread id and is
            // marked obsolete, precisely because of the caveat quoted above.
            var winThrId = AppDomain.GetCurrentThreadId();
            var thId = "/T" + Thread.CurrentThread.ManagedThreadId + "/" + winThrId + "/ ";

Tuesday, September 6, 2011

what method is blocking in a blocking queue

Only a handful of methods in java can block -- wait(), join(), Future.get(), lock(), entering a synchronized block, socket accept(), socket read(), sleep()... so which of these is the blocking method in a blocking queue?

I guess it's wait() -- at least in a hand-rolled queue built on synchronized/wait/notify. (The java.util.concurrent implementations block in Condition.await() on a ReentrantLock, which is the same idea.)
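A minimal hand-rolled sketch (not the java.util.concurrent implementation) showing exactly where the blocking happens:

import java.util.LinkedList;
import java.util.Queue;

// Minimal hand-rolled blocking queue: take() blocks inside wait() until put() notifies.
public class TinyBlockingQueue<T> {
    private final Queue<T> items = new LinkedList<>();

    public synchronized void put(T x) {
        items.add(x);
        notifyAll(); // wake any consumer blocked in take()
    }

    public synchronized T take() throws InterruptedException {
        while (items.isEmpty()) {
            wait(); // <-- the blocking call: releases the monitor and parks this thread
        }
        return items.remove();
    }

    public static void main(String[] args) throws InterruptedException {
        TinyBlockingQueue<String> q = new TinyBlockingQueue<>();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("got: " + q.take()); // blocks until the producer puts
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        Thread.sleep(500);
        q.put("hello");
        consumer.join();
    }
}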

Friday, September 2, 2011

how does weblogic or a thread pool detect a thread is stuck?

Q: how does weblogic or a thread pool detect a thread is stuck?

We know weblogic won't kill a stuck thread. Weblogic only detects stuck threads that are managed by weblogic, never a user-created thread. (I believe the detection is simply time-based -- a thread that has been working on the same request for longer than the configured StuckThreadMaxTime, 600 seconds by default, gets marked as stuck.)

Can you create your own Runnable object and pass it to the weblogic executor pool? Very uncommon, perhaps unsupported.

Saturday, August 6, 2011

Stamford equity derivative IV

Q: describe some of the GC algorithms?
(Now I think) A: ref counting + root object /descent/. Ref-counting alone isn't reliable due to islands.

Q: For a given specification, a Java version uses more memory than C and many languages. Does this impact performance, and how?
%%A: more swapping

Q: what if there is enough free memory?
%%A: then no performance penalty
Now I think reading/writing 20% more memory still takes more time.

Q: Is JVM a performance problem, with the new JIT compilers?

Q: Beside JVM and extra mem usage, what else would you say to criticize java performance among competing languages?

Q: How many KB for a java Object?

Q: Why do you say java userland threads are light-weight relative to kernel threads?
%%A: memory. One address space per JVM

Q: does Object.java have any field?
%%A: serial number
A: none

Q: How does the GC decide which objects live in the old-generation area and which in the young-generation area?

Do new objects live in the young or the old generation area?
%%A: young, except static fields

Q: what's hashmap load factor?

Q: What can you tune in JVM
%%A: heap size, native/green thread, young generation size

Q: We agree that threads, statics and JNI are 3 types of root objects, but I've not heard of JNDI as a 4th type. Are you sure?

Q: What's so good about java's thread feature compared to other languages?
I only know 2 comparable languages -- c# and c++.
%%A(2013): memory model; concurrent collections

Q: key challenges of large java projects in your past?

Thursday, March 10, 2011

boost thread copyable? no

boost::thread objects are non-copyable (they are movable only). The callable object you pass to the constructor, by contrast, is copied into the thread unless you wrap it with boost::ref. See the boost docs.

Sunday, January 30, 2011

start a bunch of boost threads

#include <boost/thread.hpp>
#include <iostream>

// a regular free function; its name is implicitly converted to a func ptr below
void worker_func() {
    std::cout << "worker running\n";
}

int main() {
    boost::thread_group grp;

    for (int i = 0; i < 90; ++i) {
        grp.create_thread(worker_func);
    }

    grp.join_all();
    return 0;
}

Saturday, November 27, 2010

## boost thread tutorials

http://flylib.com/books/en/2.131.1.186/1/ -- [[c++ cookbook]]

http://blog.emptycrate.com/node/277

The boost class documentation is reference material, not a tutorial.

Tuesday, October 5, 2010

statement reorder for atomic variables

http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile

The java.util.concurrent.atomic package javadoc says:

The memory effects for accesses and updates of atomics generally follow the rules for volatiles, as stated in  The Java Language Specification, Third Edition (17.4 Memory Model):

   * get has the memory effects of reading a volatile variable.
   * set has the memory effects of writing (assigning) a volatile variable.
   * lazySet ... is weaker
   * compareAndSet and all other read-and-update operations such as getAndIncrement have the memory effects of both reading and writing volatile variables.
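A minimal Java sketch of why these memory effects matter (class name is illustrative): because set() has volatile-write semantics and get() has volatile-read semantics, the plain write to data cannot be reordered past the flag set, so a reader that sees the flag also sees the data.

import java.util.concurrent.atomic.AtomicBoolean;

public class AtomicHandoff {
    static int data = 0;                                  // plain, non-volatile field
    static final AtomicBoolean ready = new AtomicBoolean(false);

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;          // (1) ordinary write
            ready.set(true);    // (2) volatile-write semantics: (1) cannot be reordered after (2)
        });
        Thread reader = new Thread(() -> {
            while (!ready.get()) { }   // volatile-read semantics
            System.out.println(data);  // guaranteed to print 42, never a stale 0
        });
        reader.start();
        writer.start();
        reader.join();
        writer.join();
    }
}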

Saturday, September 25, 2010

boost thread to run a method of my object (full source

#include <iostream>
#include <boost/thread.hpp>
class Worker{
public:
    Worker()    {
        // the thread is not-a-thread ie a dummy thread object, until we assign a real thread object
    }
    void start(int N)    {
        // pass "this" as first arg to processQueue(), which runs not as a method but as a free func in its own thread
        m_Thread = boost::thread(&Worker::processQueue, this, N);
    }
    void join(){m_Thread.join();}
    void processQueue(unsigned N)    {
        long ms = N * 1000; // milliseconds() wants an integer count, not a float
        boost::posix_time::milliseconds workTime(ms);
        std::cout << "Worker: started, will work for "
                  << ms << "ms"
                  << std::endl;
        boost::this_thread::sleep(workTime);
        std::cout << "Worker: completed" << std::endl;
    }
private:
    boost::thread m_Thread;
};
int main(int argc, char* argv[]){
    std::cout << "main: startup" << std::endl;
    Worker worker;
    worker.start(3);
    std::cout << "main: waiting for thread" << std::endl;
    worker.join();
    std::cout << "main: done" << std::endl;
    return 0;
}

Sunday, September 19, 2010

RAII lock auto-release with boost scoped_lock


#include <iostream>
#include <boost/thread.hpp>
#include <string>
#include <sstream>
#include <list> // std::list is used by MyQueue below
using namespace std;
boost::posix_time::milliseconds sleepTime(1);

template<typename T>
class MyQueue {
public:
    void enqueue(const T& x) {
        cout << "\t\t\t > enqueuing ... " << x << "\n";
        boost::mutex::scoped_lock myScopedLock(mutex_);
        cout << "\t\t\t >> just got lock ... " << x << "\n";
        list_.push_back(x);
        // A scoped_lock is destroyed (and thus unlocked) when it goes out of scope
    }
    T dequeue() {
        boost::mutex::scoped_lock lock(mutex_);
        if (list_.empty()) {
            throw "empty"; // unlock
        }
        T tmp = list_.front();
        list_.pop_front();
        cout << "< dequeu " << tmp << "\n";
        return (tmp);
    }
private:
    std::list<T> list_;
    boost::mutex mutex_;
};
MyQueue<std::string> queueOfStrings;
int reps = 5;
void sendSomething() {
    std::string s;
    for (int i = 0; i < reps; ++i) {
        stringstream st;
        st << i;
        s = "item_" + st.str();
        queueOfStrings.enqueue(s);
        boost::this_thread::sleep(sleepTime);
    }
}
void recvSomething() {
    std::string s;
    for (int i = 0; i < reps*3; ++i) {
        try {
            s = queueOfStrings.dequeue();
        } catch (...) {
            cout << "<- - (    ) after releasing lock \n";
            boost::this_thread::sleep(sleepTime*2);
        }
    }
}
int main() {
    boost::thread thr1(sendSomething);
    boost::thread thr2(recvSomething);
    thr1.join();
    thr2.join();
}

Saturday, September 18, 2010

simple boost thread program

#include <iostream>
#include <boost/thread.hpp>
#include <boost/date_time.hpp>

void workerFunc() {
    boost::posix_time::seconds workTime(3);
    std::cout << "Worker: starting up then going to sleep" << std::endl;
    boost::this_thread::sleep(workTime); // Thread.sleep
    std::cout << "Worker: finished" << std::endl;
}
int main(int argc, char* argv[]) {
    std::cout << "main: startup" << std::endl;
    boost::thread workerThread(workerFunc); // pass a func ptr to thread ctor
    boost::posix_time::seconds workTime(1);
    std::cout << "main: sleeping" << std::endl;
    boost::this_thread::sleep(workTime); // Thread.sleep
    std::cout << "main: waking up and waiting for thread" << std::endl;
    workerThread.join();
    std::cout << "main: done" << std::endl;
    return 0;
}

Thursday, November 5, 2009

java thread pool(phrasebook

* Runnable.java -- the tasks/jobs lined up are objects implementing Runnable.java. It's counter-intuitive to me that the worker threads in the pool by definition implement Runnable.java, and the jobs also implement Runnable.java -- see blog post [[ wrapper object of "me" as a field in me ]]

* queues -- I think thread pools need a FIFO queue to hold the jobs/tasks. Usually it's a blocking queue (see the sketch below).
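A minimal Java sketch putting the two points together -- pool threads pulling Runnable jobs off a blocking FIFO queue (the constructor parameters are illustrative):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolPhrasebook {
    public static void main(String[] args) throws InterruptedException {
        // 2 core threads, max 4, 60s idle timeout, jobs held in a blocking FIFO queue
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

        for (int i = 0; i < 10; i++) {
            final int jobId = i;
            // each job is just a Runnable; idle pool threads take it off the queue
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " ran job " + jobId));
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}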