
Other articles (41)

  • The farm’s regular Cron tasks

    1 December 2010, by

    Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the shared hosting farm on a regular basis. Combined with a system Cron on the farm’s central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
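
    For illustration only (not part of the original article), the "system Cron on the central site" mentioned above could be a crontab entry along these lines; the URL is a hypothetical placeholder, not the actual endpoint used by gestion_mutu_super_cron:

     # Hypothetical crontab line: hit the central site once a minute so its
     # super Cron runs and, in turn, triggers the Cron of every instance.
     * * * * * curl -s http://central-site.example/cron-endpoint > /dev/null 2>&1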

  • Encoding and processing into web-friendly formats

    13 April 2011, by

    MediaSPIP automatically converts uploaded files to internet-compatible formats.
    Video files are encoded in Ogv and WebM (supported by HTML5) and in MP4 (supported by Flash).
    Audio files are encoded in MP3 and Ogg (supported by HTML5) and in MP3 (supported by Flash).
    Where possible, text is analyzed to retrieve the data needed for search engine indexing, and the document is then exported as a series of image files.
    All uploaded files are stored online in their original format, so you can (...)
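
    As an illustration only (not taken from the article), conversions of this kind are commonly done with ffmpeg; the exact commands MediaSPIP runs are not shown in the excerpt, so the flags below are assumptions:

     # Hypothetical ffmpeg invocations producing the three web formats mentioned above
     ffmpeg -i input.avi -c:v libtheora -q:v 5 -c:a libvorbis -q:a 5 output.ogv
     ffmpeg -i input.avi -c:v libvpx -b:v 1M -c:a libvorbis output.webm
     ffmpeg -i input.avi -c:v libx264 -crf 23 -c:a aac -b:a 128k output.mp4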

  • Contribute to documentation

    13 April 2011

    Documentation is vital to the development of improved technical capabilities.
    MediaSPIP welcomes documentation by users as well as developers, including: critiques of existing features and functions; articles contributed by developers, administrators, content producers and editors; screenshots to illustrate the above; translations of existing documentation into other languages.
    To contribute, register for the project users’ mailing (...)

On other sites (9145)

  • Video streaming through HAProxy

    21 January 2015, by n00bie

    I want to stream video from my webcam to many clients (all clients use an HTML5 video player).

    Now I have this:

    Server:

    sudo gst-launch-0.10 tcpserversrc port = 1234 ! oggparse ! tcpserversink port = 1235

    Sender:

    ffmpeg -f video4linux2 -s 320x240 -i /dev/mycam -f alsa -i hw:1 -codec:v libtheora -qscale:v 5 -codec:a libvorbis -qscale:a 5 -f ogg http://localhost:1234

    Receiver:

     <video width="320" height="240" autoplay="autoplay">
      <source src="http://localhost:1235" type="video/ogg">
      Your browser does not support the video tag.
     </video>

    It works.

    Now I want to increase the number of webcams, which means running more GStreamer pipelines. But I want to use only port 80 to communicate between the server and the clients, so I am trying to use HAProxy.

    HAProxy config (only one web camera):

    global
           maxconn 4096
           user workshop-staff
           group workshop-staff
           daemon
           log 127.0.0.1 local0 debug

    defaults
           log     global
           mode    http
           option  httplog
           option  dontlognull
           retries 3
           option redispatch
           option http-server-close
           option forwardfor
           maxconn 2000
           timeout connect 5s
           timeout client  15min
           timeout server  15min
           option http-no-delay

    frontend public
           bind *:80
           use_backend stream_input if { path_beg /stream_input }        
           use_backend stream_output if { path_beg /stream_output }

    backend stream_input
           server stream_input1 localhost:1234

    backend stream_output
           server stream_output1 localhost:1235

    Server:

    sudo gst-launch-0.10 tcpserversrc port = 1234 ! oggparse ! tcpserversink port = 1235

    Sender:

    ffmpeg -f video4linux2 -s 320x240 -i /dev/mycam -f alsa -i hw:1 -codec:v libtheora -qscale:v 5 -codec:a libvorbis -qscale:a 5 -f ogg http://localhost/stream_input

    Receiver:

     <video width="320" height="240" autoplay="autoplay">
      <source src="http://localhost/stream_output" type="video/ogg">
      Your browser does not support the video tag.
     </video>

    But in this case, the HTML5 video player shows nothing.

    If I change the receiver to the following (i.e. use localhost:1235 instead of localhost/stream_output):

     <video width="320" height="240" autoplay="autoplay">
      <source src="http://localhost:1235" type="video/ogg">
      Your browser does not support the video tag.
     </video>

    It works. Could someone help me?
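
    As a hedged aside (not part of the original question): tcpserversink emits a raw Ogg byte stream rather than HTTP responses, so an HTTP-mode HAProxy backend may fail to relay it, whereas a TCP-mode section forwards bytes untouched. A minimal sketch reusing the port numbers above; note that TCP mode cannot do the path_beg routing, so this would either replace the existing *:80 frontend or need a different bind port:

     listen stream_output_tcp
            bind *:80
            mode tcp
            timeout client 15m
            timeout server 15m
            server stream_output1 localhost:1235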

  • Optical Drive Value Proposition

    28 August 2010, by Multimedia Mike — General

    I have the absolute worst luck in the optical drive department. Ever since I started building my own computers in 1995 — close to the beginning of the CD-ROM epoch — I have burned through a staggering number of optical drives. Seriously, especially in the time period between about 1995-1998, I was going through a new drive every 4-6 months or so. This was also during that CD-ROM speed race where the drive packages kept advertising loftier ‘X’ speed ratings. I didn’t play a lot of CD-ROM games during that timeframe, though I did listen to quite a few audio CDs through the computer.



    I use “optical drive” as a general term to describe CD-ROM drives, CD-R/RW drives, DVD-ROM drives, DVD-R/RW drives, and drives capable of doing any combination of reading and writing CDs and DVDs. In my observation, optical media seems to be falling out of favor somewhat, giving way to online digital distribution for things like games and software, as well as flash drives and external hard drives vs. recordable or rewritable media for backup and sneakernet duty. Somewhere along the line, I started to buy computers that didn’t even have optical drives. That’s why I have purchased at least 2 external USB drives (seen in the picture above). I don’t have much confidence that either works correctly. My main desktop until recently, a Mac Mini, has an internal optical drive that grew flaky and unreliable a few months after the unit was purchased.

    I just have really rotten luck with optical drives. The most reliable drive in my house is the one on the headless machine that, until recently, was the main workhorse on the FATE farm. The eject switch doesn’t work correctly, so I have to log in remotely, 'sudo eject', walk to the other room, pop in the disc, walk back to the other room, and work with the disc.

    Maybe optical media is on its way out, but I still have many hundreds of CD-ROMs. Perhaps I should move forward on this brainstorm to archive all of my optical discs on hard drives (and then think of some data mining experiments, just for the academic appeal), before it’s too late; optical discs don’t last forever.

    So if I needed a good optical drive, what should I consider? I’ve always been the type to go cheap, I admit. Many of my optical drives were on the lower end of the cost spectrum, which might have played some role in their rapid replacement. However, I’m not sold on the idea that I’m getting quality just because I’m paying a higher price. That LG unit at the top of the pile up there was relatively pricey and still didn’t fare well in the long (or even medium) term.

    Come to think of it, I used to have a ridiculous stockpile of castoff (but somehow still functional) optical drives. So many, in fact, that in 2004 I had a full size PC tower that I filled with 4 working drives, just because I could. Okay, I admit that there was a period where I had some reliable drives.

    That might be an idea, actually– throw together such a computer for heavy duty archival purposes. I visited Weird Stuff Warehouse today (needed some PC100 RAM for an old machine and they came through) and I think I could put together such a box rather cheaply.

    It’s a dirty job, but… well, you know the rest.

  • Multiprocess FATE Revisited

    26 June 2010, by Multimedia Mike — FATE Server, Python

    I thought I had brainstormed a simple, elegant, multithreaded, deadlock-free refactoring for FATE in a previous post. However, I sort of glossed over the test ordering logic which I had not yet prototyped. The grim, possibly deadlock-afflicted reality is that the main thread needs to be notified as tests are completed. So, the main thread sends test specs through a queue to be executed by n tester threads and those threads send results to a results aggregator thread. Additionally, the results aggregator will need to send completed test IDs back to the main thread.



    But when I step back and look at the graph, I can’t rationalize why there should be a separate results aggregator thread. That was added to cut down on deadlock possibilities since the main thread and the tester threads would not be waiting for data from each other. Now that I’ve come to terms with the fact that the main and the testers need to exchange data in realtime, I think I can safely eliminate the result thread. Adding more threads is not the best way to guard against race conditions and deadlocks. Ask xine.



    I’m still hung up on the deadlock issue. I have these queues through which the threads communicate. At issue is the fact that they can cause a thread to block when inserting an item if the queue is "full". How full is full? Immaterial; seeking to answer such a question is not how you guard against race conditions. Rather, it seems to me that one side should be doing non-blocking queue operations.

    This is how I’m planning to revise the logic in the main thread:

     test_set = set of all tests to execute
     tests_pending = test_set
     tests_blocked = empty set
     tests_queue = multi-consumer queue to send test specs to tester threads
     results_queue = multi-producer queue through which tester threads send results

     while there are tests in tests_pending:
       pop a test from test_set
       if test depends on any tests that appear in tests_pending:
         add test to tests_blocked
       else:
         add test to tests_queue in a non-blocking manner
         if tests_queue is full, add test to tests_blocked

     while there are results in the results_queue:
       get a result from results_queue in a non-blocking manner
       remove the corresponding test from tests_pending

     if tests_blocked is non-empty:
       sleep for 1 second
       test_set = tests_blocked
       tests_blocked = empty set
     else:
       insert n shutdown signals, one for each tester thread

     go to the top of the loop and repeat until there are no more tests

     while there are results in the results_queue:
       get a result from results_queue in a blocking manner

    Not mentioned in the pseudocode (so it doesn’t get too verbose) is the logic that checks whether a retrieved test result is actually an end-of-thread signal. These signals are counted, and the whole test process is done when one has been received for each thread.

    On the tester thread side, it’s safe for them to do blocking test queue retrievals and blocking result queue insertions. The reason for the 1-second delay before resetting tests_blocked and looping again is that I want to guard against the situation where tests A and B are to be run, A depends on B running first, and while B is running (and happens to be a long encoding test), the main thread is spinning about, obsessively testing whether it’s time to insert A into the tests queue.
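
    For illustration, a tester thread loop consistent with that description might look like the sketch below; the function and variable names, and the use of None as the shutdown signal, are assumptions rather than code from the actual FATE scripts:

     # Hypothetical tester loop: blocking get on the test queue, blocking put on
     # the results queue, exit when a shutdown sentinel (None here) arrives.
     def tester_loop(tests_queue, results_queue):
         while True:
             test_spec = tests_queue.get()    # blocking retrieval
             if test_spec is None:            # shutdown signal from the main thread
                 results_queue.put(None)      # report end-of-thread back
                 break
             result = run_test(test_spec)     # run_test() is a placeholder
             results_queue.put(result)        # blocking insertion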

    It all sounds just crazy enough to work. In fact, I coded it up and it does work, sort of. The queue gets blocked pretty quickly. Instead of sleeping, I decided it’s better to perform the put operation using a 1-second timeout.
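
    A minimal sketch of that put-with-timeout idea (the queue size, names and helper function are assumptions, not the author’s code):

     import multiprocessing
     import Queue  # exception definitions in Python 2; the module is named queue in Python 3

     tests_queue = multiprocessing.Queue(maxsize=10)  # maxsize chosen arbitrarily here
     tests_blocked = set()

     def submit(test_spec):
         try:
             # Wait up to 1 second for space instead of sleeping and retrying blindly.
             tests_queue.put(test_spec, block=True, timeout=1)
         except Queue.Full:
             tests_blocked.add(test_spec)  # retry on a later pass through the main loop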

    Still, I’m paranoid about the precise operation of the IPC queue mechanism at work here. What happens if I try to stuff in a test spec that’s a bit too large? Will the module take whatever I give it and serialize it through the queue as soon as it can? I think an impromptu science project is in order.

    big-queue.py:

     #!/usr/bin/python

     import multiprocessing
     import Queue

     def f(q):
       str = q.get()
       print "reader function got a string of %d characters" % (len(str))

     q = multiprocessing.Queue()
     p = multiprocessing.Process(target=f, args=(q,))
     p.start()
     try:
       q.put_nowait('a' * 100000000)
     except Queue.Full:
       print "queue full"

     $ ./big-queue.py
     reader function got a string of 100000000 characters
    

    Since 100 MB doesn’t even make it choke, FATE’s little test specs shouldn’t pose any difficulty.