
Other articles (48)

  • The MediaSPIP configuration area

    29 November 2010, by

    The MediaSPIP configuration area is restricted to administrators. An "administer" menu link is usually displayed at the top of the page [1].
    It lets you fine-tune your site's configuration.
    This configuration area is divided into three parts: the general site configuration, which among other things lets you change the main information about the site (...)

  • Retrieving information from the master site when installing an instance

    26 November 2010, by

    Purpose
    On the main site, a shared-hosting instance is defined by several things: its data in the spip_mutus table; its logo; its main author (the id_admin in the spip_mutus table, matching an id_auteur in the spip_auteurs table), who will be the only one able to finalize the creation of the instance;
    It can therefore be quite sensible to retrieve some of this information in order to complete the installation of an instance, for example to retrieve the (...)

  • Possibility of deployment as a farm

    12 April 2011, by

    MediaSPIP can be installed as a farm, with a single "core" hosted on a dedicated server and used by a multitude of different sites.
    This makes it possible, for example: to share setup costs between several projects/individuals; to rapidly deploy a multitude of unique sites; to avoid having to dump every creation into a digital catch-all, as is the case with the big general-public platforms scattered across the (...)

On other sites (4093)

  • What's the best FFMPEG method for frequent, automated compilation of timelapse videos?

    5 August 2020, by GoOutside

    I have a web application running on a not-particularly beefy Ubuntu Amazon Lightsail instance that uses FFMPEG to build a timelapse video from downloaded .jpg webcam photos taken every 2 minutes throughout the day (up to 720 images per day, with the set growing as new images are downloaded).

    


    The code I'm running every 20 minutes is this:

    


    ffmpeg -y -r 24 -pattern_type glob -i 'picturefolder/*.jpg' -s 1024x576 -vcodec libx264 picturefolder/timelapse.mp4

    


    This mostly works, but it is often quite slow, taking 30-60 seconds to run and getting slower as the day goes on, of course.

    


    Recently, I tried to use concat instead of globbing the entire folder over and over. I did not see a noticeable performance improvement, as it appears concat re-processes the entire video just to add a few frames to the end of it.
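
    For reference, a minimal sketch of the usual low-cost variant of this idea (the folder names, segment naming, and list file below are my illustrations, not from the question): encode only the newest frames into a short segment, then join segments with FFmpeg's concat demuxer in stream-copy mode, which appends without re-encoding the existing video.

    # Encode just the latest batch of frames into a short segment
    # (new_frames/ and segments/ are hypothetical paths).
    ffmpeg -y -r 24 -pattern_type glob -i 'new_frames/*.jpg' -s 1024x576 -vcodec libx264 segments/part_0042.mp4

    # List all segments in order; entries are resolved relative to the list file's directory.
    printf "file '%s'\n" segments/part_*.mp4 > list.txt

    # -c copy joins the segments without re-encoding them.
    ffmpeg -y -f concat -safe 0 -i list.txt -c copy picturefolder/timelapse.mp4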

    


    My question for any FFMPEG experts out there: what is the most efficient way to handle this kind of automated timelapse creation, given my setup? Is there a flag I'm missing? Perhaps a different, more efficient method? Or maybe a way to have the FFMPEG process just crawl through this at a more 'slow and steady' pace instead of big bursts of CPU usage.

    


    Or am I stuck with this and should just deal with it? My ultimate goal would be to continue using my current tier (2 GB RAM, 1 vCPU) without the expense of upgrading. Thank you very kindly for your help!
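
    On the 'slow and steady' point, a hedged sketch of the standard Unix-level levers (the command below is illustrative, not from the question): lower the encoder's scheduling priority and cap its thread count, trading wall-clock time for smoother CPU usage on a 1 vCPU instance.

    # Lowest CPU priority plus a single encoder thread smooths out the burst.
    nice -n 19 ffmpeg -y -r 24 -pattern_type glob -i 'picturefolder/*.jpg' -s 1024x576 -vcodec libx264 -threads 1 picturefolder/timelapse.mp4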

    


  • Understanding a script which uses ffmpeg to send rtmp input to node.js script

    4 June 2022, by Arpit Shukla

    I was trying to understand this shell script which uses ffmpeg to take an rtmp input stream and send it to a node.js script. But I am having trouble understanding the syntax. What is going on here?

    


    The script:

    


while :
do
  echo "Loop start"

  # Ask ffprobe for the stream's start_time; output is empty if the feed is unavailable.
  feed_time=$(ffprobe -v error -show_entries format=start_time -of default=noprint_wrappers=1:nokey=1 $RTMP_INPUT)
  printf "feed_time value: ${feed_time}"

  if [ ! -z "${feed_time}" ]
  then
    # Decode the feed to filtered mono 16 kHz WAV on stdout (stderr discarded)
    # and pipe it to the node script, passing feed_time as an argument.
    ffmpeg -i $RTMP_INPUT -tune zerolatency -muxdelay 0 -af "afftdn=nf=-20, highpass=f=200, lowpass=f=3000" -vn -sn -dn -f wav -ar 16000 -ac 1 - 2>/dev/null | node src/transcribe.js $feed_time
  else
    echo "FFprobe returned null as a feed time."
  fi

  echo "Loop finish"
  sleep 3
done
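
    As an aside (my illustration, not part of the original script), swapping the node consumer for a file write is an easy way to inspect exactly what travels through the pipe, namely a mono 16 kHz WAV audio stream:

    # Capture 5 seconds of the same audio pipeline to a file, then examine it
    # (sample.wav is a hypothetical path).
    ffmpeg -i $RTMP_INPUT -vn -sn -dn -f wav -ar 16000 -ac 1 -t 5 - 2>/dev/null > sample.wav
    ffprobe sample.wav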


    


      

    • What is feed_time here? What does it represent?
    • What is this portion doing: - 2>/dev/null | node src/transcribe.js $feed_time?
    • What is the use of sleep 3? Does this mean that we are sending the audio stream to node.js in chunks of 3 seconds?


    


  • Overthinking My Search Engine Problem

    31 December 2013, by Multimedia Mike — General

    I wrote a search engine for my Game Music Appreciation website, because the site would have been significantly less valuable without it (and I would eventually realize that the search feature is probably the most valuable part of this endeavor). I came up with a search solution that was a bit sketchy, but worked… until it didn’t. I thought of a fix but still searched for more robust and modern solutions (where ‘modern’ is defined as something that doesn’t require compiling a C program into a static CGI script and hoping that it works on a server I can’t debug on).

    Finally, I realized that I was overthinking the problem– did you know that a bunch of relational database management systems (RDBMSs) support full text search (FTS)? Okay, maybe you did, but I didn’t know this.

    Problem Statement
    My goal is to enable users to search the metadata (title, composer, copyright, other tags) attached to various games. To do this, I want to index a series of contrived documents that describe the metadata. Here are 2 examples of these contrived documents, interesting because both of these games have very different titles depending on region, something the search engine needs to account for:

    system: Nintendo NES
    game: Snoopy’s Silly Sports Spectacular
    author: None; copyright: 1988 Kemco; dumped by: None
    additional tags: Donald Duck.nsf Donald Duck


    system: Super Nintendo
    game: Arcana
    author: Jun Ishikawa, Hirokazu Ando; copyright: 1992 HAL Laboratory; dumped by: Datschge
    additional tags: card.rsn.gamemusic Card Master Cardmaster

    The index needs to map these documents to various pieces of game music and the search solution needs to efficiently search these documents and find the various game music entries that match a user’s request.

    Now that I’ve been looking at it for long enough, I’m able to express the problem surprisingly succinctly. If I had understood that much originally, this probably would have been simpler.

    First Solution & Breakage
    My original solution was based on SWISH-E. The CGI script was a C program that statically linked the SWISH-E library into a binary that miraculously ran on my web provider. At least, it ran until it decided to stop working a month ago when I added a new feature unrelated to search. It was a very bizarre problem, the details of which would probably bore you to tears. But if you care, the details are all there in the Stack Overflow question I asked on the matter.

    While no one could think of a direct answer to the problem, I eventually thought of a roundabout fix. The problem seemed to pertain to the static linking. Since I couldn’t count on the relevant SWISH-E library to be on my host’s system, I uploaded the shared library to the same directory as the CGI script and used dlopen()/dlsym() to fetch the functions I needed. It worked again, but I didn’t know for how long.

    Searching For A Hosted Solution
    I know that anything is possible in this day and age; while my web host is fairly limited, there are lots of solutions for things like this, and you can deploy any technology you want at reasonable prices. I figured that there must be a hosted solution out there.

    I have long wanted a compelling reason to really dive into Amazon Web Services (AWS) and this sounded like a good opportunity. After all, my script works well enough ; if I could just find a simple Linux box out there where I could install the SWISH-E library and compile the CGI script, I should be good to go. AWS has a free tier and I started investigating this approach. But it seems like a rabbit hole with a lot of moving pieces necessary for such a simple task.

    I had heard that AWS had something in this area. Sure enough, it’s called CloudSearch. However, I’m somewhat discouraged by the fact that it would cost me around $75 per month to run the smallest type of search instance, which is at the core of the service.

    Finally, I came to another platform called Heroku. It’s supposed to be super-scalable while having a free tier for hobbyists. I started investigating FTS on Heroku and found this article which recommends using the FTS capabilities of their standard hosted PostgreSQL solution. However, the free tier of Postgres hosting only allows for 10,000 rows of data. Right now, my database has about 5400 rows. I expect it to easily overflow the 10,000 limit as soon as I incorporate the C64 SID music corpus.

    However, this Postgres approach planted a seed.

    RDBMS Revelation
    I have 2 RDBMSs available on my hosting plan: MySQL and SQLite (the former is a separate service while SQLite is built into PHP). I quickly learned that both have FTS capabilities. Since I like using SQLite so much, I elected to leverage its FTS functionality. And it’s just this simple:

    CREATE VIRTUAL TABLE gamemusic_metadata_fts USING fts3
    ( content TEXT, game_id INT, title TEXT );


    SELECT game_id, title FROM gamemusic_metadata_fts WHERE content MATCH 'arcana';
    479|Arcana

    The ‘content’ column gets the metadata pseudo-documents. The SQL gets wrapped up in a little PHP so that it queries this small database and turns the result into JSON. The script is then ready as a drop-in replacement for the previous script.
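
    For a feel of the full round trip, a minimal sketch using the sqlite3 command-line shell against the table created above (the database file name and the abbreviated document text are my illustrations, not from the post):

    # Index one pseudo-document (content abbreviated from the Arcana example above).
    sqlite3 gamemusic.sqlite "INSERT INTO gamemusic_metadata_fts (content, game_id, title) VALUES ('system: Super Nintendo game: Arcana ...', 479, 'Arcana');"

    # Query it back; prints: 479|Arcana
    sqlite3 gamemusic.sqlite "SELECT game_id, title FROM gamemusic_metadata_fts WHERE content MATCH 'arcana';"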