Advanced search

Media (0)

Keyword: - Tags - /xmlrpc

No media matching your criteria is available on the site.

Other articles (112)

  • Customise by adding your own logo, banner or background image

    5 September 2013

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (15805)

  • FFMPEG: Properly sidechain_compress stereo background with stereo sidechain into stereo output

    6 December 2023, by Eduard Sukharev

    I'm doing a voiceover and, since Sony Vegas does not support sidechaining, I render the voiceover into voices.wav and then use the sidechain_compress filter, as per the ffmpeg documentation:

    



    ffmpeg -y -i background.m4a -i voices.wav -filter_complex \
    "[1:a]asplit=2[sc][mix];\
    [0:a][sc]sidechaincompress=threshold=0.015:ratio=2:level_sc=0.8:release=500:attack=1[compr];\
    [compr][mix]amerge" sidechain_1.wav


    



    voices.wav is a stereo audio file, as is background.m4a. But here's how the resulting file looks when loaded into Sony Vegas:

    



    [screenshot of the resulting file's tracks in Sony Vegas]

    



    This shows that in channels 1/2 I get the compressed background, while in channels 3 and 4 I get two mono tracks that somehow differ (probably the original voices input and a somewhat altered voices input, both in mono). UPD: I don't want to further process the resulting tracks in Sony Vegas; I'd prefer ffmpeg to be the last step in my production process. The screenshot above is for illustration purposes only.

    



      

    1. Does the background get sidechain-compressed by only the left or right channel of the voices? If so, how do I change that so it is compressed by both channels (some voices are panned left or right, so there might be an actual difference in the compressed result)?
    2. What are those channels 3 and 4? Why are they mono?
    3. How do I get a single 1/2 stereo track in the output wav file instead of these weird 4 channels in 3 tracks? (I've looked at the pan complex filter, but didn't figure out how to set it up in my case; a possible filtergraph is sketched below the list.)
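
    For the last point in particular, one filtergraph that might work (an untested sketch, reusing the command above) is to merge the compressed background and the voices and then pan the four merged channels back down to a single stereo pair; the output name sidechain_stereo.wav is just a placeholder:

    # sketch only: amerge produces 4 channels, pan sums them back down to stereo
    ffmpeg -y -i background.m4a -i voices.wav -filter_complex \
    "[1:a]asplit=2[sc][mix];\
    [0:a][sc]sidechaincompress=threshold=0.015:ratio=2:level_sc=0.8:release=500:attack=1[compr];\
    [compr][mix]amerge,pan=stereo|c0=c0+c2|c1=c1+c3[out]" \
    -map "[out]" sidechain_stereo.wav

    Replacing the last line of the filtergraph with [compr][mix]amix=inputs=2[out] should also give a stereo output, though amix rescales the input levels rather than simply summing them.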


    


  • Unable to find correct bash syntax for comparing numbers

    17 October 2024, by kali

    In part of a long script I'm simply trying to compare two numbers; one is constant and the other is retrieved via ffprobe (the 474 in the error messages is the height found by ffprobe).

    


    CURRENT_RES=$(ffprobe -v quiet -select_streams v:0 -show_entries stream=height -of csv=s=x:p=0 "${fn}")
  if [[ $CURRENT_RES -gt 1080 ]]; then
    echo "leaving preset normal"
    SLOW_PRESET=("" "")
  else
    echo "setting preset to slow"
    SLOW_PRESET=(-preset slow)
  fi;


    


    produces:

    


    /usr/local/bin/startScreen.sh: line 95: [[: 474
    474: syntax error in expression (error token is "474")
    setting preset to slow


    


    I also tried arithmetic operators like:

    


      if (($CURRENT_RES>1080)); then
    echo "leaving preset normal"
    SLOW_PRESET=("" "")
  else
    echo "setting preset to slow"
    SLOW_PRESET=(-preset slow)
  fi;


    


    got a slightly different, but essentially the same, error message:

    


    /usr/local/bin/startScreen.sh: line 95: ((: 474
    474>1080: syntax error in expression (error token is "474>1080")
    setting preset to slow


    


    What's even more baffling is that 15 lines below this block there is another comparison which works perfectly fine!

    


    if [[ $CURRENT_RES -gt $CONVERT_HEIGHT ]]; then


    


    I thought maybe the inline 1080 literal was confusing the if expression, so I tried assigning 1080 to a variable and reusing that, which changed nothing.

    


    Edit: (dropping this here in case someone else falls into the same mistake)
    Following up on Shawn's advice from the comments, using cat -v showed that 474 was repeated twice.
    For debugging purposes I ran:

    


    ffprobe -v quiet -select_streams v:0 -show_entries stream=height -of json "filename.mp4"


    


    to find this weird format:

    


    {
    "programs": [
        {
            "streams": [
                {
                    "height": 472
                }
            ]
        }
    ],
    "streams": [
        {
            "height": 472
        }
    ]
}


    


    Finally, I changed the ffprobe command to:

    


    CURRENT_RES=$(ffprobe -v quiet -select_streams v:0 -show_entries stream=height -of json "${fn}" | jq .streams[0].height)


    


    to resolve the issue.
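
    As an aside, a jq-free variant that should behave the same (an untested sketch; it simply keeps the first of the duplicated values) would be:

    # sketch: ffprobe lists the stream under both "programs" and "streams",
    # so keep only the first height value it prints
    CURRENT_RES=$(ffprobe -v quiet -select_streams v:0 -show_entries stream=height -of csv=s=x:p=0 "${fn}" | head -n1)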

    


  • Seeking Ideas: How Can I Automatically Generate a TikTok Video from a Custom Song Using C# [closed]

    14 May 2024, by Jamado

    I'm creating a C# program which creates a video from a song and posts it on TikTok.

    


    Right now my program

    


      

    1. Uses Spleeter to split the song into stems
    2. Uses a script off GitHub to create waveform images of the stems


    


    I want my end video to look like this:

    


    https://vm.tiktok.com/ZMM7CDmUt/ - only one song will play per video

    https://vm.tiktok.com/ZMM7Xdw8b/

    https://vm.tiktok.com/ZMM7CcGtE/ - no webcam or the hitting animations

    


    Basically I want the stems of the song to be placed on top of an FL Studio timeline, synced to the song, then I want to overlay an image on top of the video. And then, to cater to today's generation's 3-second attention span, add some audio visualisations on top of the FL Studio recording (the music-making app in the video) and a little shake to the image.

    


    I've tinkered with ffmpeg before, and I reckon it could do the trick here. I'd use the waveform pictures and mix them with a pre-recorded FL Studio video using ffmpeg's filters, like vstack to stack images, scroll to slide them around, and blend. And then tweak the overlay filter for that shake effect. Plus, I found out ffmpeg can whip up some basic audio visualizations, which is neat. (https://gist.github.com/Neurogami/aeed8693f7ac375d5e013b8432d04d3f)
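
    For what it's worth, a rough single-command sketch of that idea (untested; flstudio_capture.mp4, cover.png and song.mp3 are placeholder file names) could combine showwaves for the visualisation with overlay expressions for the shake:

    # sketch: shake a cover image over an FL Studio capture and add a waveform strip
    ffmpeg -i flstudio_capture.mp4 -i cover.png -i song.mp3 -filter_complex \
    "[2:a]showwaves=s=1080x200:mode=line:colors=white[waves];\
    [0:v][1:v]overlay=x='(W-w)/2+8*sin(t*15)':y='80+4*sin(t*9)'[base];\
    [base][waves]overlay=x=0:y=H-h[out]" \
    -map "[out]" -map 2:a -shortest tiktok_clip.mp4

    The numbers in the sin() expressions only control how far and how fast the image shakes; they would need tuning by eye.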

    


    But my main issue with this approach is how the waveform images will look weird/out of place on top of the FL Studio video, because FL Studio has a really specific "theme". I could manually create a template and then use some other library to merge the template image and the waveform image, but that feels a bit janky and would probably be a hassle to set up and implement.

    


    So, I'm curious if you folks have any nifty libraries, GitHub gems, or ideas to help me nail this video?