Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (61)

  • MediaSPIP version 0.1 Beta

    16 April 2011

    MediaSPIP 0.1 beta is the first version of MediaSPIP deemed "usable".
    The zip file provided here contains only the MediaSPIP sources, in the standalone version.
    For a working installation, all of the software dependencies must be installed manually on the server.
    If you want to use this archive for a farm-mode installation, you will also need to make other changes (...)

  • APPENDIX: The plugins used specifically for the farm

    5 March 2010

    The central/master site of the farm needs several plugins in addition to those of the channels in order to work properly: the Gestion de la mutualisation plugin; the inscription3 plugin, to manage registrations and requests to create a mutualisation instance when users sign up; the verifier plugin, which provides a field-validation API (used by inscription3); the champs extras v2 plugin, required by inscription3 (...)

  • General document management

    13 May 2011

    MediaSPIP never modifies the original document that is uploaded.
    For each uploaded document it performs two successive operations: creating an additional version that can easily be viewed online, while leaving the original downloadable in case it cannot be read in a web browser; and retrieving the original document's metadata in order to describe the file textually.
    The tables below explain what MediaSPIP can do (...)

On other sites (6457)

  • How do I convert a .wav file to 16-bit 44.1 kHz using ffmpeg or another utility [closed]

    26 May 2023, by Seth Edwards

    A preface: I am building an environment for my own streaming box. While building the UI, I turned to the now-obsolete MSNTV box to find its UI sound effects.

    I found the dump on GitHub, downloaded it, and located where the sounds were stored.

    I listened to them one by one and noticed that they are WAV files, but they sound low quality, as if they had been compressed before being turned into WAV files.

    I was using the Apple Files app on an iPhone 6s running iOS 15.7.1.

    They play back fine.

    I tried importing them into GarageBand for iOS, and it gave me an error saying that it only allows 16-bit 44.1 kHz files. This confirmed my suspicion that they are low quality.

    I then tried playing them on a Dell Chromebook 3100 running ChromeOS. Chrome's player would not play the files either.

    I need to find out how to convert them to 16-bit 44.1 kHz WAV files.
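
    For what it's worth, a minimal ffmpeg invocation for this kind of conversion would be the following; the filenames are placeholders, not from the original post:

    # Decode whatever is inside the WAV container and re-encode it as
    # 16-bit PCM (pcm_s16le), resampled to 44.1 kHz.
    ffmpeg -i input.wav -c:a pcm_s16le -ar 44100 output.wav

    ffmpeg can read most compressed-in-WAV variants (ADPCM, GSM 6.10, etc.) and decodes them on the way in, so the same command covers the likely MSNTV formats.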

    My guess is that, since the MSNTV had a small amount of storage space, they compressed the audio.

    I tried converting the files to MP3, and they sound noticeably worse.

    Does anyone know how to convert these files so they can be played back normally?

    In the end I plan to use these files and play them using the pygame library.
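
    Since that is the end goal, here is a minimal pygame playback sketch for the converted files (the filename is a placeholder):

    import pygame

    # Match the mixer to the converted audio: 44.1 kHz, signed 16-bit, stereo.
    pygame.mixer.init(frequency=44100, size=-16, channels=2)

    click = pygame.mixer.Sound("click.wav")  # placeholder filename
    click.play()
    while pygame.mixer.get_busy():           # wait for playback to finish
        pygame.time.wait(10)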

    So far I have tried changing the metadata and converting to MP3.

  • ffmpeg concat demuxer dropping audio after first clip

    6 September 2020, by marcman

    I'm trying to concatenate a collection of videos that should all end up the same type and format. It works as expected when the original sources are all the same type, but when they're different types the demuxer drops audio. I understand that the concat demuxer requires all inputs to have the same codecs, but I believe I am already doing that.
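
    One way to verify that the intermediate clips really do match is to compare their audio parameters, since the concat demuxer cares about sample rate and channel count as well as the codec. A quick check with ffprobe (which ships with ffmpeg; the filename is a placeholder):

    ffprobe -v error -select_streams a:0 \
        -show_entries stream=codec_name,sample_rate,channels \
        -of default=noprint_wrappers=1 tmpdir/video_1.mp4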

    This is my workflow (pseudocode with a Python-like for loop):

    for i, video in enumerate(all_videos):
        # Call command for transcoding and filtering
        # I allow this command to be called on mp4, mov, and avi files
        # The point of this filter is:
        # (1) to superimpose a timestamp on the bottom right of the video
        # (2) to scale and pad the videos to a common output resolution (the specific numbers below are just copied from a video I ran, but they are filled in automatically for each given video by the rest of my script)
        # (3) to transcode all videos to the same common format
        ffmpeg \
            -y \
            -loglevel quiet \
            -stats \
            -i video_<i>.{mp4, mov, avi} \
            -vcodec libx264 \
            -acodec aac \
            -vf "scale=607:1080, pad=width=1920:height=1080:x=656:y=0:color=black, drawtext=expansion=strftime: basetime=$(date +%s -d'2020-08-27 16:42:26')000000 : fontcolor=white : text='%^b %d, %Y%n%l\\:%M%p' : fontsize=36 : y=1080-4*lh : x=1263-text_w-2*max_glyph_w" \
            tmpdir/video_<i>.mp4


    # create file_list.txt, e.g.
    #
    # file '/abspath/to/tmpdir/video_1.mp4'
    # file '/abspath/to/tmpdir/video_2.mp4'
    # file '/abspath/to/tmpdir/video_3.mp4'
    # ...


    ffmpeg \
        -f concat \
        -safe 0 \
        -i file_list.txt \
        -c copy \
        all_videos.mp4


    In my test case, my inputs are 3 videos in this order:


    1. a camcorder video output in H.264+AAC in an mp4
    2. an iPhone video in mov format
    3. an iPhone video in mp4 format

    When I review each of the intermediate mp4-transcoded videos in tmpdir, they all play back audio and video just fine and the filtering works as expected. However, when I review the final concatenated output, only the first clip (the camcorder video) has sound. When all the videos are from the camcorder there is no audio issue: they all have sound.


    When I output ffmpeg warnings and errors, the only thing that shows up is an expected warning about the timestamps:


    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909504, current: 5430298; changing to 5909505. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909505, current: 5431322; changing to 5909506. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909506, current: 5432346; changing to 5909507. This may result in incorrect timestamps in the output file.
    [mp4 @ 0x55997ce85300] Non-monotonous DTS in output stream 0:1; previous: 5909507, current: 5433370; changing to 5909508. This may result in incorrect timestamps in the output file.
    ...


    What might I be doing wrong here? I'm testing in both the Ubuntu 20.04 "Videos" application and VLC Player, and both exhibit the same problem. I'd prefer to use the demuxer if possible for speed, as re-encoding during concatenation is quite expensive.


    NOTE: This is a different issue from the one laid out here, in which some of the videos had no audio. In my case, all videos have both video and audio.
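
    One plausible cause, offered as an assumption rather than a confirmed fix: the concat demuxer needs the audio parameters (sample rate, channel count) to match across clips, not just the codec name, and phone and camcorder sources often differ (e.g. 44.1 kHz mono vs. 48 kHz stereo). Pinning those parameters in the per-clip transcode step would rule this out:

    # Same transcode as above, but force a common sample rate and channel count.
    ffmpeg -y -i video_<i>.{mp4, mov, avi} \
        -vcodec libx264 \
        -acodec aac -ar 48000 -ac 2 \
        tmpdir/video_<i>.mp4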


  • Displaying YUV420 data using an OpenGL ES shader is too slow

    28 November 2012, by user1278982

    I have a child thread called A that decodes video using ffmpeg on an iPhone 3GS, and another thread called B that displays the YUV data. In thread B, I use glTexSubImage2D to upload the Y, U, and V textures and then convert the YUV data to RGB in a shader, but the frame rate in the decode thread is only 15 fps. Why?

    Update:
    The frame size is 720 * 576.
    I also found something interesting: if I don't start the thread displaying the YUV data, the frame rate calculated in the decode thread is 22 fps, otherwise 15 fps. So I think my displaying method must be inefficient. The code is below.

    I have a callback in the decode thread:

    typedef struct _DVDVideoPicture
    {
      char *plane[4];     // pointers to the Y, U, and V planes
      int  iLineSize[4];  // stride of each plane in bytes
    } DVDVideoPicture;

    // Called from the decode thread with each freshly decoded picture.
    void YUVCallBack(void *pYUVData, void *pContext)
    {
      VideoView *view = (VideoView *)pContext;
      [view.glView copyYUVData:(DVDVideoPicture *)pYUVData];
      [view calculateFrameRate];
    }

    The copyYUVData method extracts the Y, U, and V planes separately. The following is the displaying thread method.
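
    The displaying thread method itself is cut off in the source. As a generic sketch of the technique described above (one GL_LUMINANCE texture per plane uploaded with glTexSubImage2D, converted to RGB in the fragment shader), not the author's actual code:

    // Upload each decoded plane into its own GL_LUMINANCE texture.
    // For YUV420, the U and V planes are half the width and height of Y.
    // This sketch assumes iLineSize[i] equals the plane width; OpenGL ES 2.0
    // has no GL_UNPACK_ROW_LENGTH, so padded rows must be repacked first.
    static void uploadPlanes(const DVDVideoPicture *pic, int width, int height,
                             GLuint yTex, GLuint uTex, GLuint vTex)
    {
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, yTex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_LUMINANCE, GL_UNSIGNED_BYTE, pic->plane[0]);

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, uTex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2,
                        GL_LUMINANCE, GL_UNSIGNED_BYTE, pic->plane[1]);

        glActiveTexture(GL_TEXTURE2);
        glBindTexture(GL_TEXTURE_2D, vTex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2,
                        GL_LUMINANCE, GL_UNSIGNED_BYTE, pic->plane[2]);
    }

    // Matching fragment shader (GLSL ES), BT.601 video-range YUV -> RGB:
    static const char *kFragmentShader =
        "precision mediump float;\n"
        "varying vec2 vTexCoord;\n"
        "uniform sampler2D yTex;\n"
        "uniform sampler2D uTex;\n"
        "uniform sampler2D vTex;\n"
        "void main() {\n"
        "    float y = 1.164 * (texture2D(yTex, vTexCoord).r - 0.0625);\n"
        "    float u = texture2D(uTex, vTexCoord).r - 0.5;\n"
        "    float v = texture2D(vTex, vTexCoord).r - 0.5;\n"
        "    gl_FragColor = vec4(y + 1.596 * v,\n"
        "                        y - 0.391 * u - 0.813 * v,\n"
        "                        y + 2.018 * u,\n"
        "                        1.0);\n"
        "}\n";

    On hardware of this era, per-frame uploads of a 720 * 576 frame are usually the dominant cost rather than the shader arithmetic, which would fit the observed drop from 22 fps to 15 fps when the display thread runs.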