Advanced search

Media (1)

Keyword: - Tags -/net art

Other articles (89)

  • Mediabox: opening images in the maximum space available to the user

    8 February 2011

    Image display is constrained by the width allowed by the site's design (which depends on the theme in use), so images are shown in a reduced format. To take advantage of all the space available on the user's screen, a feature can be added that displays the image in a multimedia box appearing above the rest of the content.
    To do this, the "Mediabox" plugin must be installed.
    Configuring the multimedia box
    As soon as (...)

  • Authorizations overridden by plugins

    27 April 2010

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their information on the authors page

  • Other interesting software

    12 April 2011

    We don't claim to be the only ones doing what we do... and we certainly don't claim to be the best either... We just try to do what we do well, and better and better...
    The following list corresponds to software that more or less tries to do what MediaSPIP does, or that MediaSPIP more or less tries to do the same as, no matter...
    We don't know them and we haven't tried them, but you may want to take a look.
    Videopress
    Website: (...)

On other sites (9491)

  • Using JavaCV/FFMPEG to push byte buffer image via RTMP

    6 May 2022, by ljnoah

    I have a USB camera that returns frames as a byte buffer, and I would like to push/send them via RTMP using the JavaCV library so the stream can be viewed in VLC. According to this link: http://bytedeco.org/javacpp-presets/ffmpeg/apidocs/org/bytedeco/ffmpeg/ffmpeg.html it should be possible, but I don't really have any experience with ffmpeg; from what I've read on their GitHub it should work, only I haven't been able to find an example of how to do it. Basically, I would like to take the byte buffer stored in a variable and send that image, rather than grabbing frames from a camera directly, as my USB camera is not detected by Android and needs external libraries to work.

    


    private byte[] FrameData = new byte[384 * 288 * 4];
    // private final Bitmap bitmap = Bitmap.createBitmap(PREVIEW_WIDTH, PREVIEW_HEIGHT, Bitmap.Config.ARGB_8888); // creates a bitmap with the proper format in case it is needed
    private final IFrameCallback mIFrameCallback = new IFrameCallback() {
        @Override
        public void onFrame(final ByteBuffer frameData) {
            // Copy the RGBX bytes of the current frame into FrameData.
            frameData.clear();
            frameData.get(FrameData, 0, frameData.capacity());
        }
    };


    


    This callback gives me the frames from my camera on the fly and writes them to FrameData, which I can compress to a bitmap if needed.

    


    My camera preview, where I register the callback above:

    


    public void handleStartPreview(Object surface) {
        if (mUVCCamera == null) return;
        try {
            mUVCCamera.setPreviewSize(mWidth, mHeight, 1, 26, UVCCamera.DEFAULT_PREVIEW_MODE, UVCCamera.DEFAULT_BANDWIDTH, 0);
        } catch (IllegalArgumentException e) {
            try {
                mUVCCamera.setPreviewSize(mWidth, mHeight, 1, 26, UVCCamera.DEFAULT_PREVIEW_MODE, UVCCamera.DEFAULT_BANDWIDTH, 0);
                Log.e(TAG, "handleStartPreview4");
            } catch (IllegalArgumentException e1) {
                callOnError(e1);
                return;
            }
        }
        int result = mUVCCamera.startPreview();
        // Deliver frames as RGBX to the callback above, then start capturing.
        mUVCCamera.setFrameCallback(mIFrameCallback, UVCCamera.PIXEL_FORMAT_RGBX);
        mUVCCamera.startCapture();
        startRecording();
    }


    


    My question is: how can I use this example:

    


    String ffmpeg = Loader.load(org.bytedeco.ffmpeg.ffmpeg.class);
    ProcessBuilder pb = new ProcessBuilder(ffmpeg, "-i", "/path/to/input.mp4", "-vcodec", "h264", "/path/to/output.mp4");
    pb.inheritIO().start().waitFor();


    


    to push the frames from the camera that are stored in the FrameData byte buffer via RTMP/RTSP to my server IP, even if I need to compress them to a bitmap first...
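
    One possible approach (a sketch only, not tested against this setup) is to skip the ProcessBuilder route and push the frames directly with JavaCV's FFmpegFrameRecorder: wrap the RGBX bytes in a Frame and record it to an RTMP URL. The URL "rtmp://SERVER_IP/live/stream", the 384x288 size and the 26 fps rate below are placeholders taken from the code above, and the H.264/FLV settings are assumptions.

    // Untested sketch: push the RGBX bytes held in FrameData to an RTMP server
    // using JavaCV's FFmpegFrameRecorder instead of launching the ffmpeg binary.
    import org.bytedeco.ffmpeg.global.avcodec;
    import org.bytedeco.ffmpeg.global.avutil;
    import org.bytedeco.javacv.FFmpegFrameRecorder;
    import org.bytedeco.javacv.Frame;
    import org.bytedeco.javacv.FrameRecorder;
    import java.nio.ByteBuffer;

    public class RtmpPusher {
        private final FFmpegFrameRecorder recorder =
                new FFmpegFrameRecorder("rtmp://SERVER_IP/live/stream", 384, 288); // placeholder URL and size
        private final Frame frame = new Frame(384, 288, Frame.DEPTH_UBYTE, 4);     // 4 channels: RGBX

        public void start() throws FrameRecorder.Exception {
            recorder.setFormat("flv");                           // RTMP carries an FLV stream
            recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
            recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);  // encoder-side pixel format
            recorder.setFrameRate(26);                           // matches the preview rate above
            recorder.start();
        }

        // Call this from onFrame() for every frame delivered by the camera.
        public void push(byte[] frameData) throws FrameRecorder.Exception {
            ByteBuffer dst = (ByteBuffer) frame.image[0];
            dst.position(0);
            dst.put(frameData, 0, frameData.length);
            recorder.record(frame, avutil.AV_PIX_FMT_RGBA);      // the input buffer is RGBX/RGBA
        }

        public void stop() throws FrameRecorder.Exception {
            recorder.stop();
            recorder.release();
        }
    }

    The ProcessBuilder example from the javadoc runs the ffmpeg executable as an external process, so it can only read inputs that ffmpeg can open by itself (files, devices, URLs); to feed it in-memory frames you would have to pipe raw video into its stdin, which is why the recorder API above is usually the simpler route from Java.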

    


  • Cannot include ffmpeg.exe using auto-py-to-exe

    4 April 2024, by YYY

    I'm trying to use auto-py-to-exe to generate an exe for my PyQt application that uses ffmpeg.

    


    Within the application, there are calls to ffmpeg as an OS command (like 'ffmpeg -i file ... output').
    Since I have ffmpeg installed on my machine and accessible from the Windows Path, I don't have any issue when I run the app exe on my machine.

    


    However, someone who doesn't have ffmpeg installed gets an error like "'ffmpeg' is not recognized as an internal or external command, operable program or batch file".

    


    I don't want to put ffmpeg inside a folder of the app, as I call local modules that already invoke ffmpeg as an OS command.

    


    I first tried to add ffmpeg.exe as a file (as recommended in one Stack Overflow post) and then as a binary, but without success.
    (auto-py-to-exe configuration screenshot)
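
    For reference, a common way to bundle the binary (an assumption here, not a verified fix for this setup) is PyInstaller's --add-binary option, which copies ffmpeg.exe next to the bundled application; on Windows the source and destination are separated by a semicolon, and the paths below are placeholders:

    pyinstaller --onedir --add-binary "C:\path\to\ffmpeg.exe;." your_app.py

    Even then, a plain 'ffmpeg ...' OS command only succeeds if the folder holding the bundled ffmpeg.exe is on PATH or the code builds the full path at runtime, which may be why the exe still fails to find the command.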

    


    I've already tried with the configuration stated in this

    


    Can anyone help me with this?

    


    EDIT N°2

    


    It turns out that when I export the exe on my machine and include ffmpeg as a binary, with either pyinstaller or auto-py-to-exe, the final exe doesn't recognize the ffmpeg command even though it's in the folder.

    


    (screenshot of the generated _internal folder)

    


    However, I've tried the same configuration on another machine and it worked without any error.

    


    I've tried installing another ffmpeg version and linking that version to the final exe, but without success.

    


  • How to use FFMPEG to merge multiple audios and videos with delay and offset for any streams

    9 July 2020, by Morak

    I want to use FFMPEG to merge multiple audios and videos.

    


    Materials are:

    


      

      1. three short audio clips (S1.mp4, S2.mp4, S3.mp4) without video (the files only have an audio stream),

      2. three video clips (V1.mp4, V2.mp4, V3.mp4) without sound (the files only have a video stream).


    


    I have the start times of all materials (i.e.:

    


    start-time of "S1.mp4" is 0 sec...
start-time of "S2.mp4" is 1220.5 sec...
start-time of "S3.mp4" is 2500.12 sec...


    


    and

    


    start-time of "V1.mp4" is 15.22 sec...
start-time of "V2.mp4" is 853.99 sec...
start-time of "V3.mp4" is 2901.37 sec...)


    


    My goal is:

    


    Merge all videos and audios into a single file, with each one entering at its start time.

    


    The requirement is depicted below (D = delay).

    


    D <--V1.mp4--> D <----------V2.mp4-------------> D <--V3.mp4--> blank

    <------------S1.mp4------> D <-------S2.mp4-----> D <-------S3.mp4------->

    The commands I use are below, but they do not work as expected.

    1st: prepare V.mp4 (merging all videos):

    ffmpeg -itsoffset {OFFSET.V1} -i V1.mp4 -itsoffset {OFFSET.V2} -i V2.mp4 -itsoffset {OFFSET.V3} -i V3.mp4 -map 0:v -map 1:v -map 2:v -c:v copy V.mp4

    2nd: prepare S.mp4 (merging all audios):

    ffmpeg -i S1.mp4 -i S2.mp4 -i S3.mp4 -filter_complex "[0]adelay=0[aud1];[1]adelay=1220.5[aud2];[2]adelay=2500.12[aud3];[aud1][aud2][aud3]amix=3[a]" -map "[a]" -c:a copy S.mp4

    final: merge V.mp4 with S.mp4:

    ffmpeg -i V.mp4 -i S.mp4 -map 0:v -map 1:a -vcodec copy -acodec copy final.mp4

    Any hint is appreciated!
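
    Two issues in the audio step, offered as a hedged sketch rather than a verified fix: adelay expects whole milliseconds (not seconds), and audio that has gone through filter_complex cannot be stream-copied with -c:a copy, so it has to be re-encoded. Assuming a reasonably recent FFmpeg (for the adelay all and amix normalize options), the second command might instead look like:

    ffmpeg -i S1.mp4 -i S2.mp4 -i S3.mp4 -filter_complex "[0:a]adelay=0:all=1[aud1];[1:a]adelay=1220500:all=1[aud2];[2:a]adelay=2500120:all=1[aud3];[aud1][aud2][aud3]amix=inputs=3:normalize=0[a]" -map "[a]" -c:a aac S.mp4

    A similar caveat applies to the video step: -itsoffset with -map 0:v -map 1:v -map 2:v produces three separate, shifted video streams in V.mp4 rather than one continuous track, so the video timeline generally has to be assembled with filters (for example setpts plus concat, or overlay onto a blank background) instead of -c:v copy.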