Advanced search

Media (0)

Keyword: - Tags -/signalement

No media matching your criteria is available on the site.

Other articles (83)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • User profiles

    12 April 2011

    Each user has a profile page where they can edit their personal information. In the default top-of-page menu, a menu item is created automatically when MediaSPIP is initialised, visible only when the visitor is logged in to the site.
    The user can edit their profile from their author page; a "Modifier votre profil" (edit your profile) link in the navigation is (...)

On other sites (10505)

  • Java - RTSP save snapshot from Stream Packets

    9 August 2016, by Guerino Rodella

    I’m developing an application which requests snapshots from DVRs and IP cameras. The device I’m working with only offers RTSP requests for this, so I implemented the necessary RTSP methods to start receiving the stream packets, and I began receiving them over the established UDP connection. My doubt is: how can I save the received data to a jpeg file? Where are the beginning and the end of the image bytes I receive?

    I searched for libraries that implement this kind of service in Java, like Xuggler (which is no longer maintained) and javacpp-presets, which bundles the ffmpeg and opencv libraries, but I had some environment problems with it. If someone knows an easy and good one which saves snapshots from streams, let me know.

    My code:

     import java.net.DatagramPacket;
     import java.net.DatagramSocket;
     import java.util.Arrays;
     import javax.xml.bind.DatatypeConverter;

     final long timeout = System.currentTimeMillis() + 3000;

     byte[] fullImage = new byte[ 1024 * 1024 ];   // accumulates the received payload bytes
     DatagramSocket udpSocket = new DatagramSocket( 8000 );
     int lastByte = 0;

     // Skip the first 2 packets because I think they are HEADERS.
     // Since I don't know what they mean, I just print them in hex.
     for( int i = 0; i < 2; i++ ){

        byte[] buffer = new byte[ 1024 ];
        DatagramPacket dataPacket = new DatagramPacket( buffer, buffer.length );
        udpSocket.receive( dataPacket );

        int dataLength = dataPacket.getLength();
        buffer = Arrays.copyOf( buffer, dataLength );

        System.out.println( "RECEIVED[" + DatatypeConverter.printHexBinary( buffer ) + " L: " + dataLength );

     }

     do{

        // Receive the next packet into a scratch buffer, then append only the
        // bytes that actually arrived to the image buffer.
        byte[] buffer = new byte[ 1024 ];
        DatagramPacket dataPacket = new DatagramPacket( buffer, buffer.length );
        udpSocket.receive( dataPacket );

        int received = dataPacket.getLength();
        System.arraycopy( buffer, 0, fullImage, lastByte, received );
        lastByte += received;

        System.out.println( "RECEIVED: " + received + " bytes, total " + lastByte );

     } while( System.currentTimeMillis() < timeout );
     // I know this timeout is wrong; I should stop after getting the full image bytes.

    The output:

    RECEIVED : 80E0000100004650000000006742E01FDA014016C4 L : 21
    RECEIVED : 80E00002000046500000000068CE30A480 L : 17
    RECEIVED : Tons of data from the streaming...
    RECEIVED : Tons of data from the streaming...
    RECEIVED : Tons of data from the streaming...
    [...]

    As you might suppose, the image I’m saving into a file is not readable, because I’m doing it wrong. I think the headers give me some info about the packets the server will send me, telling me where the image starts and ends in the stream, but I don’t understand them. Does someone know how to solve it? Any tips are welcome!
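
    For what it’s worth, the two packets printed above look like standard 12-byte RTP headers (RFC 3550) followed by H.264 parameter sets (0x67 is an SPS NAL unit, 0x68 a PPS), and the header’s marker bit is what flags the last packet of a frame. Below is a minimal sketch of pulling those fields apart, assuming plain RTP over UDP with no header extensions (shown in Python rather than Java purely for brevity):

    import struct

    def parse_rtp_header(packet: bytes):
        """Split a UDP payload into its 12-byte RTP header fields and the RTP payload."""
        if len(packet) < 12:
            raise ValueError("packet too short to be RTP")

        b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
        return {
            "version": b0 >> 6,            # should be 2 for RTP
            "marker": bool(b1 & 0x80),     # set on the last packet of a video frame
            "payload_type": b1 & 0x7F,     # e.g. 96 = dynamic, typically H.264 here
            "sequence": seq,               # detects loss / reordering
            "timestamp": timestamp,        # identical for all packets of one frame
            "ssrc": ssrc,                  # identifies the stream source
            "payload": packet[12:],        # NAL unit data (possibly an FU-A fragment)
        }

    # First packet from the question's output: an SPS NAL unit (0x67) follows the header.
    example = bytes.fromhex("80E0000100004650000000006742E01FDA014016C4")
    print(parse_rtp_header(example))

    Turning the payloads into a viewable image would still require reassembling fragmented NAL units (FU-A) and decoding an H.264 frame, which is what bindings such as the javacpp-presets ffmpeg package ultimately do.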

  • Save FFMPEG mp4 output to tempfile using Paperclip

    31 March 2021, by aminhs

    I am trying to save the mp4 output result to a tempfile when ffmpeg concatenates two mp4 input files using the Paperclip run command. After I run the command, the tempfile is always empty, with a text/plain content type.

    This is my code:

    output = Tempfile.new(["join_video_", ".mp4"], binmode: true)
    Paperclip.run("ffmpeg -i /path/to/Video/one.mp4 -i /path/to/Video/two.mp4 -filter_complex \"[0:v][1:v]concat=n=2:v=1:a=0[outv]\" -map \"[outv]\" -f mp4 #{File.expand_path(output.path)}")
    output.rewind
    output.close

    I am interested in knowing how to save the mp4 output result to a tempfile from the ffmpeg concatenation, rather than how to concatenate two mp4 video files.

    Thank you in advance.
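
    Not a Ruby-specific answer, but the same pipeline sketched in Python (the input paths are placeholders) may help narrow this down. One detail worth checking: Tempfile.new already creates an empty file on disk, and ffmpeg refuses to overwrite an existing output file unless -y is passed, in which case the command fails and the tempfile stays empty.

    import subprocess
    import tempfile

    # Placeholder inputs standing in for /path/to/Video/one.mp4 and two.mp4.
    inputs = ["one.mp4", "two.mp4"]

    # NamedTemporaryFile, like Ruby's Tempfile, creates the (empty) file up front,
    # so ffmpeg needs -y or it will refuse to overwrite it.
    output = tempfile.NamedTemporaryFile(suffix=".mp4", delete=False)
    output.close()

    subprocess.run([
        "ffmpeg", "-y",
        "-i", inputs[0],
        "-i", inputs[1],
        "-filter_complex", "[0:v][1:v]concat=n=2:v=1:a=0[outv]",
        "-map", "[outv]",
        "-f", "mp4",
        output.name,
    ], check=True)  # check=True raises if ffmpeg exits with an error

    print("wrote", output.name)

    If the same thing is happening in the Ruby version, adding -y to the command string handed to Paperclip.run would be the first thing to try.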

    


  • Save movie about open AI gym

    30 November 2019, by Andres

    I wrote reinforcement learning code in Python with OpenAI Gym. After training, I want to save a movie of the reinforcement learning run, but on Windows this seems impossible.

    Is there another way to save the movie of the MountainCar-v0 reinforcement learning run on Windows?

    if __name__ == "__main__":
       environment = gym.make("MountainCar-v0")
       agent = QLearner(environment)
       learned_policy = train(agent, environment)
       monitor_path = "./movie_output"
       environment = gym.wrappers.Monitor(environment, monitor_path, force=True)

    This is the error message:

    DependencyNotInstalled : Found neither the ffmpeg nor avconv executables. On OS X, you can install ffmpeg via brew install ffmpeg. On most Ubuntu variants, sudo apt-get install ffmpeg should do it. On Ubuntu 14.04, however, you’ll need to install avconv with sudo apt-get install libav-tools.
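
    The error just means that gym’s video recorder could not find an ffmpeg (or avconv) executable on the PATH; ffmpeg builds for Windows exist and work once they are discoverable. Below is a minimal sketch under that assumption, using the same older gym.wrappers.Monitor API as in the question (C:\ffmpeg\bin is a placeholder for wherever ffmpeg.exe was unpacked); note that the wrapper also has to be applied before the episodes you want recorded are run:

    import os
    import shutil

    import gym

    # Placeholder path to a Windows ffmpeg build; the Monitor wrapper looks the
    # executable up on PATH, so append the directory if it is not already found.
    FFMPEG_DIR = r"C:\ffmpeg\bin"
    if shutil.which("ffmpeg") is None and os.path.isdir(FFMPEG_DIR):
        os.environ["PATH"] += os.pathsep + FFMPEG_DIR

    environment = gym.make("MountainCar-v0")

    # Wrap *before* training/evaluation, otherwise no frames are recorded.
    environment = gym.wrappers.Monitor(environment, "./movie_output", force=True)

    # ... then run train(agent, environment) as in the question; recorded episodes
    # are written to ./movie_output as .mp4 files plus .json metadata.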