Other articles (59)

  • Contribute to its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    Translation is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list to ask for more information.
    Currently MediaSPIP is only available in French and (...)

  • Permissions overridden by plugins

    27 April 2010, by

    Mediaspip core
    autoriser_auteur_modifier() so that visitors are able to edit their own information on the authors page

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MediaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out

On other sites (8062)

  • How to resize an mp4 video and reduce frame rate while keeping quality?

    1 December 2019, by Jules

    I'm trying to resize (keeping quality) and reduce the frame rate to 30; I've seen various commands, but I'm having difficulty.

    This seems to resize nicely:

    ffmpeg -i final-video.mp4 -aspect 886:1920 -c copy final-resized.mp4

    I've also seen -r 30 and -filter:v fps=fps=30

    But neither seems to work in conjunction with the resize command.

    I've seen posts like this:
    Re-sampling H264 video to reduce frame rate while maintaining high image quality
    But this takes a long time.
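
    One suggestion that is not from the original thread: -c copy copies the compressed stream untouched, so it cannot be combined with scaling or fps filters, and -aspect only rewrites the display-aspect-ratio metadata rather than resizing. Doing both operations requires a re-encode, roughly along these lines (the crf and preset values are illustrative):

    ffmpeg -i final-video.mp4 -vf "scale=886:1920,fps=30" -c:v libx264 -crf 18 -preset slow -c:a copy final-resized.mp4

    Lower -crf values mean higher quality and larger files; the audio is still stream-copied, since only the video is filtered.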

  • Stream h264 to javafx possibly using javacv/ffmpeg

    4 October 2018, by cagney

    I'm really stuck on getting a video stream to play in a JavaFX project.

    — Short version:

    I'm streaming h264/avcc-flavor video from an Android phone to a desktop computer. However, JavaFX doesn't have an easy solution for displaying the stream, so I'm attempting to use javacv/ffmpeg to make this work. However, I am getting errors from ffmpeg.

    1) Is there a better way to display streaming video in JavaFX?

    2) Do you have a sample project or good tutorial for javacv's FFmpegFrameGrabber?

    3) I think I may be missing some small detail in my code, but I'm not sure what it would be.

    — Longer version:

    1) On the Android end I'm getting video using MediaRecorder. In order to get the SPS/PPS info, I record and save a small movie to the device and then parse the SPS and PPS data.

    2) Next, on the Android side, I split up the NAL units to meet the MTU requirement and send them over a UDP connection to my desktop.

    3) On my desktop I reassemble the NAL units (or discard them if they lose data) and feed them to an input stream that I passed to the FFmpegFrameGrabber constructor (a sketch of this step follows).
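
    A sketch of that last step, under assumptions that are not in the original post (receiveNalu() is a hypothetical stand-in for the UDP reassembly): each reassembled NAL unit gets an Annex B start code, the SPS and PPS are written before the first slice, and the read end of the pipe is what gets handed to the grabber constructor.

    import java.io.IOException;
    import java.io.PipedInputStream;
    import java.io.PipedOutputStream;

    public class NaluPipe {

        private static final byte[] START_CODE = {0x00, 0x00, 0x00, 0x01};

        // Returns the InputStream to pass to the FFmpegFrameGrabber constructor.
        public static PipedInputStream open(byte[] sps, byte[] pps) throws IOException {
            PipedInputStream in = new PipedInputStream(1 << 16); // 64 KiB pipe buffer
            PipedOutputStream out = new PipedOutputStream(in);

            new Thread(() -> {
                try {
                    // The decoder needs the SPS and PPS before the first IDR slice.
                    writeAnnexB(out, sps);
                    writeAnnexB(out, pps);
                    while (!Thread.interrupted()) {
                        writeAnnexB(out, receiveNalu()); // hypothetical UDP reassembly
                    }
                } catch (IOException ignored) {
                    // reader side closed; stop feeding
                }
            }, "nalu-writer").start();

            return in;
        }

        private static void writeAnnexB(PipedOutputStream out, byte[] nalu) throws IOException {
            out.write(START_CODE);
            out.write(nalu);
            out.flush();
        }

        private static byte[] receiveNalu() {
            throw new UnsupportedOperationException("stub: reassemble NAL units from UDP here");
        }
    }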

    — The code and logs:

    The errors are long and numerous depending on the flavor I feed it. Here are two separate examples, which are usually repeated at great length:

    [h264 @ 0000020225907a40] non-existing PPS 0 referenced
    [h264 @ 0000020225907a40] decode_slice_header error
    [h264 @ 0000020225907a40] no frame!

    [h264 @ 00000163d8637a40] illegal aspect ratio
    [h264 @ 00000163d8637a40] pps_id 3412 out of range
    [AVBSFContext @ 00000163e28a0e00] Invalid NAL unit 0, skipping.

    !! One big caveat that I am aware of: I have not implemented the timestamps, which I created on the Android device, when feeding ffmpeg. I think it should still show distorted images without them, though.

    Because I have spent all day guessing and trying, I have several "flavors" of data I have pushed through. I am only showing the first section of each NAL unit, which, if correct, should at least show a garbage image as long as my SPS and PPS are right.

    sps: 67 80 80 1E E9 01 68 22 FD C0 36 85 09 A8
    pps: 68 06 06 E2

    Below is Annex B style.
    These were each prefixed with either 00 00 01 or 00 00 00 01.

    Debug transfer 65 B8 40 0B E5 B8 7B 80 5B 85
    Debug transfer 41 E2 20 7A 74 34 3B D6 BE FA
    Debug transfer 41 E4 40 2F 01 E0 0C 06 EE 91
    Debug transfer 41 E6 60 3E A1 20 5A 02 3C 6D
    Debug transfer 41 E8 80 13 B0 B9 82 C3 03 F4
    Debug transfer 41 EC C0 1B A3 0C 28 F1 B0 C8
    Debug transfer 41 EE E0 1F CE 07 30 EE 05 06
    Debug transfer 41 F1 00 08 ED 80 9C 20 09 73
    Debug transfer 41 F3 20 09 E9 00 86 60 21 C3
    VideoDecoderaddPacket type: 24
    Debug transfer 67 80 80 1E E9 01 68 22 FD C0
    Debug transfer 68 06 06 E2
    Debug transfer 65 B8 20 00 9F 80 78 00 12 8A
    Debug transfer 41 E2 20 09 F0 1E 40 7B 0C E0
    Debug transfer 41 E4 40 09 F0 29 30 D6 00 AE
    Debug transfer 41 E6 60 09 F1 48 31 80 99 40
    [h264 @ 000001c771617a40] non-existing PPS 0 referenced

    Here I tried AVCC style. You can see the first line is the combination of the SPS and PPS, followed by an IDR slice and then repeated non-IDR slices.

    Debug transfer 18 00 0E 67 80 80 1E E9 01 68
    Debug transfer 00 02 4A 8F 65 B8 20 00 9F C5
    Debug transfer 00 02 2F DA 41 E2 20 09 E8 0F
    Debug transfer 00 02 2C 34 41 E4 40 09 F4 20
    Debug transfer 00 02 4D 92 41 E6 60 09 FC 2B
    Debug transfer 00 02 47 02 41 E8 80 09 F0 72
    Debug transfer 00 02 52 50 41 EA A0 09 EC 0F
    Debug transfer 00 02 58 8A 41 EC C0 09 FC 6F
    Debug transfer 00 02 55 F9 41 EE E0 09 FC 6E
    Debug transfer 00 02 4D 79 41 F1 00 09 F0 3E
    Debug transfer 00 02 4D B6 41 F3 20 09 E8 64
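
    One observation that is not from the original post: with the grabber's format set to "h264", ffmpeg uses its raw H.264 demuxer, which expects Annex B start codes, so AVCC-style length-prefixed units would first need rewriting. A sketch, assuming 4-byte big-endian length prefixes:

    import java.io.ByteArrayOutputStream;
    import java.nio.ByteBuffer;

    public final class AvccToAnnexB {

        private static final byte[] START_CODE = {0x00, 0x00, 0x00, 0x01};

        // Rewrites a buffer of AVCC NAL units (each preceded by a 4-byte
        // big-endian length) as an Annex B byte stream (start-code prefixed).
        public static byte[] convert(byte[] avcc) {
            ByteBuffer in = ByteBuffer.wrap(avcc); // ByteBuffer is big-endian by default
            ByteArrayOutputStream out = new ByteArrayOutputStream(avcc.length + 16);
            while (in.remaining() > 4) {
                int length = in.getInt();
                if (length <= 0 || length > in.remaining()) {
                    break; // malformed or truncated input; stop rather than garble
                }
                byte[] nalu = new byte[length];
                in.get(nalu);
                out.write(START_CODE, 0, START_CODE.length);
                out.write(nalu, 0, length);
            }
            return out.toByteArray();
        }
    }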

    The following class is where I try to get javacv/ffmpeg to show the video. I don't think it's an ideal solution, and I am researching CanvasFrame as a replacement for the ImageView.

    import java.awt.image.BufferedImage;
    import java.io.InputStream;

    import javafx.application.Platform;
    import javafx.embed.swing.SwingFXUtils;
    import javafx.scene.image.ImageView;

    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.Frame;
    import org.bytedeco.javacv.FrameGrabber;
    import org.bytedeco.javacv.Java2DFrameConverter;

    public class ImageDecoder {

        private static final String TAG = "ImageDecoder ";

        private ImageDecoder() {
        }

        public static void streamImageToImageView(
                final ImageView view,
                final InputStream inputStream,
                final String format,
                final int frameRate,
                final int bitrate,
                final String preset,
                final int numBuffers
        ) {
            System.out.println("Image Decoder Starting...");

            try (final FrameGrabber grabber = new FFmpegFrameGrabber(inputStream)) {

                final Java2DFrameConverter converter = new Java2DFrameConverter();

                // Note: the original code called setFrameNumber(frameRate), which
                // sets the starting frame index, not the rate.
                grabber.setFrameRate(frameRate);
                grabber.setFormat(format); // e.g. "h264" for a raw Annex B stream
                grabber.setVideoBitrate(bitrate);
                grabber.setVideoOption("preset", preset);
                grabber.setNumBuffers(numBuffers);

                System.out.println("Image Decoder waiting on grabber.start...");
                grabber.start(); // this call blocks until ffmpeg has probed the stream

                System.out.println("Image Decoder looping -- hit stop");
                while (!Thread.interrupted()) {
                    final Frame frame = grabber.grab();
                    if (frame != null) {
                        final BufferedImage bufferedImage = converter.convert(frame);
                        if (bufferedImage != null) {
                            // Decoded images must be handed to the JavaFX thread.
                            Platform.runLater(() ->
                                    view.setImage(SwingFXUtils.toFXImage(bufferedImage, null)));
                        } else {
                            System.out.println("no buf im");
                        }
                    } else {
                        System.out.println("no fr");
                        Thread.currentThread().interrupt();
                    }
                }

            } catch (Exception e) {
                System.out.print(TAG + e);
            }
        }
    }
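
    A hypothetical call site, tying this class to the piped stream sketched earlier (every parameter value is illustrative, and view, sps, and pps come from the steps above):

    ImageDecoder.streamImageToImageView(view, NaluPipe.open(sps, pps), "h264", 30, 500_000, "ultrafast", 4);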

    Any help is greatly appreciated.

  • Visualizing Call Graphs Using Gephi

    1 September 2014, by Multimedia Mike — General

    When I was at university studying computer science, I took a basic chemistry course. During an accompanying lab, the teaching assistant chatted me up and asked about my major. He then said, “Computer science? Well, that’s just typing stuff, right?”

    My impulsive retort: “Sure, and chemistry is just about mixing together liquids and coming up with different colored liquids, as seen on the cover of my high school chemistry textbook, right?”


    Chemistry fun

    In fact, pure computer science has precious little to do with typing (as is joked in CS circles, computer science is about computers in the same way that astronomy is about telescopes). However, people who study computer science often pursue careers as programmers, or to put it in fancier professional language, software engineers.

    So, what’s a software engineer’s job? Isn’t it just typing? That’s where I’ve been going with this overly long setup. After thinking about it for long enough, I like to say that a software engineer’s trade is managing complexity.

    A few years ago, I discovered Gephi, an open source tool for graph and data visualization. It looked neat but I didn’t have much use for it at the time. Recently, however, I was trying to get a better handle on a large codebase. I.e., I was trying to manage the project’s complexity. And then I thought of Gephi again.

    Prior Work
    One way to get a grip on a large C codebase is to instrument it for profiling and extract details from the profiler. On Linux systems, this means compiling and linking the code using the -pg flag. After running the executable, there will be a gmon.out file which is post-processed using the gprof command.
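
    Schematically, with placeholder file names:

    gcc -pg -o myprog main.c      # compile and link with profiling instrumentation
    ./myprog                      # run normally; writes gmon.out on exit
    gprof ./myprog gmon.out > profile.txt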

    GNU software development tools have a reputation for being rather powerful and flexible, but also extremely raw. This first hit home when I was learning how to use the GNU tool for code coverage — gcov — and the way it outputs very raw data that you need to massage with other tools in order to get really useful intelligence.

    And so it is with gprof output. The output gives you a list of functions sorted by the amount of processing time spent in each. Then it gives you a flattened call tree. This is arranged as “during the profiled executions, function c was called by functions a and b and called functions d, e, and f; function d was called by function c and called functions g and h”.
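
    In gprof’s call graph output, that example entry for function c would look schematically like this (all times and counts are made-up placeholders):

    index % time    self  children    called     name
                    0.02    0.08      120/200        a [1]
                    0.01    0.05       80/200        b [2]
    [3]     40.0    0.03    0.13      200         c [3]
                    0.04    0.02      150/150        d [4]
                    0.03    0.01       90/90         e [5]
                    0.02    0.01       60/60         f [6]
    -----------------------------------------------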

    How can this call tree data be represented in a more instructive manner that is easier to navigate? My first impulse (and I don’t think I’m alone in this) is to convert the gprof call tree into a representation suitable for interpretation by Graphviz. Unfortunately, doing so tends to generate some enormous and unwieldy static images.

    Feeding gprof Data To Gephi
    I learned of Gephi a few years ago and recalled it when I developed an interest in gaining better perspective on a large base of alien C code. To understand what this codebase is doing for a particular use case, instrument it with gprof, gather execution data, and then study the code paths.

    How could I feed the gprof data into Gephi? Gephi supports numerous graphing formats, including an XML-based format named GEXF.

    Thus, the challenge becomes converting gprof output to GEXF.

    Which I did.
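
    For illustration, a minimal GEXF file of the kind such a converter would emit, here with a single edge borrowed from the demonstration below (the ids are arbitrary):

    <?xml version="1.0" encoding="UTF-8"?>
    <gexf xmlns="http://www.gexf.net/1.2draft" version="1.2">
      <graph defaultedgetype="directed">
        <nodes>
          <node id="0" label="decode_coeffs_b"/>
          <node id="1" label="iwht_iwht_4x4_add_c"/>
        </nodes>
        <edges>
          <!-- weight = number of calls observed during profiling -->
          <edge id="0" source="0" target="1" weight="18774"/>
        </edges>
      </graph>
    </gexf>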

    Demonstration
    I have been absent from FFmpeg development for a long time, which is a pity because a lot of interesting development has occurred over the last 2-3 years after a troubling period of stagnation. I know that 2 big video codec developments have been HEVC (next in the line of MPEG codecs) and VP9 (heir to VP8’s throne). FFmpeg implements them both now.

    I decided I wanted to study the code flow of VP9. So I got the latest FFmpeg code from git and built it using the options "--extra-cflags=-pg --extra-ldflags=-pg". Annoyingly, I also needed to specify "--disable-asm" because gcc complains of some register allocation snafus when compiling inline ASM in profiling mode (and this is on x86_64). No matter; ASM isn’t necessary for understanding overall code flow.
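
    The full configure invocation was therefore along these lines:

    ./configure --extra-cflags=-pg --extra-ldflags=-pg --disable-asm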

    After compiling, the binary ‘ffmpeg_g’ will have symbols and be instrumented for profiling. I grabbed a sample from this VP9 test vector set and went to work.

    ./ffmpeg_g -i vp90-2-00-quantizer-00.webm -f null /dev/null
    gprof ./ffmpeg_g > vp9decode.txt
    convert-gprof-to-gexf.py vp9decode.txt > /bigdisk/vp9decode.gexf
    

    Gephi loads vp9decode.gexf with no problem. Using Gephi, however, can be a bit challenging if one is not versed in any data exploration jargon. I recommend this Gephi getting-started guide in slide deck form. Here’s what the default graph looks like:


    gprof-ffmpeg-gephi-1

    Not very pretty or helpful. BTW, that beefy arrow running from mid-top to lower-right is the call from decode_coeffs_b -> iwht_iwht_4x4_add_c. There were 18774 calls from the former to the latter in this execution. Right now, the edge thicknesses correlate to the number of calls between the nodes, which I’m not sure is the best representation.

    Following the tutorial slide deck, I at least learned how to enable the node labels (function symbols in this case) and apply a layout algorithm. The tutorial shows the force atlas layout. Here’s what the node neighborhood looks like for probing the file type:


    gprof-ffmpeg-gephi-2

    Okay, so that’s not especially surprising: avprobe_input_format3 calls all of the *_probe functions in order to automatically determine the input type. Let’s find that decode_coeffs_b function and see what its neighborhood looks like:


    gprof-ffmpeg-gephi-3

    That’s not very useful. Perhaps another algorithm might help. I select the Fruchterman–Reingold algorithm instead and get a slightly more coherent representation of the decoding node neighborhood:


    gprof-ffmpeg-gephi-4

    Further Work
    Obviously, I’m just getting started with this data exploration topic. One thing I would really appreciate in such a tool is the ability to interactively travel the graph, since that’s what I’m really hoping to get out of this experiment: watching the code flows.

    Perhaps someone else can find better use cases for visualizing call graph data. Thus, I have published the source code for this tool on GitHub.