Advanced search

Media (0)

Keyword: - Tags - / interaction

No media matching your criteria is available on the site.

Other articles (61)

  • List of compatible distributions

    26 April 2011, by

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    Distribution name   Version name           Version number
    Debian              Squeeze                6.x.x
    Debian              Wheezy                 7.x.x
    Debian              Jessie                 8.x.x
    Ubuntu              The Precise Pangolin   12.04 LTS
    Ubuntu              The Trusty Tahr        14.04
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Publishing on MédiaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, if your installed Médiaspip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

  • Managing creation and editing rights for objects

    8 February 2011, by

    By default, many features are restricted to administrators but can each be configured independently to change the minimum status required to use them, notably: writing content on the site, adjustable in the form-template management; adding notes to articles; adding captions and annotations to images;

On other sites (8498)

  • matplotlib ArtistAnimation returns a blank video

    28 March 2017, by Mpaull

    I’m trying to produce an animation of a networkx graph changing over time. I’m using the networkx_draw utilities to create matplotlib figures of the graph, and matplotlib’s ArtistAnimation module to create an animation from the artists networkx produces. I’ve made a minimal reproduction of what I’m doing here:

    import numpy as np
    import networkx as nx
    import matplotlib.animation as animation
    import matplotlib.pyplot as plt

    # Instantiate the graph model
    G = nx.Graph()
    G.add_edge(1, 2)

    # Keep track of highest node ID
    G.maxNode = 2

    fig = plt.figure()
    nx.draw(G)
    ims = []

    for timeStep in xrange(10):

       G.add_edge(G.maxNode,G.maxNode+1)
       G.maxNode += 1

       pos = nx.drawing.spring_layout(G)
       nodes = nx.drawing.draw_networkx_nodes(G, pos)
       lines = nx.drawing.draw_networkx_edges(G, pos)

       ims.append((nodes,lines,))
       plt.pause(.2)
       plt.cla()

    im_ani = animation.ArtistAnimation(fig, ims, interval=200, repeat_delay=3000, blit=True)
    im_ani.save('im.mp4', metadata={'artist':'Guido'})

    The process works fine while displaying the figures live, and it produces exactly the animation I want. It even produces a looping animation in a figure at the end of the script, again what I want, which would suggest that the animation process worked. However, when I open the "im.mp4" file saved to disk, it is a blank white image which runs for the expected period of time, never showing any of the graph images that were shown live.

    I’m using networkx version 1.11, and matplotlib version 2.0. I’m using ffmpeg for the animation, and am running on a Mac, OSX 10.12.3.

    What am I doing incorrectly?
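
    One thing that may be worth checking (an assumption, not a verified fix): plt.cla() removes each frame’s artists from the axes, so by the time the animation is written to disk there may be nothing left for the writer to draw. Below is a minimal sketch (variable names differ slightly from the code above) that keeps every frame’s artists attached to one Axes and lets ArtistAnimation toggle their visibility:

    import networkx as nx
    import matplotlib.pyplot as plt
    import matplotlib.animation as animation

    G = nx.Graph()
    G.add_edge(1, 2)
    max_node = 2

    fig, ax = plt.subplots()
    frames = []

    for _ in range(10):
        G.add_edge(max_node, max_node + 1)
        max_node += 1

        pos = nx.spring_layout(G)
        # draw onto the same Axes every time and keep the returned artists;
        # ArtistAnimation shows only one frame's artists at a time, so plt.cla() is not needed
        nodes = nx.draw_networkx_nodes(G, pos, ax=ax)
        edges = nx.draw_networkx_edges(G, pos, ax=ax)
        frames.append([nodes, edges])

    ani = animation.ArtistAnimation(fig, frames, interval=200, repeat_delay=3000, blit=True)
    ani.save('im.mp4', metadata={'artist': 'Guido'})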

  • Files created with "ffmpeg hevc_nvenc" do not play on TV. (with video codec SDK 9.1 of nvidia)

    29 January 2020, by Dashhh

    Problem

    • Files created with hevc_nvenc do not play on TV (Samsung smart TV, model unknown).
      My ffmpeg build configuration is below.

    FFmpeg build conf

    $ ffmpeg -buildconf
       --enable-cuda
       --enable-cuvid
       --enable-nvenc
       --enable-nonfree
       --enable-libnpp
       --extra-cflags=-I/path/cuda/include
       --extra-ldflags=-L/path/cuda/lib64
       --prefix=/prefix/ffmpeg_build
       --pkg-config-flags=--static
       --extra-libs='-lpthread -lm'
       --extra-cflags=-I/prefix/ffmpeg_build/include
       --extra-ldflags=-L/prefix/ffmpeg_build/lib
       --enable-gpl
       --enable-nonfree
       --enable-version3
       --disable-stripping
       --enable-avisynth
       --enable-libass
       --enable-libfontconfig
       --enable-libfreetype
       --enable-libfribidi
       --enable-libgme
       --enable-libgsm
       --enable-librubberband
       --enable-libshine
       --enable-libsnappy
       --enable-libssh
       --enable-libtwolame
       --enable-libwavpack
       --enable-libzvbi
       --enable-openal
       --enable-sdl2
       --enable-libdrm
       --enable-frei0r
       --enable-ladspa
       --enable-libpulse
       --enable-libsoxr
       --enable-libspeex
       --enable-avfilter
       --enable-postproc
       --enable-pthreads
       --enable-libfdk-aac
       --enable-libmp3lame
       --enable-libopus
       --enable-libtheora
       --enable-libvorbis
       --enable-libvpx
       --enable-libx264
       --enable-libx265
       --disable-ffplay
       --enable-libopenjpeg
       --enable-libwebp
       --enable-libxvid
       --enable-libvidstab
       --enable-libopenh264
       --enable-zlib
       --enable-openssl

    ffmpeg Command

    • The FFmpeg encoding command:
    ffmpeg -ss 1800 -vsync 0 -hwaccel cuvid -hwaccel_device 0 \
    -c:v h264_cuvid -i /data/input.mp4 -t 10 \
    -filter_complex "\
    [0:v]hwdownload,format=nv12,format=yuv420p,\
    scale=iw*2:ih*2" -gpu 0 -c:v hevc_nvenc -pix_fmt yuv444p16le -preset slow -rc cbr_hq -b:v 5000k -maxrate 7000k -bufsize 1000k -acodec aac -ac 2 -dts_delta_threshold 1000 -ab 128k -flags global_header ./makevideo_nvenc_hevc.mp4

    Full log for this command: check this full log

    The reason for adding the "-color_" options in the command is as follows.

    • I want HDR video, so I create a bt2020 + smpte2084 video using the NVIDIA hardware encoder. (I’m studying how to make HDR videos; I’m not sure if this is right.)

    How can I make a video using ffmpeg hevc_nvenc and have it play on TV?


    Things I’ve done

    Here’s what I’ve researched about why it doesn’t work: the header information is not properly included in the resulting video file, so I used a program called nvhsp to add SEI and VUI information inside the video. See below for the commands and logs used.

    nvhsp is an open-source tool for writing VUI and SEI bitstrings into raw video. nvhsp link

    # make rawvideo for nvhsp
    $  ffmpeg -vsync 0 -hwaccel cuvid -hwaccel_device 0 -c:v h264_cuvid \
    -i /data/input.mp4 -t 10 \
    -filter_complex "[0:v]hwdownload,format=nv12,\
    format=yuv420p,scale=iw*2:ih*2" \
    -gpu 0 -c:v hevc_nvenc -f rawvideo output_for_nvhsp.265

    # use nvhsp
    $ python nvhsp.py ./output_for_nvhsp.265 -colorprim bt2020 \
    -transfer smpte-st-2084 -colormatrix bt2020nc \
    -maxcll "1000,300" -videoformat ntsc -full_range tv \
    -masterdisplay "G (13250,34500) B (7500,3000 ) R (34000,16000) WP (15635,16450) L (10000000,1)" \
    ./after_nvhsp_proc_output.265

    Parsing the infile:

    ==========================

    Prepending SEI data
    Starting new SEI NALu ...
    SEI message with MaxCLL = 1000 and MaxFall = 300 created in SEI NAL
    SEI message Mastering Display Data G (13250,34500) B (7500,3000) R (34000,16000) WP (15635,16450) L (10000000,1) created in SEI NAL
    Looking for SPS ......... [232, 22703552]
    SPS_Nals_addresses [232, 22703552]
    SPS NAL Size 488
    Starting reading SPS NAL contents
    Reading of SPS NAL finished. Read 448 of SPS NALu data.

    Making modified SPS NALu ...
    Made modified SPS NALu-OK
    New SEI prepended
    Writing new stream ...
    Progress: 100%
    =====================
    Done!

    File nvhsp_after_output.mp4 created.

    # after process
    $ ffmpeg -y -f rawvideo -r 25 -s 3840x2160 -pix_fmt yuv444p16le -color_primaries bt2020 -color_trc smpte2084  -colorspace bt2020nc -color_range tv -i ./1/after_nvhsp_proc_output.265 -vcodec copy  ./1/result.mp4 -hide_banner

    Truncating packet of size 49766400 to 3260044
    [rawvideo @ 0x40a6400] Estimating duration from bitrate, this may be inaccurate
    Input #0, rawvideo, from './1/nvhsp_after_output.265':
     Duration: N/A, start: 0.000000, bitrate: 9953280 kb/s
       Stream #0:0: Video: rawvideo (Y3[0][16] / 0x10003359), yuv444p16le(tv, bt2020nc/bt2020/smpte2084), 3840x2160, 9953280 kb/s, 25 tbr, 25 tbn, 25 tbc
    [mp4 @ 0x40b0440] Could not find tag for codec rawvideo in stream #0, codec not currently supported in container
    Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
    Stream mapping:
     Stream #0:0 -> #0:0 (copy)
       Last message repeated 1 times
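
    One detail that may explain the failing wrap step (an assumption based on the commands and log above, not a verified fix): the nvhsp output is a raw Annex-B HEVC elementary stream, so reading it with -f rawvideo makes ffmpeg treat it as uncompressed frames, which cannot be stream-copied into MP4. Reading it with the hevc demuxer should let -c:v copy work, along these lines (file name and the 25 fps rate follow the commands above):

    $ ffmpeg -y -f hevc -r 25 -i ./1/after_nvhsp_proc_output.265 \
    -c:v copy ./1/result.mp4

    The colour flags should not be needed on this command line if nvhsp already wrote them into the stream's VUI.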

    Goal

    • I want metadata to be generated normally when encoding a video through hevc_nvenc.

    • I want to create a video through hevc_nvenc and play HDR video on a smart TV with 10-bit color depth support.


    Additional

    • Is it normal for ffmpeg hevc_nvenc not to generate metadata in the resulting video file, or is it a bug?

    • Please refer to the image below. (*’알 수 없음’ meaning ’unknown’)

      • If you need more detailed file info, check this Gist Link (from ffprobe):
        hevc_nvenc metadata
    • However, if you encode a file with libx265, the attribute information is written correctly, as shown below.

      • If you need more detailed file info, check this Gist Link:
        libx265 metadata

    However, when using hevc_nvenc, all information is missing.

    • I used the options -show_streams -show_programs -show_format -show_data -of json -show_frames -show_log 56 with ffprobe.
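
    A possible shortcut to check (a sketch under the assumption that only the VUI colour signalling is missing, not a confirmed fix for the TV): ffmpeg’s hevc_metadata bitstream filter can write the colour fields into the hevc_nvenc stream while stream-copying, using the H.273 code points for BT.2020 primaries (9), SMPTE ST 2084 transfer (16) and BT.2020 non-constant-luminance matrix (9). The output file name here is a placeholder:

    $ ffmpeg -i ./makevideo_nvenc_hevc.mp4 -c copy \
    -bsf:v hevc_metadata=colour_primaries=9:transfer_characteristics=16:matrix_coefficients=9 \
    ./makevideo_nvenc_hevc_tagged.mp4

    Note that this only sets the VUI colour description; static HDR metadata such as MaxCLL and the mastering display values would still need a separate step (for example nvhsp, as above).
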
  • Convert video into mp3 using ffmpeg in Angular and Node.js and save the audio into the database

    2 September 2021, by Amir Shahzad

    This is the Node.js server-side code:

    


    const express = require('express');
const ffmpeg  = require('fluent-ffmpeg');
const fileUpload = require('express-fileupload');
const mongoose = require('mongoose');
const cors   = require('cors')
const app = express();
const Video = require('./models/video');
mongoose.connect('mongodb://localhost:27017/YoutubeApp', {
    useNewUrlParser: true,
    useUnifiedTopology: true,
});
const db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error'));
db.once('open', () => {
   console.log('Data Base Connected Successfully!');
});

app.use(fileUpload({
   useTempFiles: true,
   tempFileDir: 'temp/'
}));
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
app.use(cors({ origin: 'http://localhost:4200' }));


ffmpeg.setFfmpegPath('/usr/bin/ffmpeg');

app.post('/mp4tomp3', (req, res) => {
const data = new Video({
     mp4: req.body.mp4
});
res.contentType('video/avi');
res.attachment('output.mp3');
req.files.mp4val.mv("temp/" + req.body, function(err) {
    if(err){
        res.sendStatus(500).send(err)
    }else{
        console.log("Fiel Uploaded Successfully.!");
    }
});
// Converting MP4 to MP3
ffmpeg('temp/' + req.files.mp4val.mp4)
.toFormat('mp3')
.on('end', function() {
    console.log('Done');
})
.on('error', function(err){
    console.log('An error occurred: ' + err.message)
})
 .pipe(res, {end: true})
 })

 app.listen(3000, () => {
    console.log('Server started on port 3000')
 })


    


    Here I want to get input from the user with an input tag, then convert the video into audio and save it into the database, but I do not know how I can do this; a rough sketch of one approach appears at the end of this question.

    


    This is the video model file:

    


    const mongoose = require("mongoose");
const Schema = mongoose.Schema;

const videoSchema = new Schema({
    mp4: String,
});
module.exports = mongoose.model("Videos", videoSchema);


    


    This is the TypeScript code on the Angular client side, where I want to create the logic to convert video into audio using the server-side code:

    


    import { ThrowStmt } from '@angular/compiler';
import { Component, OnInit } from '@angular/core';
import { FormBuilder, FormGroup, Validators } from '@angular/forms';
import { VideoConversionService } from 'src/services/video-conversion.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {

   submitted =false;
   form! : FormGroup
   data:any

   constructor(private formBuilder: FormBuilder,
       private videoService: VideoConversionService){}

   createForm(){
    this.form = this.formBuilder.group({
    mp4: ['', Validators.required],
  });
  }
   ngOnInit(): void {
   this.createForm();

  }


  convertVideo(){
    this.submitted = true
    this.videoService.conversion(this.form.value).subscribe(res => {
    this.data = res;
 })
 }

 }


    


    I do not know how to create the logic to do that (convert video into audio using the Angular framework).

    


    This is the app.component.html file, where I want to get the video from the user using an input field:

    


    <div class="container">&#xA;   <h1>Video Proccessing App</h1>&#xA;   <form>&#xA;     <input type="file" formcontrolname="mp4" />&#xA;     <input type="submit" value="Convert" />&#xA;  </form>&#xA;</div>&#xA;



    But my code is not working and the video is not being converted into audio.
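
    One likely gap on the client side (an assumption; the rest of the component is unchanged): a reactive form control does not pick up the contents of a file input, so the selected file has to be sent as FormData. A sketch of a handler that could be bound to the file input with (change)="onFileSelected($event)"; the field name mp4val matches what the server code above reads from req.files:

    onFileSelected(event: Event): void {
      const file = (event.target as HTMLInputElement).files?.[0];
      if (!file) { return; }
      // send the raw file rather than the form value; the field name must match req.files on the server
      const formData = new FormData();
      formData.append('mp4val', file);
      this.videoService.conversion(formData).subscribe(res => this.data = res);
    }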


    This is my video service file, where I call the Node.js API to perform the task:


    import { Injectable } from '@angular/core';
    import { HttpClient } from '@angular/common/http';

    @Injectable({
       providedIn: 'root'
    })
    export class VideoConversionService {

      constructor(private httpClient: HttpClient) { }

      conversion(data: any){
         return this.httpClient.post('http://localhost:3000/mp4tomp3', data)
      }
    }


    Can anyone please solve my problem? Thanks in advance.
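
    A minimal sketch of how the server route could be restructured (assumptions: it reuses the requires from the server code above, the upload field is named mp4val, the converted file path is stored in the existing mp4 field of the Video model, and the conversion only starts once the upload has finished moving):

    app.post('/mp4tomp3', (req, res) => {
        const upload = req.files.mp4val;           // file sent by the Angular FormData request
        const inputPath = 'temp/' + upload.name;
        const outputPath = inputPath + '.mp3';

        upload.mv(inputPath, (err) => {
            if (err) { return res.status(500).send(err); }

            ffmpeg(inputPath)
                .noVideo()                         // drop the video stream, keep only the audio
                .toFormat('mp3')
                .on('error', (e) => res.status(500).send(e.message))
                .on('end', () => {
                    // store a reference to the converted file in MongoDB, then return the mp3
                    new Video({ mp4: outputPath }).save()
                        .then(() => res.download(outputPath))
                        .catch((dbErr) => res.status(500).send(dbErr));
                })
                .save(outputPath);                 // write the mp3 to disk before responding
        });
    });

    Whether the mp3 itself or only its path should end up in MongoDB depends on the requirements; the schema could also get a dedicated field for the audio path instead of reusing mp4.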
