Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (112)

  • Customizing by adding your logo, banner, or background image

    5 September 2013

    Some themes support three customization elements: adding a logo; adding a banner; adding a background image.

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects/individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

On other sites (14984)

  • Unable to split audio using easy_audio_trimmer

    27 July 2023, by Sana Wasim

    Can we use the easy_audio_trimmer package to split an audio file? I tried using FFmpeg, but it conflicts with the above package and does not work.

    


    I tried splitting using the functions below, but it gives an error at the FlutterFFmpeg() method and I can't find an alternative. Also, the call final durationResult = await flutterSound.duration(filePath); shows an error.

    


    Future<void> _splitAudio() async {
      setState(() {
        _progressVisibility = true;
      });

      // Get the application documents directory
      final appDocumentsDirectory = await getApplicationDocumentsDirectory();

      // Get the input audio file path
      final inputAudioPath = widget.file.path;

      // Get the output file names for the two parts
      final outputFileName1 = 'split_audio_part1.mp3';
      final outputFileName2 = 'split_audio_part2.mp3';

      // Get the output file paths for the two parts
      final outputPath1 = '${appDocumentsDirectory.path}/$outputFileName1';
      final outputPath2 = '${appDocumentsDirectory.path}/$outputFileName2';

      // Calculate the duration of the original audio
      final originalDuration = await _getAudioDuration(inputAudioPath);

      // Calculate the durations of the two parts
      final part1Duration = _startValue;
      final part2Duration = originalDuration - _endValue;

      // Construct the FFmpeg command to split the audio
      final ffmpeg = FlutterFFmpeg();
      final splitCommand =
          '-i $inputAudioPath -ss 0 -t $part1Duration -c copy $outputPath1 '
          '-ss $_endValue -t $part2Duration -c copy $outputPath2';

      try {
        // Execute the FFmpeg command to split the audio
        final int result = await ffmpeg.execute(splitCommand);

        if (result == 0) {
          setState(() {
            _progressVisibility = false;
          });
          debugPrint('Audio split successfully.');
        } else {
          setState(() {
            _progressVisibility = false;
          });
          debugPrint('Failed to split audio.');
        }
      } catch (error) {
        setState(() {
          _progressVisibility = false;
        });
        debugPrint('Error while splitting audio: $error');
      }
    }

    Future<int> _getAudioDuration(String filePath) async {
      final flutterSound = FlutterSound();
      final durationResult = await flutterSound.duration(filePath);
      return durationResult.inMilliseconds;
    }

    Dependencies

    path_provider: ^2.0.15
    ffmpeg_kit_flutter: ^5.1.0
    audioplayers: ^4.1.0
    flutter_sound: ^9.2.13
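
    Since the flutter_ffmpeg package (which provided FlutterFFmpeg()) has been retired in favour of ffmpeg_kit_flutter, and the pubspec above already lists ffmpeg_kit_flutter, one possible direction is to run the same split command through FFmpegKit and read the duration with FFprobeKit instead of flutter_sound. A minimal sketch under those assumptions; splitAudio and getAudioDurationSeconds are illustrative helper names, not package APIs, and the start/end values are assumed to be in seconds:

    import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
    import 'package:ffmpeg_kit_flutter/ffprobe_kit.dart';
    import 'package:ffmpeg_kit_flutter/return_code.dart';

    /// Reads the duration in seconds via ffprobe (no flutter_sound needed).
    Future<double> getAudioDurationSeconds(String filePath) async {
      final session = await FFprobeKit.getMediaInformation(filePath);
      // getDuration() returns the duration as a string of seconds, or null.
      final duration = session.getMediaInformation()?.getDuration();
      return double.tryParse(duration ?? '') ?? 0;
    }

    /// Splits the input into [0, startSeconds] and [endSeconds, end].
    /// Returns true when ffmpeg reports success.
    Future<bool> splitAudio(String inputPath, String outputPath1,
        String outputPath2, double startSeconds, double endSeconds) async {
      final total = await getAudioDurationSeconds(inputPath);
      // -c copy avoids re-encoding; -ss and -t are given in seconds.
      final command = '-i "$inputPath" '
          '-ss 0 -t $startSeconds -c copy "$outputPath1" '
          '-ss $endSeconds -t ${total - endSeconds} -c copy "$outputPath2"';
      final session = await FFmpegKit.execute(command);
      return ReturnCode.isSuccess(await session.getReturnCode());
    }

    Note that if the trimmer reports _startValue/_endValue in milliseconds, they would need dividing by 1000 before being passed as -ss/-t values.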

  • FFmpeg: canvas and crop work separately but result in black screen when combined

    25 January, by didi00

    I'm working on a video processing pipeline with FFmpeg, where I:

    • Create a black canvas using the color filter.
    • Crop a region from my video input.
    • Overlay the cropped region onto the black canvas.

    Both the canvas and the crop display correctly when tested individually. However, when I attempt to combine them (overlay the crop onto the canvas), the result is a black screen.

    What Works:

    Black Canvas Alone:

    ffmpeg -filter_complex "color=c=black:s=1920x1080[out]" -map "[out]" -f nut - | ffplay -

    This shows a plain black screen, as expected.

    Cropped Region Alone:

    ffmpeg -f v4l2 -input_format yuyv422 -framerate 60 -video_size 1920x1080 -i /dev/video0 \
      -vf "crop=1024:192:0:0" -f nut - | ffplay -

    This shows the cropped region of the video correctly.

    When I combine these steps to overlay the crop onto the black canvas, I get a black screen:

    ffmpeg -f v4l2 -input_format yuyv422 -framerate 60 -video_size 1920x1080 -i /dev/video0 \
      -filter_complex "color=c=black:s=1920x1080,format=yuv420p[background]; \
        [0:v]crop=1024:192:0:0,format=yuv420p[region0]; \
        [background][region0]overlay=x=0:y=0[out]" \
      -map "[out]" -f nut - | ffplay -

    Environment:

    • OS: Linux (Debian-based)
    • FFmpeg Version: [Insert version, e.g., 4.x or 5.x]
    • Capture Card Format: yuyv422

    Question:

    Why does the pipeline result in a black screen when combining the canvas and the crop, even though both work separately? Is this an issue with pixel format compatibility, or is there something I'm overlooking in the overlay filter setup?
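
    One likely culprit here is timestamps rather than pixel formats: the color source starts at t=0 at its own default 25 fps, while the v4l2 capture arrives with much larger wall-clock timestamps, so overlay keeps outputting the bare background while waiting for overlay frames that match in time. A hedged variant to try, resetting the camera branch's timestamps and matching the canvas rate (not verified against this exact capture setup):

    ffmpeg -f v4l2 -input_format yuyv422 -framerate 60 -video_size 1920x1080 -i /dev/video0 \
      -filter_complex "color=c=black:s=1920x1080:r=60,format=yuv420p[background]; \
        [0:v]setpts=PTS-STARTPTS,crop=1024:192:0:0,format=yuv420p[region0]; \
        [background][region0]overlay=x=0:y=0:shortest=1[out]" \
      -map "[out]" -f nut - | ffplay -

    Here setpts=PTS-STARTPTS makes the camera frames start at t=0 like the generated canvas, and shortest=1 ends the output with the camera stream instead of the infinite color source.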

  • dnn/vf_dnn_detect.c: add tensorflow output parse support

    6 May 2021, by Ting Fu
    dnn/vf_dnn_detect.c: add tensorflow output parse support
    

    The testing model is the official TensorFlow model from the GitHub repo; please refer to
    https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
    to download the detection model you need.
    For example, local testing was carried out with 'ssd_mobilenet_v2_coco_2018_03_29.tar.gz' and
    used one image of a dog from
    https://github.com/tensorflow/models/blob/master/research/object_detection/test_images/image1.jpg

    The testing command is:
    ./ffmpeg -i image1.jpg -vf dnn_detect=dnn_backend=tensorflow:input=image_tensor:output=\
    "num_detections&detection_scores&detection_classes&detection_boxes":model=ssd_mobilenet_v2_coco.pb,\
    showinfo -f null -

    We will see a result similar to the one below:
    [Parsed_showinfo_1 @ 0x33e65f0] side data - detection bounding boxes:
    [Parsed_showinfo_1 @ 0x33e65f0] source: ssd_mobilenet_v2_coco.pb
    [Parsed_showinfo_1 @ 0x33e65f0] index: 0, region: (382, 60) -> (1005, 593), label: 18, confidence: 9834/10000.
    [Parsed_showinfo_1 @ 0x33e65f0] index: 1, region: (12, 8) -> (328, 549), label: 18, confidence: 8555/10000.
    [Parsed_showinfo_1 @ 0x33e65f0] index: 2, region: (293, 7) -> (682, 458), label: 1, confidence: 8033/10000.
    [Parsed_showinfo_1 @ 0x33e65f0] index: 3, region: (342, 0) -> (690, 325), label: 1, confidence: 5878/10000.

    There are two boxes of dog with scores 98.34% & 85.55% and two boxes of person with scores 80.33% & 58.78%.

    Signed-off-by: Ting Fu <ting.fu@intel.com>
    Signed-off-by: Guo, Yejun <yejun.guo@intel.com>

    • [DH] libavfilter/vf_dnn_detect.c