
Media (1)
-
The Pirate Bay from Belgium
1 April 2013, by
Updated: April 2013
Language: French
Type: Image
Other articles (72)
-
Updating from version 0.1 to 0.2
24 June 2013, by
An explanation of the notable changes made when moving MediaSPIP from version 0.1 to 0.2. What's new?
Software dependencies: the latest versions of FFmpeg (>= v1.2.1) are now used; the dependencies for Smush are installed; MediaInfo and FFprobe are installed for metadata retrieval; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer (...)
-
Customizing by adding your logo, banner or background image
5 September 2013, by
Some themes support three customization elements: adding a logo; adding a banner; adding a background image.
-
Writing a news item
21 June 2013, by
Present changes to your MediaSPIP, or news about your projects, on your MediaSPIP via the news section.
In spipeo, the default MediaSPIP theme, news items are displayed at the bottom of the main page, below the editorials.
You can customize the news item creation form.
News item creation form: for a news-type document, the default fields are: publication date (customize the publication date) (...)
On other sites (6993)
-
Building a shared library from static libraries for ffmpeg 2.5.2
7 January 2015, by abijinx
I am currently building a shared library for ffmpeg as myffmpeg.so, using ffmpeg 0.8.6. I achieve this by combining the static libs of the individual ffmpeg modules with a few additional static libraries. The main makefile used is as follows:
LOCAL_PATH := $(call my-dir)
include $(LOCAL_PATH)/Android_Files.mk
include $(CLEAR_VARS)
LOCAL_MODULE:= libavcodec
LOCAL_SRC_FILES:= $(MY_AVCODEC_FILES)
include $(LOCAL_PATH)/Android_Common.mk
include $(BUILD_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE:= libavfilter
LOCAL_SRC_FILES:= $(MY_AVFILTER_FILES)
include $(LOCAL_PATH)/Android_Common.mk
include $(BUILD_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE:= libavformat
LOCAL_SRC_FILES:= $(MY_AVFORMAT_FILES)
include $(LOCAL_PATH)/Android_Common.mk
include $(BUILD_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE:= libavutil
LOCAL_SRC_FILES:= $(MY_AVUTIL_FILES)
include $(LOCAL_PATH)/Android_Common.mk
include $(BUILD_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE:= libswscale
LOCAL_SRC_FILES:= $(MY_SWSCALE_FILES)
include $(LOCAL_PATH)/Android_Common.mk
include $(BUILD_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE:= ffmpeg_myc
LOCAL_SRC_FILES:= ffmpeg.c cmdutils.c
include $(LOCAL_PATH)/Android_Common.mk
include $(BUILD_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_MODULE := myffmpeg
LOCAL_WHOLE_STATIC_LIBRARIES := libavcodec libavfilter libavformat libavutil libswscale ffmpeg_myc
LOCAL_LDFLAGS += -lz -lm -llog
ifeq ($(FF_ENABLE_AMR),yes)
LOCAL_WHOLE_STATIC_LIBRARIES += opencore-amrnb opencore-amrwb
endif
ifeq ($(FF_ENABLE_AAC),yes)
LOCAL_WHOLE_STATIC_LIBRARIES += vo-aacenc
endif
include $(BUILD_SHARED_LIBRARY)

The above makefile uses only a few selected source files to build each individual static lib, as listed in the FF_SOURCE_FILES parameter, then combines all of those with the additional libs to create the shared library myffmpeg.so.
Now I am trying to get a similar output with ffmpeg 2.5.2 for the 64-bit ARM architecture. This time I have built the latest ffmpeg using the following configure script, generating the static libraries of the different ffmpeg modules (libavcodec.a, libavfilter.a, etc.):
#!/bin/bash
ABI=aarch64-linux-android
NDK=
SYSROOT=$NDK/platforms/android-21/arch-arm64/
TOOLCHAIN=$NDK/toolchains/aarch64-linux-android-4.9/prebuilt/linux-x86_64
CPU=arm64
PREFIX=$(pwd)/android/$CPU
./configure \
--prefix=$PREFIX \
--cc=$TOOLCHAIN/bin/$ABI-gcc \
--enable-static \
--disable-doc \
--disable-ffmpeg \
--disable-ffplay \
--disable-ffprobe \
--disable-network \
--disable-ffserver \
--disable-devices \
--disable-avdevice \
--disable-swscale-alpha \
--disable-doc \
--disable-symver \
--disable-neon \
--enable-optimizations \
--cross-prefix=$TOOLCHAIN/bin/aarch64-linux-android- \
--target-os=linux \
--arch=arm64 \
--enable-cross-compile \
--sysroot=$SYSROOT \
--enable-libopencore-amrnb \
--enable-libopencore-amrwb \
--enable-libvo-aacenc \
$ADDITIONAL_CONFIGURE_FLAG
make clean
make
make install

Now I am trying to combine the generated static libraries, i.e.
- libavcodec.a
- libavfilter.a
- libavformat.a
- libavutil.a
- libswresample.a
- libswscale.a
with:
- ffmpeg_myc
- opencore-amrnb
- opencore-amrwb

and generate a myffmpeg.so shared library by running ndk-build.
I tried copying the generated ffmpeg static libraries from the ffmpeg/android folder into obj/local/arm64-v8a (the path where all the static libs were previously created for 0.8.6, according to the makefile). I also changed the makefile below, expecting it to pick up the already generated ffmpeg libraries. But the resulting shared library was far too small, and it had not included the generated ffmpeg libs.
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := myffmpeg
LOCAL_WHOLE_STATIC_LIBRARIES := libavcodec libavfilter libavformat libavutil libswscale ffmpeg_myc
LOCAL_LDFLAGS += -lz -lm -llog
ifeq ($(FF_ENABLE_AMR),yes)
LOCAL_WHOLE_STATIC_LIBRARIES += opencore-amrnb opencore-amrwb
endif
ifeq ($(FF_ENABLE_AAC),yes)
LOCAL_WHOLE_STATIC_LIBRARIES += vo-aacenc
endif
include $(BUILD_SHARED_LIBRARY)

I would like to know whether I am going in the right direction and only need some minor changes, or whether I should take an entirely different approach to achieve this. Any suggestions would be helpful.
Thanks in advance
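
A hedged note on the likely failure mode: names listed in LOCAL_WHOLE_STATIC_LIBRARIES must be modules that ndk-build knows how to find, so .a files copied by hand into obj/local/arm64-v8a are silently ignored rather than linked, which would explain the undersized .so. Archives built outside ndk-build are normally declared as prebuilt modules first. A minimal sketch, assuming the libraries live under the install prefix produced by the configure script above (the FFMPEG_LIB path is an assumption):

# Hypothetical path to the ffmpeg install prefix from the configure step
FFMPEG_LIB := $(LOCAL_PATH)/ffmpeg/android/arm64/lib

include $(CLEAR_VARS)
LOCAL_MODULE := libavcodec
LOCAL_SRC_FILES := $(FFMPEG_LIB)/libavcodec.a
include $(PREBUILT_STATIC_LIBRARY)

# ...repeat for libavfilter, libavformat, libavutil, libswresample and libswscale...

include $(CLEAR_VARS)
LOCAL_MODULE := myffmpeg
LOCAL_WHOLE_STATIC_LIBRARIES := libavcodec libavfilter libavformat libavutil libswresample libswscale
LOCAL_LDFLAGS += -lz -lm -llog
include $(BUILD_SHARED_LIBRARY)

With the archives declared via PREBUILT_STATIC_LIBRARY, ndk-build resolves the module names to the prebuilt .a files and folds them whole into myffmpeg.so.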
-
Output of ffmpeg comes out like the Yamborghini High music video
19 January, by chip
I do this procedure when I edit a long video:

- segment it into 3-second videos, so I end up with a lot of short videos
- I randomly pick videos and put them in a list
- then I join these short videos together using concat
- now I have a long video again; the next thing I do is segment it into 4-minute videos

After processing, the videos look messed up. I don't know how to describe it, but it looks like the music video for Yamborghini High.


For some reason, this only happens with videos I capture at night. I run the same process on daytime footage with no problem.


Is there a problem with slicing, merging, and then slicing again?


Or is it an issue that I run multiple ffmpeg scripts at the same time?


Here's the script:


for FILE in *.mp4; do ffmpeg -i "$FILE" -vcodec copy -f segment -segment_time 00:10 -reset_timestamps 1 "part_$( date '+%F%H%M%S' )_%02d.mp4"; rm -f "$FILE"; done; echo 'slicing completed.' && \
for f in part_*[13579].mp4; do echo "file '$f'" >> mylist.txt; done
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4 && echo 'done merging.' && \
ffmpeg -i output.mp4 -threads 7 -vcodec copy -f segment -segment_time 04:00 -reset_timestamps 1 "Video_Title_$( date '+%F%H%M%S' ).mp4" && echo 'individual videos created'
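
A hedged guess at the cause: with -vcodec copy, the segment muxer can only cut at keyframes, and the concat demuxer assumes each listed file begins with a cleanly decodable frame; night footage is often encoded with sparser keyframes, so short stream-copy cuts can start mid-GOP and decode as exactly this kind of smeared, glitchy imagery. One way to test this, sketched under the assumption that a single re-encode is acceptable (filenames and the CRF value are illustrative):

# Re-encode once, forcing a keyframe every 3 seconds so later stream-copy cuts land cleanly
ffmpeg -i input.mp4 -c:v libx264 -crf 18 -force_key_frames "expr:gte(t,n_forced*3)" -c:a copy prepped.mp4
# Copy-based segmenting of the prepped file should now start every part on a keyframe
ffmpeg -i prepped.mp4 -c copy -f segment -segment_time 3 -reset_timestamps 1 "part_%03d.mp4"

If the night clips survive this round trip, the corruption came from non-keyframe cuts rather than from running several ffmpeg processes at once.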





-
Merging input streams with nodejs/ffmpeg
14 September 2020, by jAndy
I'm creating a very basic and rudimentary video web chat. On the client side, I'm going to use a simple getUserMedia API call to capture the webcam data and send the video data as a data blob to my server.

From there, I'm planning to either use the fluent-ffmpeg library or just spawn ffmpeg myself and pipe that raw data to ffmpeg, which in turn does some magic and pushes it out as an HLS stream to an Amazon AWS service (for instance), which then gets displayed in a web browser for everyone participating in the video chat.

So far, I think all of this should be fairly easy to implement, but I keep puzzling over the question of how I can create a "combined" or "merged" frame and stream, so that the HLS output from my server to the distributing cloud service is only one combined data stream.


If there are 3 people in the video chat, my server receives 3 data streams from those clients and has to combine these data streams (from the individual webcam sources) into one output stream.


How could that be accomplished? Can I "create" a new frame with ffmpeg, so to speak? I would be very thankful if anybody could give me a heads-up here; maybe I'm thinking in a completely wrong direction.
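
For what it's worth, ffmpeg can composite several inputs into a single frame with -filter_complex; a minimal sketch using hstack for the video and amix for the audio, publishing the result as HLS (the input names, codecs and HLS settings are assumptions, and hstack requires all inputs to share the same height, so per-input scale filters may be needed):

ffmpeg -i client1.webm -i client2.webm -i client3.webm \
  -filter_complex "[0:v][1:v][2:v]hstack=inputs=3[v];[0:a][1:a][2:a]amix=inputs=3[a]" \
  -map "[v]" -map "[a]" -c:v libx264 -c:a aac \
  -f hls -hls_time 4 stream.m3u8

For grid layouts rather than a single row, the xstack or overlay filters serve the same purpose.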

Another question that arises: can I really just "dump" any data I receive as a binary blob from getUserMedia or MultiStreamRecorder into ffmpeg, or do I have to specify somewhere, and somehow, the exact codecs being used, etc.?
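
As a rough note on that last point: ffmpeg can usually probe the container of piped input on its own, but being explicit is safer; MediaRecorder output is typically WebM, so a hedged sketch of feeding blobs from stdin looks like this (the file name is illustrative):

# Concatenated MediaRecorder blobs arrive on stdin; -f webm names the demuxer explicitly
cat recorded_blobs.webm | ffmpeg -f webm -i pipe:0 -c:v libx264 -c:a aac -f hls -hls_time 4 out.m3u8

Raw, container-less data would be a different story: there ffmpeg genuinely needs the format, resolution and frame rate spelled out on the command line.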