
Media (1)
-
Richard Stallman and free software
19 October 2011, by
Updated: May 2013
Language: French
Type: Text
Other articles (68)
-
MediaSPIP version 0.1 Beta
16 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, it is necessary to install all of the software dependencies manually on the server.
If you wish to use this archive for an installation in "farm" mode, you will also need to make other modifications (...)
-
MediaSPIP 0.1 Beta version
25 April 2011, by
MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to make other manual (...)
-
Improving the base version
13 September 2013
Nice multiple selection
The Chosen plugin improves the ergonomics of multiple-selection fields. See the two following images to compare.
To do so, simply activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling the use of Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...)
On other sites (10404)
-
Android: how to film a video before extracting its audio
20 February 2017, by MrOrgon
Despite many searches, I haven't been able to develop an Android prototype able to film a video and then extract its audio as .wav in a separate activity.
So far I have developed a simple filming activity which relies on Android's Camera application. My strategy was to pass the video's Uri as an Extra to the next activity, before using FFMPEG, but I can't make the transition between the Uri and FFMPEG. Indeed, I'm a fresh Android Studio beginner, so I'm still not sure which concept to use.
Here’s my code for the video recording activity.
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Build;
import android.os.Bundle;
import android.provider.MediaStore;
import android.widget.Toast;
import android.widget.VideoView;
public class RecordActivity extends Activity {
static final int REQUEST_VIDEO_CAPTURE = 0;
VideoView mVideoView = null;
Uri videoUri = null;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_record);
// findViewById only works after setContentView has inflated the layout
mVideoView = (VideoView) findViewById(R.id.videoVieww);
Intent takeVideoIntent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
Toast.makeText(RecordActivity.this, String.valueOf(Build.VERSION.SDK_INT) , Toast.LENGTH_SHORT).show();
// videoUri is still null here, so no output location is actually set;
// the camera app will pick its own and return it via intent.getData()
takeVideoIntent.putExtra(MediaStore.EXTRA_OUTPUT, videoUri);
if (takeVideoIntent.resolveActivity(getPackageManager()) != null) {
startActivityForResult(takeVideoIntent, REQUEST_VIDEO_CAPTURE);
}
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
if (requestCode == REQUEST_VIDEO_CAPTURE && resultCode == RESULT_OK) {
videoUri = intent.getData();
// this is typically a content:// Uri, so videoUri.getPath() is not a real
// filesystem path and FFmpeg will not be able to open it directly
Intent intentForFilterActivity = new Intent(RecordActivity.this, FilterActivity.class);
intentForFilterActivity.putExtra("VideoToFilter", videoUri.getPath());
startActivity(intentForFilterActivity);
}
}
}

Here's the code for the audio extraction activity. It is called "FilterActivity", as its final aim is to filter outdoor noise using additional functions. I'm using WritingMinds' implementation of FFMPEG:
https://github.com/WritingMinds/ffmpeg-android-java

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.widget.Toast;
import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;
public class FilterActivity extends Activity {
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_filter);
Intent intentVideo = getIntent();
String pathIn = intentVideo.getStringExtra("VideoToFilter");
FFmpeg ffmpeg = FFmpeg.getInstance(FilterActivity.this);
try {
String[] cmdExtract = {"-i " + pathIn + " extracted.wav"};
ffmpeg.execute(cmdExtract, new ExecuteBinaryResponseHandler() {
@Override
public void onStart() {}
@Override
public void onProgress(String message) {}
@Override
public void onFailure(String message) {
Toast.makeText(FilterActivity.this, "Failure !", Toast.LENGTH_SHORT).show();
}
@Override
public void onSuccess(String message) {}
@Override
public void onFinish() {}
});
} catch (FFmpegCommandAlreadyRunningException e) {
}
}
}

and I always get the "Failure !" message.
Some parts of the code may look extremely bad. As written previously, I'm a real Android Studio beginner.
Do you have any correction that could work? Or even just a strategy?
Thank you in advance!
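For reference, a minimal sketch of the changes that most plausibly address the failure, assuming the WritingMinds wrapper used above (the output directory, the error display, and the extra imports are illustrative assumptions, not a confirmed fix): pass each command-line token as its own array element instead of one space-joined string, load the binary once before executing, and write to a location the app can actually write to. It also assumes pathIn is a real filesystem path; as noted in the comments above, getPath() on a content:// Uri usually is not.

// Sketch only. Besides the imports already shown, this needs java.io.File,
// com.github.hiteshsondhi88.libffmpeg.LoadBinaryResponseHandler and
// com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegNotSupportedException.
FFmpeg ffmpeg = FFmpeg.getInstance(FilterActivity.this);
try {
    // the bundled ffmpeg binary must be loaded once before any execute()
    ffmpeg.loadBinary(new LoadBinaryResponseHandler());
} catch (FFmpegNotSupportedException e) {
    // the device's CPU architecture is not supported by the bundled binary
}
// app-private directory, writable without any extra permissions
File outFile = new File(getFilesDir(), "extracted.wav");
// one token per array element; "-i path out.wav" as a single string fails
String[] cmdExtract = {"-i", pathIn, outFile.getAbsolutePath()};
try {
    ffmpeg.execute(cmdExtract, new ExecuteBinaryResponseHandler() {
        @Override
        public void onFailure(String message) {
            // message contains FFmpeg's own log, which says why it failed
            Toast.makeText(FilterActivity.this, message, Toast.LENGTH_LONG).show();
        }
    });
} catch (FFmpegCommandAlreadyRunningException e) {
    // a previous command is still running
}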
-
Difference between the size of the mpegts file and the sum of the ts files
9 June 2017, by Biraj B Choudhury
I am trying to find the size of the transcoded asset, but I find that the sum of the sizes of the .ts files doesn't match the size of the mpegts output file created by ffmpeg.
For example:
With a 5 MB source I get the following output:
./ffmpeg -y -i big_buck_bunny_720p_5mb.mp4 -s 854x480 -ss 0 -vcodec libx264 -level:v 3.0 -profile:v baseline -f mpegts -async 2 -acodec libmp3lame -ar 44100 -r 24.00 -b:v 703k -maxrate 703k -bufsize 703k -r 24.00 -b:a 96.0k test.mpegts -hls_time 10 -hls_segment_filename test-%03d.ts -hls_playlist_type vod test.m3u8
Size of test.mpegts -> 3.1 MB
Sum of the sizes of the ts files -> 5.5 MB

With a 30 MB source I get the following output:
./ffmpeg -y -i big_buck_bunny_720p_30mb.mp4 -s 854x480 -ss 0 -vcodec libx264 -level:v 3.0 -profile:v baseline -f mpegts -async 2 -acodec libmp3lame -ar 44100 -r 24.00 -b:v 703k -maxrate 703k -bufsize 703k -r 24.00 -b:a 96.0k test1.mpegts -hls_time 10 -hls_segment_filename test-%03d.ts -hls_playlist_type vod test.m3u8
Size of test1.mpegts -> 19 MB
Sum of the sizes of the ts files -> 17 MB

With a 63 MB source I get the following output:
./ffmpeg -y -i BigBuckBunny_320x180.mp4 -s 854x480 -ss 0 -vcodec libx264 -level:v 3.0 -profile:v baseline -f mpegts -async 2 -acodec libmp3lame -ar 44100 -r 24.00 -b:v 703k -maxrate 703k -bufsize 703k -r 24.00 -b:a 96.0k test2.mpegts -hls_time 10 -hls_segment_filename test-%03d.ts -hls_playlist_type vod test.m3u8
Size of test2.mpegts -> 62.21 MB
Sum of the sizes of the ts files -> 26 MB

With a 397 MB source I get the following output:
./ffmpeg -y -i big_buck_bunny_720p_h264.mov -s 640x360 -ss 0 -vcodec libx264 -level:v 3.0 -profile:v baseline -f mpegts -async 2 -acodec libmp3lame -ar 44100 -r 24.00 -b:v 703k -maxrate 703k -bufsize 703k -r 24.00 -b:a 96.0k test3.mpegts -hls_time 10 -hls_segment_filename test-%03d.ts -hls_playlist_type vod test.m3u8
Size of test3.mpegts -> 62 MB
Sum of the sizes of the ts files -> 142 MB

Source locations of the files:
http://www.sample-videos.com/
http://download.blender.org/peach/bigbuckbunny_movies/

Can anybody point me to any documentation which explains why there is such a huge variance in the difference between the size of the .mpegts file and the sum of the sizes of the .ts files?
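One observation about these commands (my reading of the ffmpeg command line, not something stated in the post): ffmpeg applies output options to the output file that follows them, so each invocation produces two independent encodes of the same input, the .mpegts file and the HLS segments behind the .m3u8 playlist. The .ts files are therefore not a segmentation of the .mpegts file, which is already one reason the sizes need not match. A sketch of a two-step variant that makes the sizes directly comparable, reusing the filenames above, is to encode once and then segment the already-encoded stream with stream copy:

./ffmpeg -y -i big_buck_bunny_720p_5mb.mp4 -s 854x480 -vcodec libx264 -level:v 3.0 -profile:v baseline -acodec libmp3lame -ar 44100 -r 24.00 -b:v 703k -maxrate 703k -bufsize 703k -b:a 96.0k -f mpegts test.mpegts
./ffmpeg -y -i test.mpegts -c copy -f hls -hls_time 10 -hls_segment_filename test-%03d.ts -hls_playlist_type vod test.m3u8

With stream copy, the sum of the .ts sizes should track the size of test.mpegts closely, differing mainly by container overhead.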
-
How to obtain time markers for video splitting using python/OpenCV
30 March 2016, by Bleddyn Raw-Rees
Hi, I'm new to the world of programming and computer vision, so please bear with me.
I'm working on my MSc project, which is researching automated deletion of low-value content in digital file stores. I'm specifically looking at the sort of long shots that often occur in natural history filming, whereby a static camera is left rolling in order to capture the rare snow leopard or whatever. These shots may only have some 60 seconds of useful content, with perhaps several hours of worthless content either side.
As a first step I have a simple motion detection program from Adrian Rosebrock’s tutorial [http://www.pyimagesearch.com/2015/05/25/basic-motion-detection-and-tracking-with-python-and-opencv/#comment-393376]. Next I intend to use FFMPEG to split the video.
What I would like help with is how to get in and out points based on the first and last points at which motion is detected in the video (one possible approach is sketched after the code below).
Here is the code should you wish to see it...
# import the necessary packages
import argparse
import datetime
import imutils
import time
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())
# if the video argument is None, then we are reading from webcam
if args.get("video", None) is None:
camera = cv2.VideoCapture(0)
time.sleep(0.25)
# otherwise, we are reading from a video file
else:
camera = cv2.VideoCapture(args["video"])
# initialize the first frame in the video stream
firstFrame = None
# loop over the frames of the video
while True:
# grab the current frame and initialize the occupied/unoccupied
# text
(grabbed, frame) = camera.read()
text = "Unoccupied"
# if the frame could not be grabbed, then we have reached the end
# of the video
if not grabbed:
break
# resize the frame, convert it to grayscale, and blur it
frame = imutils.resize(frame, width=500)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (21, 21), 0)
# if the first frame is None, initialize it
if firstFrame is None:
firstFrame = gray
continue
# compute the absolute difference between the current frame and
# first frame
frameDelta = cv2.absdiff(firstFrame, gray)
thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]
# dilate the thresholded image to fill in holes, then find contours
# on thresholded image
thresh = cv2.dilate(thresh, None, iterations=2)
# findContours returns (image, contours, hierarchy) in OpenCV 3,
# but (contours, hierarchy) in OpenCV 2 and 4
(_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# loop over the contours
for c in cnts:
# if the contour is too small, ignore it
if cv2.contourArea(c) < args["min_area"]:
continue
# compute the bounding box for the contour, draw it on the frame,
# and update the text
(x, y, w, h) = cv2.boundingRect(c)
cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
text = "Occupied"
# draw the text and timestamp on the frame
cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
(10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)
# show the frame and record if the user presses a key
cv2.imshow("Security Feed", frame)
cv2.imshow("Thresh", thresh)
cv2.imshow("Frame Delta", frameDelta)
key = cv2.waitKey(1) & 0xFF
# if the `q` key is pressed, break from the loop
if key == ord("q"):
break
# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()

Thanks!
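A minimal sketch of one way to obtain those in and out points, assuming the loop structure above (the variable names, the two-second padding, and the output filename are illustrative): record the position of the current frame with camera.get(cv2.CAP_PROP_POS_MSEC) whenever a contour passes the area test, keep the first and last such timestamps, and hand them to ffmpeg as -ss/-to for the split.

import subprocess

first_motion_ms = None  # in point, in milliseconds
last_motion_ms = None   # out point, in milliseconds

# inside the frame loop, right after a contour passes the min-area test:
#     t = camera.get(cv2.CAP_PROP_POS_MSEC)  # position of the current frame
#     if first_motion_ms is None:
#         first_motion_ms = t
#     last_motion_ms = t

# after the loop: split with ffmpeg, stream-copying to avoid a re-encode
# and keeping a little context either side of the detected motion
if first_motion_ms is not None:
    pad = 2.0  # seconds of context to keep (illustrative)
    start = max(first_motion_ms / 1000.0 - pad, 0.0)
    end = last_motion_ms / 1000.0 + pad
    subprocess.call([
        "ffmpeg", "-y", "-i", args["video"],
        "-ss", str(start), "-to", str(end),
        "-c", "copy", "trimmed.mp4",
    ])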