Author: 672jxhf5jng4

  • ncedc-earthquakes

    Earthquakes Demo

    Datasets

    The earthquake datasets are gathered from the Northern California Earthquake Data Center through the ANSS Composite Catalog Search.

    Acknowledgement

    “Waveform data, metadata, or data products for this study were accessed through the Northern California Earthquake Data Center (NCEDC), doi:10.7932/NCEDC.”

    Earthquakes

    Filename: earthquakes-full.txt
    Search parameters: catalog=ANSS, start_time=1989/01/01,00:00:00, end_time=2017/11/01,00:00:00, minimum_magnitude=0, maximum_magnitude=10, event_type=E
    Size:

    Blasts (Quarry or Nuclear)

    Filename: blasts-full.txt
    Search parameters: catalog=ANSS, start_time=1989/01/01,00:00:00, end_time=2017/11/01,00:00:00, minimum_magnitude=0, maximum_magnitude=10, event_type=B
    Size: 17976 lines (1364546 bytes)

    Setting up

    Ingesting Data

    Download and extract the dataset archive with tar zxf ncedc-earthquakes-dataset.tar.gz from the terminal. Run the commands below to ingest the datasets into your Elasticsearch cluster. Note that you may need to adjust the ncedc-earthquakes-logstash.conf file if you are not running the Elasticsearch node on your local host.

    tail -n +2 earthquakes.txt| EVENT="earthquake" logstash/bin/logstash -f ncedc-earthquakes-logstash.conf
    tail -n +2 blasts.txt| EVENT="blast" logstash/bin/logstash -f ncedc-earthquakes-logstash.conf
    

    Importing Kibana Visuals and Dashboards

    1. Open Kibana and go to Management > Index Patterns. Type in ncedc-earthquakes as the index name and create the index pattern.
    2. Go to the Saved Objects tab, click Import, and select ncedc-earthquakes-dashboard.json in the file chooser.
    3. Select ncedc-earthquakes as the new index pattern when the Index Pattern Conflicts dialog appears.
    4. Go to Dashboard and click Earthquake in the list of dashboards.

    Visit original content creator repository
    https://github.com/kosho/ncedc-earthquakes

  • zeebetron

    zeebetron


    zeebetron is a small frontend to manage different profiles for zeebe instances (cloud or local).


    Usually I use zbctl or my own small starter project to interact with zeebe. But since I had a lot of zeebe instances and I was switching between them, I wanted to build a small tool to manage profiles.

    The added value of zeebetron is:

    • Manage different profiles, including addresses and, if necessary, OAuth information
    • Manage different workflows for deploying or creating new instances

    The tool itself is built with Electron and Angular. The communication with zeebe is done via zeebe-node, the rendering of BPMN diagrams is done using bpmn-js.

    Releases

    Find all releases here.

    Prerequisites

    Before you begin, ensure you have met the following requirements:

    • You have installed node.

    That’s it 😉 Admittedly, I didn’t test the project on Windows, but it runs fine on Ubuntu and Mac.

    Development

    Install all dependencies with

    npm install

    Run the development version of the app including hot reload:

    npm run start

    Build a release:

    npm run electron:linux # (or mac or windows)

    Contributing to zeebetron

    To contribute to zeebetron, follow these steps:

    1. Fork this repository.
    2. Create a branch: git checkout -b <branch_name>.
    3. Make your changes and commit them: git commit -m '<commit_message>'
    4. Create the pull request to this project.

    Alternatively see the GitHub documentation on creating a pull request.

    Contact

    If you want to contact me you can reach me at adam.urban@gmail.com.

    License

    This project uses the following license: MIT.

    Visit original content creator repository https://github.com/urbanisierung/zeebetron
  • Hotels_tg_bot

    AngerranTravelbot

    A Telegram bot for finding hotels. It analyzes current hotel offers and, based on the criteria entered by the user, returns the most suitable ones.
    It uses Rapid Api under the hood.

    Features

    • Configurable output, e.g. the number of hotels to search for, whether to show hotel photos, and how many
    • Search queries are saved to a database, and the search history can be displayed per user

    Available commands

    • /start – start the bot
    • /help – show the main commands
    • /lowprice – show the top cheapest hotels in a city
    • /highprice – show the top most expensive hotels in a city
    • /bestdeal – show the top hotels within a given range of prices and distances from the city center
    • /history – show the hotel search history

    Technologies used

    • Python 3
    • PyTelegramBotApi(telebot)

    Setup and launch

    1. Install the Python 3.10 interpreter and the packages listed in requirements.txt
    2. Create your own bot via the @BotFather bot and save your token
    3. Create an account at rapidapi.com and save your X-RapidAPI-Key and X-RapidAPI-Host
    4. Create a .env file in the program directory and store the token, X-RapidAPI-Key, and X-RapidAPI-Host there, following the example in .env.template
    5. Run main.py in the program directory
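
    Step 4 can be sketched as follows: a minimal dependency-free reader for the .env file. The key names used here (BOT_TOKEN, RAPID_API_KEY, RAPID_API_HOST) are illustrative; the real names are defined in .env.template.

```python
# Minimal sketch: read KEY=VALUE pairs from the .env file without extra
# dependencies. Key names here are illustrative; see .env.template.
from pathlib import Path

def load_env(path: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring blank lines and # comments."""
    env = {}
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip("'\"")
    return env

# usage (hypothetical key name):
#   secrets = load_env(".env")
#   bot = telebot.TeleBot(secrets["BOT_TOKEN"])
```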

    Visit original content creator repository
    https://github.com/AlexanderBarbashin/Hotels_tg_bot

  • create-adobe-extension

    Create Adobe Extension

    This package provides a CLI tool to create Adobe CEP Extensions.

    Features

    • Create Adobe CEP Extension Bundles without needing to download them from the CEP-Resources Github repository.
    • Add new extensions to existing Adobe CEP Extension Bundles.
    • Automatically generate .debug files based on your inputs.

    Usage

    Navigate to the folder you want to create your extension in.

    By default the tool checks to see if you’re creating an extension in the official Adobe extensions folder. If not, it will give you a warning asking if you want to continue.

    These extension folders are found at:

    • System extension folder

      • Win(x64): C:\Program Files (x86)\Common Files\Adobe\CEP\extensions, and C:\Program Files\Common Files\Adobe\CEP\extensions (since CEP 6.1)
      • macOS: /Library/Application Support/Adobe/CEP/extensions
    • Per-user extension folder

      • Win: C:\Users\<USERNAME>\AppData\Roaming\Adobe\CEP\extensions
      • macOS: ~/Library/Application Support/Adobe/CEP/extensions

    Now run the command:

    $ npx create-adobe-extension

    The tool will request information about your extension, then create the necessary files for your extension.

    Flags

    • --options: Display a list of all flags available for create-adobe-extension.

    • --add: Add a new extension to an existing extension bundle (make sure you’re inside the folder of the bundle you’d like to add to before running).

    • --folder-check: Enable/disable a warning when creating an extension outside of standard Adobe Extensions folder.

    User Responses

    • Project Name: Name of the extension folder and of the first extension in this bundle.

    • Bundle ID: Must begin with “com.”, e.g. “com.test”.

    • Extension ID: Must begin with your Bundle ID, e.g. “com.test.panel”.

    • Extension Version: Extension version identifier. Set to 1.0.0 by default.

    • CSXS Version: Sets the version of CSXS this extension will use. Set by default to the newest version.

    • Extension Type: Indicates whether this is a Panel, ModalDialog, Modeless, or Custom extension.

    • Program(s): Defines which program or programs this extension will open in.

    • Program Version(s): Sets the version or versions this extension will run in. To set a range, separate two numbers with a comma. Set by default to “1.0,99.0”.

    • Enable Node.js: Enable or disable the use of Node.js in your extension.

    • Enable Debugging: Set whether or not to enable debugging. If enabled, this tool will create a .debug file with the appropriate extension info and add a function to your JavaScript file that reloads your JSX script every time your JavaScript file is initialized (use in conjunction with Adobe Live Reload for an optimal debugging experience).
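
    A generated .debug file follows the standard CEP format; a hypothetical example for a panel with extension ID com.test.panel targeting Photoshop (the host name, port, and IDs here are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ExtensionList>
    <Extension Id="com.test.panel">
        <HostList>
            <!-- PHXS = Photoshop; Port is where the Chrome remote debugger listens -->
            <Host Name="PHXS" Port="8088"/>
        </HostList>
    </Extension>
</ExtensionList>
```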

    Changelog

    v1.2.0 (2022-04-25)

    Added

    • Added --add flag which allows for the creation of new extensions within an existing extension bundle.
    • Added --options flag which displays all flags available in this package.
    • Added --folder-check flag which enables/disables a check to warn users when they are creating an extension outside of the official Adobe extensions folders.

    Changes

    • Updated README to clarify where the extension folders are located, what the package checks for and how to disable the checking.

    Removed

    • Removed Ascii art title because it was annoying me when I used this package.

    v1.1.3 (2022-04-19)

    Changes

    • Changed Ascii art title to match npm package title.

    v1.1.2 (2022-04-19)

    Changes

    • Changed method of detecting the current working folder for the CEP Extension folder check.

    v1.1.0 (2022-04-19)

    Added

    • Added README file
    • Added check for whether user is in CEP Extensions folder
    • Added Adobe Live Reload Ascii art

    Visit original content creator repository
    https://github.com/duncanlutz/create-adobe-extension

  • alpine-mariadb

    Docker + Alpine + MariaDB(Mysql)

    This is an upgrade of the leafney/docker-alpine-mysql project, renamed to leafney/alpine-mariadb.

    Parameters

    • MYSQL_ROOT_PWD : root password (default is “mysql”)
    • MYSQL_USER : new user name
    • MYSQL_USER_PWD : new user password
    • MYSQL_USER_DB : new database for the new user

    Get Image from DockerHub

    Download from DockerHub using the latest tag or a specific version tag:

    $ docker pull leafney/alpine-mariadb:latest
    

    Run a default container

    $ docker run --name mysql -v /mysql/data/:/var/lib/mysql -d -p 3306:3306 leafney/alpine-mariadb:latest
    

    Run a container with new User and Password

    $ docker run --name mysql -v /mysql/data/:/var/lib/mysql -d -p 3306:3306 -e MYSQL_ROOT_PWD=123 -e MYSQL_USER=dev -e MYSQL_USER_PWD=dev leafney/alpine-mariadb:latest
    

    Run a container with new Database for new User and Password

    $ docker run --name mysql -v /mysql/data/:/var/lib/mysql -d -p 3306:3306 -e MYSQL_ROOT_PWD=123 -e MYSQL_USER=dev -e MYSQL_USER_PWD=dev -e MYSQL_USER_DB=userdb leafney/alpine-mariadb:latest
    

    Run with docker-compose file

    $ docker-compose up -d
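
    A minimal docker-compose.yml for this image might look like the following sketch, using the parameters described above (the volume path and credential values are illustrative; adjust them for your setup):

```yaml
version: "3"
services:
  mysql:
    image: leafney/alpine-mariadb:latest
    container_name: mysql
    ports:
      - "3306:3306"
    volumes:
      - /mysql/data/:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PWD=123
      - MYSQL_USER=dev
      - MYSQL_USER_PWD=dev
      - MYSQL_USER_DB=userdb
```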
    

    Build with docker-compose file

    $ docker-compose -f docker-compose.build.yml up -d
    

    Note: If you encounter problems when building, please check https://pkgs.alpinelinux.org/packages for the latest available mysql package version.

    Visit original content creator repository
    https://github.com/leafney/alpine-mariadb

  • poseidon

    Poseidon


    Software Defined Network Situational Awareness

    POSEIDON is now BlackDuck 2016 OpenSource Rookie of the year

    Poseidon began as a joint effort between two of the IQT Labs: Cyber Reboot and Lab41. The project’s goal is to explore approaches to better identify what nodes are on a given (computer) network and understand what they are doing. The project utilizes Software Defined Networking and machine learning to automatically capture network traffic, extract relevant features from that traffic, perform classifications through trained models, convey results, and provide mechanisms to take further action. While the project works best leveraging modern SDNs, parts of it can still be used with little more than packet capture (pcap) files.

    Table of Contents

    Background

    The Poseidon project originally began as an experiment to test the merits of leveraging SDN and machine learning techniques to detect abnormal network behavior. (Please read our blog posts linked below for several years of background.) While that long-term goal remains, the unfortunate reality is that the state of rich, labelled, public, and MODERN network data sets for ML training is pretty poor. Our lab is working on improving the availability of network training sets, but in the near term the project remains focused on 1) improving the accuracy of identifying what a node IS (based on captured IP header data) and 2) developing Poseidon into a “harness” of sorts to house machine learning techniques for additional use cases. (Read: Not just ours!)

    Prerequisites

    • Docker – Poseidon and related components run on top of Docker, so understanding the fundamentals will be useful for troubleshooting as well. Note: installing via Snap is currently unsupported. A Good Ubuntu Docker Quick-Start
    • Compose – Poseidon is orchestrated with docker-compose. You will need a version that supports compose file format version 3 and health check conditions (minimum 1.29.2).
    • Curl – command-line for transferring data with URLs.
    • git – distributed version control system.
    • jq – command-line JSON processor.
    • An SDN Controller – specifically Faucet
    • ~10GB of free disk space

    Note: Installation on OS X is possible but not supported.

    Installing

    Permissions for Docker

    To simplify using commands with Docker, we recommend adding the user that will be executing Poseidon commands to the docker group, so they can execute Docker commands without sudo. Typically, this can be done with:

    sudo usermod -aG docker $USER
    

    Followed by closing the existing shell and starting a new one.

    Getting the bits

    NOTE: If you have previously installed Poseidon from a .deb package, please remove it first. Installation from .deb is no longer supported.

    Install the poseidon script which we will use to install and manage Poseidon.

    curl -L https://raw.githubusercontent.com/IQTLabs/poseidon/main/bin/poseidon -o /usr/local/bin/poseidon
    chmod +x /usr/local/bin/poseidon
    

    Faucet Configuration

    NOTE: Poseidon requires Faucet version 1.9.46 or higher.

    Poseidon uses a faucetconfrpc server to maintain Faucet configuration. Poseidon starts its own server for you by default, and by default Poseidon and Faucet have to be on the same machine. To run Faucet on a separate machine, you will need to start faucetconfrpc on that other machine and update faucetconfrpc_address to point to where faucetconfrpc is running. You may also need to update faucetconfrpc_client if you are not using the provided automatically generated keys.

    If you have Faucet running already, make sure Faucet is started with the following environment variables, which allow Poseidon to change its config, and receive Faucet events:

    export FAUCET_EVENT_SOCK=1
    export FAUCET_CONFIG_STAT_RELOAD=1
    

    Faucet is now configured and ready for use with Poseidon.

    Faucet stacking

    Faucet supports stacking (distributed switching – multiple switches acting together as one). Poseidon also supports this – Poseidon’s mirroring interface should be connected to a port on the root switch. You will also need to allocate a port on each non-root switch, and install a loopback plug (either Ethernet or fiber) in that port. Poseidon will detect stacking and take care of the rest of the details (using Faucet’s tunneling feature to move mirror packets from the non-root switches to the root switch’s mirror port). The only Poseidon config required is to add the dedicated port on each switch to the controller_mirror_ports dictionary.

    Configuring Poseidon

    You will need to create a directory and config file on the server where Poseidon will run.

    sudo mkdir /opt/poseidon
    sudo cp config/poseidon.config /opt/poseidon
    

    Now, edit this file. You will need to set at minimum:

    • controller_type, as appropriate to the controller you are running (see above).
    • collector_nic: must be set to the name of the interface on the server that is connected to the switch mirror port.
    • controller_mirror_ports: must be set to the interface on the switch that will be used as the mirror port.

    Optionally, you may also set controller_proxy_mirror_ports (for switches that don’t have their own mirror ports, and can be mirrored with another switch).
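
    Putting these together, a fragment of poseidon.config might look like the sketch below. The exact file layout, switch name (sw1), port numbers, and the proxy-mirror value format are assumptions for illustration; check the shipped config file for the authoritative syntax.

```ini
# Hypothetical fragment of /opt/poseidon/poseidon.config
controller_type = faucet
# NIC on the Poseidon server wired to the switch mirror port
collector_nic = enx0023559c2781
# switch name and port number used as the mirror port
controller_mirror_ports = {"sw1": 5}
# optional: switches without their own mirror port, mirrored via another switch
# controller_proxy_mirror_ports = {"sw2": ["sw1", 6]}
```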

    Updating Poseidon

    From v0.10.0, you can update an existing Poseidon installation with poseidon -u (your configuration will be preserved). Updating from previous versions is not supported – please remove and reinstall as above. You can also give poseidon -u a specific git hash if you want to update to an unreleased version.

    Usage

    After installation you’ll have a new command poseidon available for looking at the configuration, logs, and shell, as well as stopping and starting the service.

    $ poseidon help
    Poseidon, an application that leverages software defined networks (SDN) to acquire and then feed network traffic to a number of machine learning techniques. For more info visit: https://github.com/IQTLabs/poseidon
    
    Usage: poseidon [option]
    Options:
        -a,  api           get url to the Poseidon API
        -c,  config        display current configuration info
        -d,  delete        delete Poseidon installation (uses sudo)
        -e,  shell         enter into the Poseidon shell, requires Poseidon to already be running
        -h,  help          print this help
        -i,  install       install Poseidon repo (uses sudo)
        -l,  logs          display the information logs about what Poseidon is doing
        -r,  restart       restart the Poseidon service (uses sudo)
        -s,  start         start the Poseidon service (uses sudo)
        -S,  stop          stop the Poseidon service (uses sudo)
        -u,  update        update Poseidon repo, optionally supply a version (uses sudo)
        -V,  version       get the version installed
    

    Step 0:

    Optionally specify a prefix location to install Poseidon by setting an environment variable; if it is unset, it will default to /opt, and Poseidon will be installed there. (If using Faucet, it will also override /etc locations to this prefix.)

    export POSEIDON_PREFIX=/tmp
    

    Step 1:

    poseidon install
    

    Step 2:

    Configure Poseidon for your preferred settings. Open /opt/poseidon/poseidon.config (add the Poseidon prefix if you specified one).

    For using Faucet, make sure to minimally change the controller_mirror_ports to match the switch name and port number of your mirror port. You will also need to update the collector_nic in the poseidon section to match the interface name of the NIC your mirror port is connected to.

    Step 3:

    If you don’t have Faucet already and/or you want Poseidon to spin up Faucet for you as well, simply run the following command and you will be done:

    poseidon start
    

    Step 4:

    If you are using your own installation of Faucet, you will need to enable communication between Poseidon and Faucet. Poseidon needs to change Faucet’s configuration, and Faucet needs to send events to Poseidon. This configuration needs to be set with environment variables (see https://docs.faucet.nz/). For example, if running Faucet with Docker, you will need the following environment configuration in the faucet service in your docker-compose file:

            environment:
                FAUCET_CONFIG: '/etc/faucet/faucet.yaml'
                FAUCET_EVENT_SOCK: '/var/run/faucet/faucet.sock'
                FAUCET_CONFIG_STAT_RELOAD: '1'
    

    If Faucet and Poseidon are running on the same machine, you can start Poseidon and you will be done:

    poseidon start --standalone
    

    Step 5:

    If you are running Faucet and Poseidon on different machines, configuration is more complex (work to make this easier is ongoing): execute Step 4 first. Then you will need to run event-adapter-rabbitmq and faucetconfrpc services on the Faucet host, and change Poseidon’s configuration to match.

    First start all services from helpers/faucet/docker-compose.yaml on the Faucet host, using a Docker network that has network connectivity with your Poseidon host. Set FA_RABBIT_HOST to be the address of your Poseidon host. faucet_certstrap will generate keys in /opt/faucetconfrpc which will need to be copied to your Poseidon host. Then modify faucetconfrpc_address in /opt/poseidon/config/poseidon.config to point to your Faucet host.

    You can now start Poseidon:

    poseidon start --standalone
    

    Troubleshooting

    Poseidon by its nature depends on other systems. The following are some common issues and troubleshooting steps.

    Poseidon doesn’t detect any hosts.

    The most common cause of this problem, with the FAUCET controller, is RabbitMQ connectivity.

    • Check that the RabbitMQ event adapter (faucet/event-adapter-rabbitmq) is running and not restarting.
    # docker ps|grep faucet/event-adapter-rabbitmq
    4a7509829be0        faucet/event-adapter-rabbitmq           "/usr/local/bin/entr…"   3 days ago          Up 3 days
    
    • Check that FAUCET.Event messages are being received by Poseidon.

    This command reports the time that the most recent FAUCET.Event message was received by Poseidon.

    If run repeatedly over a couple of minutes this timestamp should increase.

    docker exec -it poseidon_poseidon_1 /bin/sh
    /poseidon # wget -q -O- localhost:9304|grep -E ^poseidon_last_rabbitmq_routing_key_time.+FAUCET.Event
    poseidon_last_rabbitmq_routing_key_time{routing_key="FAUCET.Event"} 1.5739482267393966e+09
    /poseidon # wget -q -O- localhost:9304|grep -E ^poseidon_last_rabbitmq_routing_key_time.+FAUCET.Event
    poseidon_last_rabbitmq_routing_key_time{routing_key="FAUCET.Event"} 1.5739487978768678e+09
    /poseidon # exit
    

    Poseidon doesn’t report any host roles.

    • Check that the mirror interface is up and receiving packets (it should be configured in collector_nic). The interface must be up before Poseidon starts.
    # ifconfig enx0023559c2781
    enx0023559c2781: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::223:55ff:fe9c:2781  prefixlen 64  scopeid 0x20<link>
            ether 00:23:55:9c:27:81  txqueuelen 1000  (Ethernet)
            RX packets 82979981  bytes 77510139268 (77.5 GB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 202  bytes 15932 (15.9 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    • Check that there is disk space available and pcaps are being accumulated in /opt/poseidon_files (add POSEIDON_PREFIX in front if it was used.)
    # find /opt/poseidon_files -type f -name \*pcap |head -5
    /opt/poseidon_files/trace_d3f3217106acd75fe7b5c7069a84a227c9e48377_2019-11-15_03_10_41.pcap
    /opt/poseidon_files/tcprewrite-dot1q-2019-11-15-06_26_48.529473-UTC/pcap-node-splitter-2019-11-15-06_26_50.192570-UTC/clients/trace_0a6ce9490c193b65c3cad51fffbadeaed4ed5fdd_2019-11-15_06_11_24-client-ip-216-58-196-147-192-168-254-254-216-58-196-147-vssmonitoring-frame-eth-ip-icmp.pcap
    /opt/poseidon_files/tcprewrite-dot1q-2019-11-15-06_26_48.529473-UTC/pcap-node-splitter-2019-11-15-06_26_50.192570-UTC/clients/trace_0a6ce9490c193b65c3cad51fffbadeaed4ed5fdd_2019-11-15_06_11_24-miscellaneous-192-168-254-1-192-168-254-254-vssmonitoring-frame-eth-arp.pcap
    /opt/poseidon_files/tcprewrite-dot1q-2019-11-15-06_26_48.529473-UTC/pcap-node-splitter-2019-11-15-06_26_50.192570-UTC/clients/trace_0a6ce9490c193b65c3cad51fffbadeaed4ed5fdd_2019-11-15_06_11_24-client-ip-192-168-254-254-192-168-254-254-74-125-200-189-udp-frame-eth-ip-wsshort-port-443.pcap
    /opt/poseidon_files/tcprewrite-dot1q-2019-11-15-06_26_48.529473-UTC/pcap-node-splitter-2019-11-15-06_26_50.192570-UTC/servers/trace_0a6ce9490c193b65c3cad51fffbadeaed4ed5fdd_2019-11-15_06_11_24-server-ip-74-125-68-188-192-168-254-254-74-125-68-188-frame-eth-ip-tcp-port-5228.pcap
    

    Developing

    Modifying Code that Runs in a Docker Container

    If installed as described above, poseidon’s codebase will be at /opt/poseidon. At this location, make changes, then run poseidon restart.

    Network Data Logging

    Poseidon logs some data about the network it monitors. Therefore it is important to secure Poseidon’s own host (aside from logging, Poseidon can of course change FAUCET’s network configuration).

    There are two main types of logging at the lowest level. The first is FAUCET events – FAUCET generates an event when it learns on which port a host is present on the network, and the event includes source and destination Ethernet MAC and IP addresses (if present). For example:

    2019-11-21 20:18:41,909 [DEBUG] faucet - got faucet message for l2_learn: {'version': 1, 'time': 1574367516.3555572, 'dp_id': 1, 'dp_name': 'x930', 'event_id': 172760, 'L2_LEARN': {'port_no': 22, 'previous_port_no': None, 'vid': 254, 'eth_src': '0e:00:00:00:00:99', 'eth_dst': '0e:00:00:00:00:01', 'eth_type': 2048, 'l3_src_ip': '192.168.254.3', 'l3_dst_ip': '192.168.254.254'}}
    
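    Such a log line can be picked apart with a few lines of Python; a minimal sketch, assuming the payload after "l2_learn: " is a Python-literal dict as rendered in the debug line above:

```python
# Sketch: extract the learned-host fields from a FAUCET l2_learn log line.
# Assumes the payload after "l2_learn: " is a Python-literal dict, as in the
# example log line above.
import ast

def parse_l2_learn(line: str) -> dict:
    payload = ast.literal_eval(line.split("l2_learn: ", 1)[1])
    learn = payload["L2_LEARN"]
    return {
        "dp_name": payload["dp_name"],   # switch (datapath) name
        "port": learn["port_no"],        # port the host was learned on
        "mac": learn["eth_src"],
        "ip": learn.get("l3_src_ip"),    # may be absent for pure L2 traffic
    }
```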

    The second type of logging is host based pcap captures, with most of the application (L4) payload removed. Poseidon causes the ncapture component (https://github.com/IQTLabs/network-tools/tree/main/network_tap/ncapture) to capture traffic, which is logged in /opt/poseidon_files. These are used in turn to learn host roles, etc.

    Related Components

    Additional Info

    Visit original content creator repository https://github.com/faucetsdn/poseidon
  • poseidon

    Poseidon

    License Build Status codecov Docker Hub Downloads

    Software Defined Network Situational Awareness

    POSEIDON is now BlackDuck 2016 OpenSource Rookie of the year

    Poseidon began as a joint effort between two of the IQT Labs: Cyber Reboot and Lab41. The project’s goal is to explore approaches to better identify what nodes are on a given (computer) network and understand what they are doing. The project utilizes Software Defined Networking and machine learning to automatically capture network traffic, extract relevant features from that traffic, perform classifications through trained models, convey results, and provide mechanisms to take further action. While the project works best leveraging modern SDNs, parts of it can still be used with little more than packet capture (pcap) files.

    Table of Contents

    Background

    The Poseidon project originally began as an experiment to test the merits of leveraging SDN and machine learning techniques to detect abnormal network behavior. (Please read our blogs posts linked below for several years of background) While that long-term goal remains, the unfortunate reality is that the state of rich, labelled, public, and MODERN network data sets for ML training is pretty poor. Our lab is working on improving the availability of network training sets, but in the near term the project remains focused on 1) improving the accuracy of identifying what a node IS (based on captured IP header data) and 2) developing Poseidon into a “harness” of sorts to house machine learning techniques for additional use cases. (Read: Not just ours!)

    Prerequisites

    • Docker – Poseidon and related components run on top of Docker, so understanding the fundamentals will be useful for troubleshooting as well. Note: installing via Snap is currently unsupported. A Good Ubuntu Docker Quick-Start
    • Compose – Poseidon is orchestrated with docker-compose. You will need a version that supports compose file format version 3 and health check conditions (minimum 1.29.2).
    • Curl – command-line for transferring data with URLs.
    • git – distributed version control system.
    • jq – command-line JSON processor.
    • An SDN Controller – specifically Faucet
    • ~10GB of free disk space

    Note: Installation on OS X is possible but not supported.

    Installing

    Permissions for Docker

    To simplify using commands with Docker, we recommend allowing the user that will be executing Poseidon commands be part of the docker group so they can execute Docker commands without sudo. Typically, this can be done with:

    sudo usermod -aG docker $USER
    

    Followed by closing the existing shell and starting a new one.

    Getting the bits

    NOTE: If you have previously installed Poseidon from a .deb package, please remove it first. Installation from .deb is no longer supported.

    Install the poseidon script which we will use to install and manage Poseidon.

    curl -L https://raw.githubusercontent.com/IQTLabs/poseidon/main/bin/poseidon -o /usr/local/bin/poseidon
    chmod +x /usr/local/bin/poseidon
    

    Faucet Configuration

    NOTE: Poseidon requires at least Faucet version 1.9.46 or higher.

    Poseidon uses a faucetconfrpc server, to maintain Faucet configuration. Poseidon starts its own server for you by default, and also by default Poseidon and Faucet have to be on the same machine. To run Faucet on a separate machine, you will need to start faucetconfrpc on that other machine, and update faucetconfrpc_address to point to where the faucetconfrpc is running. You may also need to update faucetconfrpc_client, if you are not using the provided automatically generated keys.

    If you have Faucet running already, make sure Faucet is started with the following environment variables, which allow Poseidon to change its config, and receive Faucet events:

    export FAUCET_EVENT_SOCK=1
    export FAUCET_CONFIG_STAT_RELOAD=1
    

    Faucet is now configured and ready for use with Poseidon.

    Faucet stacking

    Faucet supports stacking (distributed switching – multiple switches acting together as one). Poseidon also supports this – Poseidon’s mirroring interface should be connected to a port on the root switch. You will need to allocate a port on each non-root switch also, and install a loopback plug (either Ethernet or fiber) in that port. Poseidon will detect stacking and take care of the rest of the details (using Faucet’s tunneling feature to move mirror packets from the non-root switches to the root switch’s mirror port). The only Poseidon config required is to add the dedicated port on each switch to the controller_mirror_port dictionary.

    Configuring Poseidon

    You will need to create a directory and config file on the server where Poseidon will run.

    sudo mkdir /opt/poseidon
    sudo cp config/poseidon.config /opt/poseidon
    

    Now, edit this file. You will need to set at minimum:

    • controller_type, as appropriate to the controller you are running (see above).
    • collector_nic: must be set to the interface name on the server that is connected to the switch mirror port.
    • controller_mirror_ports: must be set to the interface on the switch that will be used as the mirror port.

Optionally, you may also set controller_proxy_mirror_ports (for switches that don’t have their own mirror ports and can be mirrored via another switch).
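As a hedged illustration only (the key names come from the list above, but the values, interface name, and switch name are placeholders, and the file's actual layout may differ, so consult the shipped config/poseidon.config), the minimum settings might look like:

```ini
; Placeholder values; adjust to your environment.
controller_type = faucet
collector_nic = eno2
controller_mirror_ports = {"switch1": 25}
```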

    Updating Poseidon

    From v0.10.0, you can update an existing Poseidon installation with poseidon -u (your configuration will be preserved). Updating from previous versions is not supported – please remove and reinstall as above. You can also give poseidon -u a specific git hash if you want to update to an unreleased version.

    Usage

After installation you’ll have a new command, poseidon, available for viewing the configuration and logs, entering a shell, and stopping and starting the service.

    $ poseidon help
    Poseidon, an application that leverages software defined networks (SDN) to acquire and then feed network traffic to a number of machine learning techniques. For more info visit: https://github.com/IQTLabs/poseidon
    
    Usage: poseidon [option]
    Options:
        -a,  api           get url to the Poseidon API
        -c,  config        display current configuration info
        -d,  delete        delete Poseidon installation (uses sudo)
        -e,  shell         enter into the Poseidon shell, requires Poseidon to already be running
        -h,  help          print this help
        -i,  install       install Poseidon repo (uses sudo)
        -l,  logs          display the information logs about what Poseidon is doing
        -r,  restart       restart the Poseidon service (uses sudo)
        -s,  start         start the Poseidon service (uses sudo)
        -S,  stop          stop the Poseidon service (uses sudo)
        -u,  update        update Poseidon repo, optionally supply a version (uses sudo)
        -V,  version       get the version installed
    

    Step 0:

Optionally, specify a prefix location for the Poseidon installation by setting an environment variable; if unset, it defaults to /opt. (If using Faucet, this prefix also overrides the /etc locations.)

    export POSEIDON_PREFIX=/tmp
    

    Step 1:

    poseidon install
    

    Step 2:

    Configure Poseidon for your preferred settings. Open /opt/poseidon/poseidon.config (add the Poseidon prefix if you specified one).

    For using Faucet, make sure to minimally change the controller_mirror_ports to match the switch name and port number of your mirror port. You will also need to update the collector_nic in the poseidon section to match the interface name of the NIC your mirror port is connected to.

    Step 3:

If you don’t have Faucet already, and/or you want Poseidon to spin up Faucet for you as well, simply run the following command and you will be done:

    poseidon start
    

    Step 4:

    If you are using your own installation of Faucet, you will need to enable communication between Poseidon and Faucet. Poseidon needs to change Faucet’s configuration, and Faucet needs to send events to Poseidon. This configuration needs to be set with environment variables (see https://docs.faucet.nz/). For example, if running Faucet with Docker, you will need the following environment configuration in the faucet service in your docker-compose file:

            environment:
                FAUCET_CONFIG: '/etc/faucet/faucet.yaml'
                FAUCET_EVENT_SOCK: '/var/run/faucet/faucet.sock'
                FAUCET_CONFIG_STAT_RELOAD: '1'
    

    If Faucet and Poseidon are running on the same machine, you can start Poseidon and you will be done:

    poseidon start --standalone
    

    Step 5:

    If you are running Faucet and Poseidon on different machines, configuration is more complex (work to make this easier is ongoing): execute Step 4 first. Then you will need to run event-adapter-rabbitmq and faucetconfrpc services on the Faucet host, and change Poseidon’s configuration to match.

    First start all services from helpers/faucet/docker-compose.yaml on the Faucet host, using a Docker network that has network connectivity with your Poseidon host. Set FA_RABBIT_HOST to be the address of your Poseidon host. faucet_certstrap will generate keys in /opt/faucetconfrpc which will need to be copied to your Poseidon host. Then modify faucetconfrpc_address in /opt/poseidon/config/poseidon.config to point to your Faucet host.

    You can now start Poseidon:

    poseidon start --standalone
    

    Troubleshooting

    Poseidon by its nature depends on other systems. The following are some common issues and troubleshooting steps.

    Poseidon doesn’t detect any hosts.

    The most common cause of this problem, with the FAUCET controller, is RabbitMQ connectivity.

    • Check that the RabbitMQ event adapter (faucet/event-adapter-rabbitmq) is running and not restarting.
    # docker ps|grep faucet/event-adapter-rabbitmq
    4a7509829be0        faucet/event-adapter-rabbitmq           "/usr/local/bin/entr…"   3 days ago          Up 3 days
    
    • Check that FAUCET.Event messages are being received by Poseidon.

    This command reports the time that the most recent FAUCET.Event message was received by Poseidon.

    If run repeatedly over a couple of minutes this timestamp should increase.

    docker exec -it poseidon_poseidon_1 /bin/sh
    /poseidon # wget -q -O- localhost:9304|grep -E ^poseidon_last_rabbitmq_routing_key_time.+FAUCET.Event
    poseidon_last_rabbitmq_routing_key_time{routing_key="FAUCET.Event"} 1.5739482267393966e+09
    /poseidon # wget -q -O- localhost:9304|grep -E ^poseidon_last_rabbitmq_routing_key_time.+FAUCET.Event
    poseidon_last_rabbitmq_routing_key_time{routing_key="FAUCET.Event"} 1.5739487978768678e+09
    /poseidon # exit
    

    Poseidon doesn’t report any host roles.

    • Check that the mirror interface is up and receiving packets (it should be configured in collector_nic). The interface must be up before Poseidon starts.
    # ifconfig enx0023559c2781
    enx0023559c2781: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::223:55ff:fe9c:2781  prefixlen 64  scopeid 0x20<link>
            ether 00:23:55:9c:27:81  txqueuelen 1000  (Ethernet)
            RX packets 82979981  bytes 77510139268 (77.5 GB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 202  bytes 15932 (15.9 KB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    • Check that there is disk space available and pcaps are being accumulated in /opt/poseidon_files (add POSEIDON_PREFIX in front if it was used.)
    # find /opt/poseidon_files -type f -name \*pcap |head -5
    /opt/poseidon_files/trace_d3f3217106acd75fe7b5c7069a84a227c9e48377_2019-11-15_03_10_41.pcap
    /opt/poseidon_files/tcprewrite-dot1q-2019-11-15-06_26_48.529473-UTC/pcap-node-splitter-2019-11-15-06_26_50.192570-UTC/clients/trace_0a6ce9490c193b65c3cad51fffbadeaed4ed5fdd_2019-11-15_06_11_24-client-ip-216-58-196-147-192-168-254-254-216-58-196-147-vssmonitoring-frame-eth-ip-icmp.pcap
    /opt/poseidon_files/tcprewrite-dot1q-2019-11-15-06_26_48.529473-UTC/pcap-node-splitter-2019-11-15-06_26_50.192570-UTC/clients/trace_0a6ce9490c193b65c3cad51fffbadeaed4ed5fdd_2019-11-15_06_11_24-miscellaneous-192-168-254-1-192-168-254-254-vssmonitoring-frame-eth-arp.pcap
    /opt/poseidon_files/tcprewrite-dot1q-2019-11-15-06_26_48.529473-UTC/pcap-node-splitter-2019-11-15-06_26_50.192570-UTC/clients/trace_0a6ce9490c193b65c3cad51fffbadeaed4ed5fdd_2019-11-15_06_11_24-client-ip-192-168-254-254-192-168-254-254-74-125-200-189-udp-frame-eth-ip-wsshort-port-443.pcap
    /opt/poseidon_files/tcprewrite-dot1q-2019-11-15-06_26_48.529473-UTC/pcap-node-splitter-2019-11-15-06_26_50.192570-UTC/servers/trace_0a6ce9490c193b65c3cad51fffbadeaed4ed5fdd_2019-11-15_06_11_24-server-ip-74-125-68-188-192-168-254-254-74-125-68-188-frame-eth-ip-tcp-port-5228.pcap
    

    Developing

    Modifying Code that Runs in a Docker Container

If installed as described above, Poseidon’s codebase will be at /opt/poseidon. Make your changes there, then run poseidon restart.

    Network Data Logging

    Poseidon logs some data about the network it monitors. Therefore it is important to secure Poseidon’s own host (aside from logging, Poseidon can of course change FAUCET’s network configuration).

There are two main types of logging at the lowest level. The first is FAUCET events: FAUCET generates an event when it learns which port a host is present on, and the event includes source and destination Ethernet MAC and IP addresses (if present). For example:

    2019-11-21 20:18:41,909 [DEBUG] faucet - got faucet message for l2_learn: {'version': 1, 'time': 1574367516.3555572, 'dp_id': 1, 'dp_name': 'x930', 'event_id': 172760, 'L2_LEARN': {'port_no': 22, 'previous_port_no': None, 'vid': 254, 'eth_src': '0e:00:00:00:00:99', 'eth_dst': '0e:00:00:00:00:01', 'eth_type': 2048, 'l3_src_ip': '192.168.254.3', 'l3_dst_ip': '192.168.254.254'}}
    

    The second type of logging is host based pcap captures, with most of the application (L4) payload removed. Poseidon causes the ncapture component (https://github.com/IQTLabs/network-tools/tree/main/network_tap/ncapture) to capture traffic, which is logged in /opt/poseidon_files. These are used in turn to learn host roles, etc.

    Related Components

    Additional Info

    Visit original content creator repository https://github.com/faucetsdn/poseidon
  • vtbtn-server

    vtbtn-server

A server designed for all the button projects. It provides:

• Button data
• Statistics data

Running

From DockerHub

• Pull the Docker image
docker pull imkiva/vtbtn-server
• Create the data directory
mkdir /path/to/data/dir
• Start the container
docker run -d -p 8080:8080 \
  --env VTBTN_SERVER_ROOT_NAME=<root user ID> \
  --env VTBTN_SERVER_ROOT_PASSWORD=<root user password> \
  --volume <data directory>:/data/db \
  imkiva/vtbtn-server
• Starting the container in development mode: when developing, the following command is recommended
sudo docker run -p 8080:8080 \
  --env VTBTN_SERVER_ROOT_NAME=<root user ID> \
  --env VTBTN_SERVER_ROOT_PASSWORD=<root user password> \
  --volume <data directory>:/data/db \
  --rm -it imkiva/vtbtn-server

A container started this way runs in interactive mode (-it) and can be stopped with Ctrl-C. When the container stops it is removed automatically (the data is not deleted), so the next test run is unaffected.

For your users' safety and your own, never transmit plaintext passwords directly; an asymmetric encryption algorithm is recommended to protect user privacy.

The server will be listening on localhost:8080

From source

• Build and package the project
./gradlew shadowJar
• Build the Docker image
docker build -t imkiva/vtbtn-server:latest .
• Start the container
docker run -d -p 8080:8080 \
  --env VTBTN_SERVER_ROOT_NAME=<root user ID> \
  --env VTBTN_SERVER_ROOT_PASSWORD=<root user password> \
  --volume <data directory>:/data/db \
  imkiva/vtbtn-server

For your users' safety and your own, never transmit plaintext passwords directly; an asymmetric encryption algorithm is recommended to protect user privacy.

Development

We develop with IntelliJ IDEA from JetBrains. We believe it is the best IDE in the world.

API Documentation

The API follows a RESTful design; all requests and responses use application/json

Button Data APIs

Get the list of all Vtubers

GET /vtubers
Response
{
  "<Vtuber name>": "<resource path for that Vtuber>",
   ...
}

For example:

{
  "fubuki": "/vtubers/fubuki"
}

This means the server currently stores button data for one Vtuber (fubuki); all of that Vtuber's button data (voices/groups) should be requested under the /vtubers/fubuki path
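The mapping above can be consumed directly; a minimal sketch in Python (the sample payload is the one from the docs, the helper name is ours):

```python
import json

# Sample /vtubers response body, copied from the example above.
sample = '{"fubuki": "/vtubers/fubuki"}'

def vtuber_paths(body: str) -> dict:
    """Map each Vtuber name to its resource path."""
    return json.loads(body)

paths = vtuber_paths(sample)
print(paths["fubuki"])  # → /vtubers/fubuki
```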

Get all voices of a Vtuber

GET /vtubers/:name
Parameter  Description
name       the Vtuber's name
Response
{
  "name": "<Vtuber name>",
  "groups": [ group, group, ... ]
}

Each group has the following structure

{
  "name": "<group name>",
  "desc": {
    "zh": "<Chinese translation>",
    "en": "<English translation>",
    "ja": "<Japanese translation>"
  },
  "voices": [ voice, voice, ... ]
}

Each voice has the following structure

{
  "name": "<voice name>",
  "url": "<voice path>",
  "group": "<group the voice belongs to>",
  "desc": {
    "zh": "<Chinese translation>",
    "en": "<English translation>",
    "ja": "<Japanese translation>"
  }
}

Get all voices in one group of a Vtuber

GET /vtubers/:name/:group
Parameter  Description
name       the Vtuber's name
group      the group name
Response

The returned data is a single group (see above for the group structure)

Add a group (Group)

POST /vtubers/:name/add-group

This operation adds a group to the voice database of the Vtuber named :name.

Note: this operation requires admin rights for the corresponding Vtuber

Parameter  Description
name       the Vtuber's name
Request body
{
  "name": "<group name>",
  "desc": {
    "zh": "<Chinese translation>",
    "en": "<English translation>",
    "ja": "<Japanese translation>"
  }
}
Response

On success, the server returns 200 OK

On failure, the server may return either of the following error codes:

• 403: insufficient permissions
• 500: internal server error (time to file an issue)

In either case, the response body contains a message in the following format

{
  "msg": "<reason for the failure>"
}

Add a voice (Voice)

POST /vtubers/:name/:group/add-voice

This operation adds a voice to the voice database of the Vtuber named :name, with the voice's group set to :group

Note: this operation requires admin rights for the corresponding Vtuber

Parameter  Description
name       the Vtuber's name
group      the group name
Request body
{
  "name": "<voice name>",
  "url": "<voice path>",
  "desc": {
    "zh": "<Chinese translation>",
    "en": "<English translation>",
    "ja": "<Japanese translation>"
  }
}
Response

On success, the server returns 200 OK

On failure, the server may return either of the following error codes:

• 403: insufficient permissions
• 500: internal server error (time to file an issue)

In either case, the response body contains a message in the following format

{
  "msg": "<reason for the failure>"
}

Statistics APIs

All statistics APIs support the following query parameters

Parameter  Description
from       start time
to         end time
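For illustration, the optional from/to parameters are appended as a query string; a sketch (the URL helper and sample dates are ours, not part of the server):

```python
from urllib.parse import urlencode

def stats_url(name: str, frm: str = None, to: str = None) -> str:
    """Build a statistics request path with optional from/to parameters."""
    params = {k: v for k, v in (("from", frm), ("to", to)) if v is not None}
    return f"/statistics/{name}" + ("?" + urlencode(params) if params else "")

print(stats_url("fubuki", frm="1970-01-01", to="2020-06-15"))
# → /statistics/fubuki?from=1970-01-01&to=2020-06-15
```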

Overall statistics

GET /statistics/:name
Parameter  Description
name       the Vtuber's name

Response

{
    "vtuber": "<Vtuber name>",
    "from": "<start time>",
    "to": "<current time>",
    "click": "<click count>"
}

For example

{
    "vtuber": "fubuki",
    "from": "1970-01-01",
    "to": "2020-06-15",
    "click": 2
}

This means that all the buttons of the Vtuber named fubuki were clicked 2 times in total from 1970-01-01 to 2020-06-15

Per-group voice statistics

GET /statistics/:name/:group
Parameter  Description
name       the Vtuber's name
group      the group name

Response

{
    "vtuber": "<Vtuber name>",
    "group": "<group name>",
    "from": "<start time>",
    "to": "<current time>",
    "click": "<click count>"
}

For example

{
    "vtuber": "fubuki",
    "group": "actmoe",
    "from": "1970-01-01",
    "to": "2020-06-15",
    "click": 2
}

This means that the actmoe buttons of the Vtuber named fubuki were clicked 2 times in total from 1970-01-01 to 2020-06-15

Per-voice click statistics

GET /statistics/:name/:group/:voiceName
Parameter  Description
name       the Vtuber's name
group      the group name
voiceName  the voice file name

Response

{
    "vtuber": "<Vtuber name>",
    "name": "<voice file name>",
    "group": "<group name>",
    "from": "<start time>",
    "to": "<current time>",
    "click": "<click count>"
}

For example

{
    "vtuber": "fubuki",
    "name": "f-006",
    "group": "actmoe",
    "from": "1970-01-01",
    "to": "2020-06-15",
    "click": 2
}

This means that the f-006 button in the actmoe group of the Vtuber named fubuki was clicked 2 times in total from 1970-01-01 to 2020-06-15

Increment a button's click count by 1

POST /statistics/:name/click
Parameter  Description
name       the Vtuber's name

Request body

{
  "group": "<group name>",
  "name": "<voice file name>"
}
• group: the group the button belongs to
• name: the button whose count is to be incremented
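A minimal sketch of building this request body in Python (the group and voice names are the sample values from above; actually sending the request is left out):

```python
import json

def click_body(group: str, name: str) -> str:
    """JSON body for POST /statistics/:name/click."""
    return json.dumps({"group": group, "name": name})

print(click_body("actmoe", "f-006"))
# → {"group": "actmoe", "name": "f-006"}
```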

Admin Permission APIs

Hello

GET /users/hi

Get the current user's information

Response

On success, the server returns 200 OK with a response body containing

{
    "msg": "welcome message",
    "uid": "user ID",
    "root": whether the user is a root admin (true|false),
    "verified": whether the user is verified (true|false),
    "admin": ["managed Vtuber 1", "managed Vtuber 2", ...],
    "profile": {
        "name": "nickname",
        "email": "email address"
    }
}

On failure, the server may return either of the following error codes:

• 403: insufficient permissions
• 500: internal server error (time to file an issue)

In either case, the response body contains a message in the following format

{
  "msg": "<reason for the failure>"
}

Log in

POST /users/login
Request body
{
  "uid": "<user ID>",
  "password": "<user password>"
}

For your users' safety and your own, never transmit plaintext passwords directly; an asymmetric encryption algorithm is recommended to protect user privacy.

Response

On successful login, the server returns 200 OK and automatically sets the Set-Cookie response header

On failure, the server returns 403 Forbidden with a response body in the following format

{
  "msg": "<reason the login failed>"
}

Register

POST /users/register
Request body
{
  "uid": "<user ID>",
  "password": "<user password>",
  "name": "<user nickname>",
  "email": "<user email>"
}

For your users' safety and your own, never transmit plaintext passwords directly; an asymmetric encryption algorithm is recommended to protect user privacy.

Response

On successful registration, the server returns 200 OK

On failure, the server returns 403 Forbidden with a response body in the following format

{
  "msg": "<reason the registration failed>"
}

Modify the list of manageable Vtubers

POST /users/change-admin-vtuber

Note: this operation requires a root admin user

Request body
{
  "uid": "<ID of the user being modified>",
  "add": [],
  "remove": []
}

Parameter notes:

• add: names of the Vtubers to be added to the manageable list
• remove: names of the Vtubers to be removed from the manageable list

If remove contains an element that also appears in add, that element will not be added to the manageable list
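The documented semantics (an entry in remove wins over the same entry in add) can be sketched as follows; the function name and sample Vtuber names are ours, not the server's:

```python
def apply_change(current: list, add: list, remove: list) -> list:
    """Apply an add/remove change to a manageable-Vtuber list.

    Names listed in `remove` are dropped, and a name that appears in
    both `add` and `remove` is not added (remove wins, as documented).
    """
    removed = set(remove)
    result = [v for v in current if v not in removed]
    result += [v for v in add if v not in removed and v not in result]
    return result

print(apply_change(["fubuki"], ["matsuri", "haato"], ["matsuri"]))
# → ['fubuki', 'haato']
```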

Response

On success, the server returns 200 OK

On failure, the server may return either of the following error codes:

• 403: insufficient permissions
• 500: internal server error (time to file an issue)

In either case, the response body contains a message in the following format

{
  "msg": "<reason for the failure>"
}
    Visit original content creator repository https://github.com/vbup-osc/vtbtn-server
  • Sentiment_Analysis_using_Machine_Learning_and_Deep_Learning

    Machine Learning Algorithms and Fine Tuning

    • Machine learning algorithms are applied and fine-tuned, using a pipeline together with hyperparameter search as the fine-tuning example.

    Twitter sentiment analysis is performed by using dimensionality reduction techniques in ML algorithms

    • ML algorithms are applied using a pipeline, with hyperparameter search as the fine-tuning example.
    • Detailed information can be found via course link: https://bit.ly/intro_nlp

    IMDB Movie Reviews Sentiment Analysis with TF-IDF

    • Using a pipeline with GridSearchCV to fine-tune hyperparameters, Logistic Regression and SVM models are trained to predict the sentiment of a movie review. The model is then saved and loaded again to test it on new text.
    • Pickle is used to save and load the model.
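As a hedged sketch of that workflow (the toy corpus and parameter grid below are illustrative, not the repository's actual data or settings):

```python
import pickle

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Tiny illustrative corpus; the real project uses the IMDB reviews dataset.
texts = ["great movie", "terrible film", "loved it", "hated it",
         "wonderful acting", "awful plot", "a great watch", "truly terrible"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Fine-tune hyperparameters with a grid search over the pipeline.
grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=2)
grid.fit(texts, labels)

# Save the best model with pickle and load it back to score new text.
restored = pickle.loads(pickle.dumps(grid.best_estimator_))
print(restored.predict(["great acting"])[0])
```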

    Sentiment Analysis with Deep Learning using ANN and CNN in Tensorflow

    • Simple Artificial Neural Network (ANN) and 1D Convolutional Neural Network (CNN) models are used to predict sentiment for IMDB movie reviews, which is a binary classification problem.
    • Spacy en_core_web_lg pretrained model (https://spacy.io/models/en) is used to convert texts to vectors in dataframes.
    • Feature scaling is applied to X_train and X_test using MinMaxScaler before the inputs are fed into the neural network.

    Visit original content creator repository
    https://github.com/seroetr/Sentiment_Analysis_using_Machine_Learning_and_Deep_Learning