{"_id": "q_0", "text": "str->list\n Convert XML to URL List.\n From Biligrab."} {"_id": "q_1", "text": "Downloads Sina videos by URL."} {"_id": "q_2", "text": "Format text with color or other effects into ANSI escaped string."} {"_id": "q_3", "text": "Print a log message to standard error."} {"_id": "q_4", "text": "Print an error log message."} {"_id": "q_5", "text": "Detect operating system."} {"_id": "q_6", "text": "str->None"} {"_id": "q_7", "text": "str->dict\n Information for CKPlayer API content."} {"_id": "q_8", "text": "str->list of str\n Give you the real URLs."} {"_id": "q_9", "text": "Converts a string to a valid filename."} {"_id": "q_10", "text": "Downloads CBS videos by URL."} {"_id": "q_11", "text": "Override the original one\n Ugly ugly dirty hack"} {"_id": "q_12", "text": "str, str, str, bool, bool ->None\n\n Download Acfun video by vid.\n\n Call Acfun API, decide which site to use, and pass the job to its\n extractor."} {"_id": "q_13", "text": "Scans through a string for substrings matched some patterns.\n\n Args:\n text: A string to be scanned.\n patterns: a list of regex pattern.\n\n Returns:\n a list if matched. 
empty if not."} {"_id": "q_14", "text": "Parses the query string of a URL and returns the value of a parameter.\n\n Args:\n url: A URL.\n param: A string representing the name of the parameter.\n\n Returns:\n The value of the parameter."} {"_id": "q_15", "text": "Post the content of a URL via sending a HTTP POST request.\n\n Args:\n url: A URL.\n headers: Request headers used by the client.\n decoded: Whether decode the response body using UTF-8 or the charset specified in Content-Type.\n\n Returns:\n The content as a string."} {"_id": "q_16", "text": "Parses host name and port number from a string."} {"_id": "q_17", "text": "str->str"} {"_id": "q_18", "text": "JSON, int, int, int->str\n \n Get a proper title with courseid+topicID+partID."} {"_id": "q_19", "text": "int->None\n \n Download a WHOLE course.\n Reuse the API call to save time."} {"_id": "q_20", "text": "int, int, int->None\n \n Download ONE PART of the course."} {"_id": "q_21", "text": "Checks if a task is either queued or running in this executor\n\n :param task_instance: TaskInstance\n :return: True if the task is known to this executor"} {"_id": "q_22", "text": "Returns and flush the event buffer. In case dag_ids is specified\n it will only return and flush events for the given dag_ids. 
Otherwise\n it returns and flushes all\n\n :param dag_ids: to dag_ids to return events for, if None returns all\n :return: a dict of events"} {"_id": "q_23", "text": "Returns a snowflake.connection object"} {"_id": "q_24", "text": "returns aws_access_key_id, aws_secret_access_key\n from extra\n\n intended to be used by external import and export statements"} {"_id": "q_25", "text": "Executes SQL using psycopg2 copy_expert method.\n Necessary to execute COPY command without access to a superuser.\n\n Note: if this method is called with a \"COPY FROM\" statement and\n the specified input file does not exist, it creates an empty\n file and no data is loaded, but the operation succeeds.\n So if users want to be aware when the input file does not exist,\n they have to check its existence by themselves."} {"_id": "q_26", "text": "Uploads the file to Google cloud storage"} {"_id": "q_27", "text": "Runs forever, monitoring the child processes of @gunicorn_master_proc and\n restarting workers occasionally.\n Each iteration of the loop traverses one edge of this state transition\n diagram, where each state (node) represents\n [ num_ready_workers_running / num_workers_running ]. We expect most time to\n be spent in [n / n]. `bs` is the setting webserver.worker_refresh_batch_size.\n The horizontal transition at ? 
happens after the new worker parses all the\n dags (so it could take a while!)\n V \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n [n / n] \u2500\u2500TTIN\u2500\u2500> [ [n, n+bs) / n + bs ] \u2500\u2500\u2500\u2500?\u2500\u2500\u2500> [n + bs / n + bs] \u2500\u2500TTOU\u2500\u2518\n ^ ^\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500v\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500 [ [0, n) / n ] <\u2500\u2500\u2500 start\n We change the number of workers by sending TTIN and TTOU to the gunicorn\n master process, which increases and decreases the number of child workers\n respectively. Gunicorn guarantees that on TTOU workers are terminated\n gracefully and that the oldest worker is terminated."} {"_id": "q_28", "text": "Translate a string or list of strings.\n\n See https://cloud.google.com/translate/docs/translating-text\n\n :type values: str or list\n :param values: String or list of strings to translate.\n\n :type target_language: str\n :param target_language: The language to translate results into. 
This\n is required by the API and defaults to\n the target language of the current instance.\n\n :type format_: str\n :param format_: (Optional) One of ``text`` or ``html``, to specify\n if the input text is plain text or HTML.\n\n :type source_language: str or None\n :param source_language: (Optional) The language of the text to\n be translated.\n\n :type model: str or None\n :param model: (Optional) The model used to translate the text, such\n as ``'base'`` or ``'nmt'``.\n\n :rtype: str or list\n :returns: A list of dictionaries for each queried value. Each\n dictionary typically contains three keys (though not\n all will be present in all cases)\n\n * ``detectedSourceLanguage``: The detected language (as an\n ISO 639-1 language code) of the text.\n * ``translatedText``: The translation of the text into the\n target language.\n * ``input``: The corresponding input value.\n * ``model``: The model used to translate the text.\n\n If only a single value is passed, then only a single\n dictionary will be returned.\n :raises: :class:`~exceptions.ValueError` if the number of\n values and translations differ."} {"_id": "q_29", "text": "Deletes a Cloud SQL instance.\n\n :param project_id: Project ID of the project that contains the instance. If set\n to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :param instance: Cloud SQL instance ID. This does not include the project ID.\n :type instance: str\n :return: None"} {"_id": "q_30", "text": "Retrieves a database resource from a Cloud SQL instance.\n\n :param instance: Database instance ID. This does not include the project ID.\n :type instance: str\n :param database: Name of the database in the instance.\n :type database: str\n :param project_id: Project ID of the project that contains the instance. 
If set\n to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: A Cloud SQL database resource, as described in\n https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/databases#resource.\n :rtype: dict"} {"_id": "q_31", "text": "Creates a new database inside a Cloud SQL instance.\n\n :param instance: Database instance ID. This does not include the project ID.\n :type instance: str\n :param body: The request body, as described in\n https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/databases/insert#request-body.\n :type body: dict\n :param project_id: Project ID of the project that contains the instance. If set\n to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"} {"_id": "q_32", "text": "Updates a database resource inside a Cloud SQL instance.\n\n This method supports patch semantics.\n See https://cloud.google.com/sql/docs/mysql/admin-api/how-tos/performance#patch.\n\n :param instance: Database instance ID. This does not include the project ID.\n :type instance: str\n :param database: Name of the database to be updated in the instance.\n :type database: str\n :param body: The request body, as described in\n https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/databases/insert#request-body.\n :type body: dict\n :param project_id: Project ID of the project that contains the instance. If set\n to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"} {"_id": "q_33", "text": "Exports data from a Cloud SQL instance to a Cloud Storage bucket as a SQL dump\n or CSV file.\n\n :param instance: Database instance ID of the Cloud SQL instance. 
This does not include the\n project ID.\n :type instance: str\n :param body: The request body, as described in\n https://cloud.google.com/sql/docs/mysql/admin-api/v1beta4/instances/export#request-body\n :type body: dict\n :param project_id: Project ID of the project that contains the instance. If set\n to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"} {"_id": "q_34", "text": "Returns version of the Cloud SQL Proxy."} {"_id": "q_35", "text": "Create connection in the Connection table, according to whether it uses\n proxy, TCP, UNIX sockets, SSL. Connection ID will be randomly generated.\n\n :param session: Session of the SQL Alchemy ORM (automatically generated with\n decorator)."} {"_id": "q_36", "text": "Retrieves the dynamically created connection from the Connection table.\n\n :param session: Session of the SQL Alchemy ORM (automatically generated with\n decorator)."} {"_id": "q_37", "text": "Delete the dynamically created connection from the Connection table.\n\n :param session: Session of the SQL Alchemy ORM (automatically generated with\n decorator)."} {"_id": "q_38", "text": "Retrieve Cloud SQL Proxy runner. 
It is used to manage the proxy\n lifecycle per task.\n\n :return: The Cloud SQL Proxy runner.\n :rtype: CloudSqlProxyRunner"} {"_id": "q_39", "text": "Reserve free TCP port to be used by Cloud SQL Proxy"} {"_id": "q_40", "text": "Replaces invalid MLEngine job_id characters with '_'.\n\n This also adds a leading 'z' in case job_id starts with an invalid\n character.\n\n Args:\n job_id: A job_id str that may have invalid characters.\n\n Returns:\n A valid job_id representation."} {"_id": "q_41", "text": "Extract error code from ftp exception"} {"_id": "q_42", "text": "Remove any existing DAG runs for the perf test DAGs."} {"_id": "q_43", "text": "Toggle the pause state of the DAGs in the test."} {"_id": "q_44", "text": "Override the scheduler heartbeat to determine when the test is complete"} {"_id": "q_45", "text": "Creates the directory specified by path, creating intermediate directories\n as necessary. If directory already exists, this is a no-op.\n\n :param path: The directory to create\n :type path: str\n :param mode: The mode to give to the directory e.g. 0o755, ignores umask\n :type mode: int"} {"_id": "q_46", "text": "Make a naive datetime.datetime in a given time zone aware.\n\n :param value: datetime\n :param timezone: timezone\n :return: localized datetime in settings.TIMEZONE or timezone"} {"_id": "q_47", "text": "Make an aware datetime.datetime naive in a given time zone.\n\n :param value: datetime\n :param timezone: timezone\n :return: naive datetime"} {"_id": "q_48", "text": "Wrapper around datetime.datetime that adds settings.TIMEZONE if tzinfo not specified\n\n :return: datetime.datetime"} {"_id": "q_49", "text": "Establish a connection to druid broker."} {"_id": "q_50", "text": "Returns http session for use with requests\n\n :param headers: additional headers to be passed through as a dictionary\n :type headers: dict"} {"_id": "q_51", "text": "Performs the request\n\n :param endpoint: the endpoint to be called i.e. 
resource/v1/query?\n :type endpoint: str\n :param data: payload to be uploaded or request parameters\n :type data: dict\n :param headers: additional headers to be passed through as a dictionary\n :type headers: dict\n :param extra_options: additional options to be used when executing the request\n i.e. {'check_response': False} to avoid checking raising exceptions on non\n 2XX or 3XX status codes\n :type extra_options: dict"} {"_id": "q_52", "text": "Checks the status code and raise an AirflowException exception on non 2XX or 3XX\n status codes\n\n :param response: A requests response object\n :type response: requests.response"} {"_id": "q_53", "text": "Grabs extra options like timeout and actually runs the request,\n checking for the result\n\n :param session: the session to be used to execute the request\n :type session: requests.Session\n :param prepped_request: the prepared request generated in run()\n :type prepped_request: session.prepare_request\n :param extra_options: additional options to be used when executing the request\n i.e. 
{'check_response': False} to avoid checking for and raising exceptions on non 2XX\n or 3XX status codes\n :type extra_options: dict"} {"_id": "q_54", "text": "Contextmanager that will create and tear down a session."} {"_id": "q_55", "text": "Parses some DatabaseError to provide a better error message"} {"_id": "q_56", "text": "Get a pandas dataframe from a sql query."} {"_id": "q_57", "text": "A generic way to insert a set of tuples into a table.\n\n :param table: Name of the target table\n :type table: str\n :param rows: The rows to insert into the table\n :type rows: iterable of tuples\n :param target_fields: The names of the columns to fill in the table\n :type target_fields: iterable of strings"} {"_id": "q_58", "text": "Return a cosmos db client."} {"_id": "q_59", "text": "Checks if a collection exists in CosmosDB."} {"_id": "q_60", "text": "Creates a new collection in the CosmosDB database."} {"_id": "q_61", "text": "Creates a new database in CosmosDB."} {"_id": "q_62", "text": "Deletes an existing database in CosmosDB."} {"_id": "q_63", "text": "Insert a list of new documents into an existing collection in the CosmosDB database."} {"_id": "q_64", "text": "Get a list of documents from an existing collection in the CosmosDB database via SQL query."} {"_id": "q_65", "text": "Returns the Cloud Function with the given name.\n\n :param name: Name of the function.\n :type name: str\n :return: A Cloud Functions object representing the function.\n :rtype: dict"} {"_id": "q_66", "text": "Creates a new function in Cloud Functions in the location specified in the body.\n\n :param location: The location of the function.\n :type location: str\n :param body: The body required by the Cloud Functions insert API.\n :type body: dict\n :param project_id: Optional, Google Cloud Project project_id where the function belongs.\n If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"} {"_id": "q_67", "text": "Updates Cloud 
Functions according to the specified update mask.\n\n :param name: The name of the function.\n :type name: str\n :param body: The body required by the cloud function patch API.\n :type body: dict\n :param update_mask: The update mask - array of fields that should be patched.\n :type update_mask: [str]\n :return: None"} {"_id": "q_68", "text": "Uploads zip file with sources.\n\n :param location: The location where the function is created.\n :type location: str\n :param zip_path: The path of the valid .zip file to upload.\n :type zip_path: str\n :param project_id: Optional, Google Cloud Project project_id where the function belongs.\n If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: The upload URL that was returned by generateUploadUrl method."} {"_id": "q_69", "text": "Wrapper around the private _get_dep_statuses method that contains some global\n checks for all dependencies.\n\n :param ti: the task instance to get the dependency status for\n :type ti: airflow.models.TaskInstance\n :param session: database session\n :type session: sqlalchemy.orm.session.Session\n :param dep_context: the context for which this dependency should be evaluated for\n :type dep_context: DepContext"} {"_id": "q_70", "text": "Returns whether or not this dependency is met for a given task instance. 
A\n dependency is considered met if all of the dependency statuses it reports are\n passing.\n\n :param ti: the task instance to see if this dependency is met for\n :type ti: airflow.models.TaskInstance\n :param session: database session\n :type session: sqlalchemy.orm.session.Session\n :param dep_context: The context this dependency is being checked under that stores\n state that can be used by this dependency.\n :type dep_context: BaseDepContext"} {"_id": "q_71", "text": "Returns an iterable of strings that explain why this dependency wasn't met.\n\n :param ti: the task instance to see if this dependency is met for\n :type ti: airflow.models.TaskInstance\n :param session: database session\n :type session: sqlalchemy.orm.session.Session\n :param dep_context: The context this dependency is being checked under that stores\n state that can be used by this dependency.\n :type dep_context: BaseDepContext"} {"_id": "q_72", "text": "Parses a config file for s3 credentials. Can currently\n parse boto, s3cmd.conf and AWS SDK config formats\n\n :param config_file_name: path to the config file\n :type config_file_name: str\n :param config_format: config type. 
One of \"boto\", \"s3cmd\" or \"aws\".\n Defaults to \"boto\"\n :type config_format: str\n :param profile: profile name in AWS type config file\n :type profile: str"} {"_id": "q_73", "text": "Ensure all logging output has been flushed"} {"_id": "q_74", "text": "If the path contains a folder with a .zip suffix, then\n the folder is treated as a zip archive and path to zip is returned."} {"_id": "q_75", "text": "Traverse a directory and look for Python files.\n\n :param directory: the directory to traverse\n :type directory: unicode\n :param safe_mode: whether to use a heuristic to determine whether a file\n contains Airflow DAG definitions\n :return: a list of paths to Python files in the specified directory\n :rtype: list[unicode]"} {"_id": "q_76", "text": "Launch DagFileProcessorManager processor and start DAG parsing loop in manager."} {"_id": "q_77", "text": "Send termination signal to DAG parsing processor manager\n and expect it to terminate all DAG file processors."} {"_id": "q_78", "text": "Use multiple processes to parse and generate tasks for the\n DAGs in parallel. By processing them in separate processes,\n we can get parallelism and isolation from potentially harmful\n user code."} {"_id": "q_79", "text": "Parse DAG files repeatedly in a standalone loop."} {"_id": "q_80", "text": "Refresh file paths from dag dir if we haven't done it for too long."} {"_id": "q_81", "text": "Occasionally print out stats about how fast the files are getting processed"} {"_id": "q_82", "text": "Sleeps until all the processors are done."} {"_id": "q_83", "text": "This should be periodically called by the manager loop. 
This method will\n kick off new processes to process DAG definition files and read the\n results from the finished processors.\n\n :return: a list of SimpleDags that were produced by processors that\n have finished since the last time this was called\n :rtype: list[airflow.utils.dag_processing.SimpleDag]"} {"_id": "q_84", "text": "Opens a ssh connection to the remote host.\n\n :rtype: paramiko.client.SSHClient"} {"_id": "q_85", "text": "Gets the latest state of a long-running operation in Google Storage\n Transfer Service.\n\n :param job_name: (Required) Name of the job to be fetched\n :type job_name: str\n :param project_id: (Optional) the ID of the project that owns the Transfer\n Job. If set to None or missing, the default project_id from the GCP\n connection is used.\n :type project_id: str\n :return: Transfer Job\n :rtype: dict"} {"_id": "q_86", "text": "Lists long-running operations in Google Storage Transfer\n Service that match the specified filter.\n\n :param filter: (Required) A request filter, as described in\n https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferJobs/list#body.QUERY_PARAMETERS.filter\n :type filter: dict\n :return: List of Transfer Jobs\n :rtype: list[dict]"} {"_id": "q_87", "text": "Cancels an transfer operation in Google Storage Transfer Service.\n\n :param operation_name: Name of the transfer operation.\n :type operation_name: str\n :rtype: None"} {"_id": "q_88", "text": "Pauses an transfer operation in Google Storage Transfer Service.\n\n :param operation_name: (Required) Name of the transfer operation.\n :type operation_name: str\n :rtype: None"} {"_id": "q_89", "text": "Waits until the job reaches the expected state.\n\n :param job: Transfer job\n See:\n https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferJobs#TransferJob\n :type job: dict\n :param expected_statuses: State that is expected\n See:\n https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferOperations#Status\n 
:type expected_statuses: set[str]\n :param timeout:\n :type timeout: time in which the operation must end in seconds\n :rtype: None"} {"_id": "q_90", "text": "Returns the number of slots open at the moment"} {"_id": "q_91", "text": "Runs command and returns stdout"} {"_id": "q_92", "text": "Remove an option if it exists in config from a file or\n default config. If both of config have the same option, this removes\n the option in both configs unless remove_default=False."} {"_id": "q_93", "text": "Allocate IDs for incomplete keys.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/rest/v1/projects/allocateIds\n\n :param partial_keys: a list of partial keys.\n :type partial_keys: list\n :return: a list of full keys.\n :rtype: list"} {"_id": "q_94", "text": "Commit a transaction, optionally creating, deleting or modifying some entities.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/rest/v1/projects/commit\n\n :param body: the body of the commit request.\n :type body: dict\n :return: the response body of the commit request.\n :rtype: dict"} {"_id": "q_95", "text": "Lookup some entities by key.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/rest/v1/projects/lookup\n\n :param keys: the keys to lookup.\n :type keys: list\n :param read_consistency: the read consistency to use. default, strong or eventual.\n Cannot be used with a transaction.\n :type read_consistency: str\n :param transaction: the transaction to use, if any.\n :type transaction: str\n :return: the response body of the lookup request.\n :rtype: dict"} {"_id": "q_96", "text": "Roll back a transaction.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/rest/v1/projects/rollback\n\n :param transaction: the transaction to roll back.\n :type transaction: str"} {"_id": "q_97", "text": "Gets the latest state of a long-running operation.\n\n .. 
seealso::\n https://cloud.google.com/datastore/docs/reference/data/rest/v1/projects.operations/get\n\n :param name: the name of the operation resource.\n :type name: str\n :return: a resource operation instance.\n :rtype: dict"} {"_id": "q_98", "text": "Deletes the long-running operation.\n\n .. seealso::\n https://cloud.google.com/datastore/docs/reference/data/rest/v1/projects.operations/delete\n\n :param name: the name of the operation resource.\n :type name: str\n :return: none if successful.\n :rtype: dict"} {"_id": "q_99", "text": "Poll backup operation state until it's completed.\n\n :param name: the name of the operation resource\n :type name: str\n :param polling_interval_in_seconds: The number of seconds to wait before calling another request.\n :type polling_interval_in_seconds: int\n :return: a resource operation instance.\n :rtype: dict"} {"_id": "q_100", "text": "Publish a message to a topic or an endpoint.\n\n :param target_arn: either a TopicArn or an EndpointArn\n :type target_arn: str\n :param message: the default message you want to send\n :param message: str"} {"_id": "q_101", "text": "Retrieves connection to Cloud Natural Language service.\n\n :return: Cloud Natural Language service object\n :rtype: google.cloud.language_v1.LanguageServiceClient"} {"_id": "q_102", "text": "Finds named entities in the text along with entity types,\n salience, mentions for each entity, and other properties.\n\n :param document: Input document.\n If a dict is provided, it must be of the same form as the protobuf message Document\n :type document: dict or class google.cloud.language_v1.types.Document\n :param encoding_type: The encoding type used by the API to calculate offsets.\n :type encoding_type: google.cloud.language_v1.types.EncodingType\n :param retry: A retry object used to retry requests. 
If None is specified, requests will not be\n retried.\n :type retry: google.api_core.retry.Retry\n :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if\n retry is specified, the timeout applies to each individual attempt.\n :type timeout: float\n :param metadata: Additional metadata that is provided to the method.\n :type metadata: sequence[tuple[str, str]]]\n :rtype: google.cloud.language_v1.types.AnalyzeEntitiesResponse"} {"_id": "q_103", "text": "Classifies a document into categories.\n\n :param document: Input document.\n If a dict is provided, it must be of the same form as the protobuf message Document\n :type document: dict or class google.cloud.language_v1.types.Document\n :param retry: A retry object used to retry requests. If None is specified, requests will not be\n retried.\n :type retry: google.api_core.retry.Retry\n :param timeout: The amount of time, in seconds, to wait for the request to complete. Note that if\n retry is specified, the timeout applies to each individual attempt.\n :type timeout: float\n :param metadata: Additional metadata that is provided to the method.\n :type metadata: sequence[tuple[str, str]]]\n :rtype: google.cloud.language_v1.types.AnalyzeEntitiesResponse"} {"_id": "q_104", "text": "Gets template fields for specific operator class.\n\n :param fullname: Full path to operator class.\n For example: ``airflow.contrib.operators.gcp_vision_operator.CloudVisionProductSetCreateOperator``\n :return: List of template field\n :rtype: list[str]"} {"_id": "q_105", "text": "A role that allows you to include a list of template fields in the middle of the text. 
This is especially\n useful when writing guides describing how to use the operator.\n The result is a list of fields where each field is shorted in the literal block.\n\n Sample usage::\n\n :template-fields:`airflow.contrib.operators.gcp_natural_language_operator.CloudLanguageAnalyzeSentimentOperator`\n\n For further information look at:\n\n * [http://docutils.sourceforge.net/docs/howto/rst-roles.html](Creating reStructuredText Interpreted\n Text Roles)"} {"_id": "q_106", "text": "Properly close pooled database connections"} {"_id": "q_107", "text": "Ensures that certain subfolders of AIRFLOW_HOME are on the classpath"} {"_id": "q_108", "text": "Gets the returned Celery result from the Airflow task\n ID provided to the sensor, and returns True if the\n celery result has been finished execution.\n\n :param context: Airflow's execution context\n :type context: dict\n :return: True if task has been executed, otherwise False\n :rtype: bool"} {"_id": "q_109", "text": "Return true if the ticket cache contains \"conf\" information as is found\n in ticket caches of Kerberos 1.8.1 or later. 
This is incompatible with the\n Sun Java Krb5LoginModule in Java6, so we need to take an action to work\n around it."} {"_id": "q_110", "text": "Transforms a SQLAlchemy model instance into a dictionary"} {"_id": "q_111", "text": "Reduce the given list of items by splitting it into chunks\n of the given size and passing each chunk through the reducer"} {"_id": "q_112", "text": "Given a number of tasks, builds a dependency chain.\n\n chain(task_1, task_2, task_3, task_4)\n\n is equivalent to\n\n task_1.set_downstream(task_2)\n task_2.set_downstream(task_3)\n task_3.set_downstream(task_4)"} {"_id": "q_113", "text": "Returns a pretty ascii table from tuples\n\n If namedtuple are used, the table will have headers"} {"_id": "q_114", "text": "Returns a Google Cloud Dataproc service object."} {"_id": "q_115", "text": "Awaits for Google Cloud Dataproc Operation to complete."} {"_id": "q_116", "text": "Handles the Airflow + Databricks lifecycle logic for a Databricks operator\n\n :param operator: Databricks operator being handled\n :param context: Airflow context"} {"_id": "q_117", "text": "Run an pig script using the pig cli\n\n >>> ph = PigCliHook()\n >>> result = ph.run_cli(\"ls /;\")\n >>> (\"hdfs://\" in result)\n True"} {"_id": "q_118", "text": "Fetch and return the state of the given celery task. 
The scope of this function is\n global so that it can be called by subprocesses in the pool.\n\n :param celery_task: a tuple of the Celery task key and the async Celery object used\n to fetch the task's state\n :type celery_task: tuple(str, celery.result.AsyncResult)\n :return: a tuple of the Celery task key and the Celery state of the task\n :rtype: tuple[str, str]"} {"_id": "q_119", "text": "How many Celery tasks should each worker process send.\n\n :return: Number of tasks that should be sent per process\n :rtype: int"} {"_id": "q_120", "text": "How many Celery tasks should be sent to each worker process.\n\n :return: Number of tasks that should be used per process\n :rtype: int"} {"_id": "q_121", "text": "Like a Python builtin dict object, setdefault returns the current value\n for a key, and if it isn't there, stores the default value and returns it.\n\n :param key: Dict key for this Variable\n :type key: str\n :param default: Default value to set and return if the variable\n isn't already in the DB\n :type default: Mixed\n :param deserialize_json: Store this as a JSON encoded value in the DB\n and un-encode it when retrieving a value\n :return: Mixed"} {"_id": "q_122", "text": "Launches a MLEngine job and wait for it to reach a terminal state.\n\n :param project_id: The Google Cloud project id within which MLEngine\n job will be launched.\n :type project_id: str\n\n :param job: MLEngine Job object that should be provided to the MLEngine\n API, such as: ::\n\n {\n 'jobId': 'my_job_id',\n 'trainingInput': {\n 'scaleTier': 'STANDARD_1',\n ...\n }\n }\n\n :type job: dict\n\n :param use_existing_job_fn: In case that a MLEngine job with the same\n job_id already exist, this method (if provided) will decide whether\n we should use this existing job, continue waiting for it to finish\n and returning the job object. It should accepts a MLEngine job\n object, and returns a boolean value indicating whether it is OK to\n reuse the existing job. 
If 'use_existing_job_fn' is not provided,\n we by default reuse the existing MLEngine job.\n :type use_existing_job_fn: function\n\n :return: The MLEngine job object if the job successfully reaches a\n terminal state (which might be FAILED or CANCELLED state).\n :rtype: dict"} {"_id": "q_123", "text": "Gets a MLEngine job based on the job name.\n\n :return: MLEngine job object if successful.\n :rtype: dict\n\n Raises:\n googleapiclient.errors.HttpError: if HTTP error is returned from server"} {"_id": "q_124", "text": "Create a Model. Blocks until finished."} {"_id": "q_125", "text": "Write batch items to dynamodb table with provisioned throughput capacity."} {"_id": "q_126", "text": "Integrate plugins to the context."} {"_id": "q_127", "text": "Creates a new instance of the configured executor if none exists and returns it"} {"_id": "q_128", "text": "Handles error callbacks when using Segment with segment_debug_mode set to True"} {"_id": "q_129", "text": "Returns a mssql connection object"} {"_id": "q_130", "text": "Delete all DB records related to the specified Dag."} {"_id": "q_131", "text": "Returns a JSON with a task's public instance variables."} {"_id": "q_132", "text": "Get all pools."} {"_id": "q_133", "text": "Delete pool."} {"_id": "q_134", "text": "Create a new container group\n\n :param resource_group: the name of the resource group\n :type resource_group: str\n :param name: the name of the container group\n :type name: str\n :param container_group: the properties of the container group\n :type container_group: azure.mgmt.containerinstance.models.ContainerGroup"} {"_id": "q_135", "text": "Get the state and exitcode of a container group\n\n :param resource_group: the name of the resource group\n :type resource_group: str\n :param name: the name of the container group\n :type name: str\n :return: A tuple with the state, exitcode, and details.\n If the exitcode is unknown 0 is returned.\n :rtype: tuple(state,exitcode,details)"} {"_id": "q_136", "text": "Builds 
an ingest query for an HDFS TSV load.\n\n :param static_path: The path on hdfs where the data is\n :type static_path: str\n :param columns: List of all the columns that are available\n :type columns: list"} {"_id": "q_137", "text": "Check for message on subscribed channels and write to xcom the message with key ``message``\n\n An example of message ``{'type': 'message', 'pattern': None, 'channel': b'test', 'data': b'hello'}``\n\n :param context: the context object\n :type context: dict\n :return: ``True`` if message (with type 'message') is available or ``False`` if not"} {"_id": "q_138", "text": "Returns a set of dag runs for the given search criteria.\n\n :param dag_id: the dag_id to find dag runs for\n :type dag_id: int, list\n :param run_id: defines the run id for this dag run\n :type run_id: str\n :param execution_date: the execution date\n :type execution_date: datetime.datetime\n :param state: the state of the dag run\n :type state: airflow.utils.state.State\n :param external_trigger: whether this dag run is externally triggered\n :type external_trigger: bool\n :param no_backfills: return no backfills (True), return all (False).\n Defaults to False\n :type no_backfills: bool\n :param session: database session\n :type session: sqlalchemy.orm.session.Session"} {"_id": "q_139", "text": "Returns the task instances for this dag run"} {"_id": "q_140", "text": "Returns the task instance specified by task_id for this dag run\n\n :param task_id: the task id"} {"_id": "q_141", "text": "The previous DagRun, if there is one"} {"_id": "q_142", "text": "The previous, SCHEDULED DagRun, if there is one"} {"_id": "q_143", "text": "Determines the overall state of the DagRun based on the state\n of its TaskInstances.\n\n :return: State"} {"_id": "q_144", "text": "Verifies the DagRun by checking for removed tasks or tasks that are not in the\n database yet.
It will set state to removed or add the task if required."} {"_id": "q_145", "text": "We need to get the headers in addition to the response body\n to get the location from them\n This function uses the jenkins_request method from the python-jenkins library\n with just the return call changed\n\n :param jenkins_server: The server to query\n :param req: The request to execute\n :return: Dict containing the response body (key body)\n and the headers coming along (headers)"} {"_id": "q_146", "text": "Given a context, this function provides a dictionary of values that can be used to\n externally reconstruct relations between dags, dag_runs, tasks and task_instances.\n Defaults to abc.def.ghi format and can be made to ABC_DEF_GHI format if\n in_env_var_format is set to True.\n\n :param context: The context for the task_instance of interest.\n :type context: dict\n :param in_env_var_format: If returned vars should be in ABC_DEF_GHI format.\n :type in_env_var_format: bool\n :return: task_instance context as dict."} {"_id": "q_147", "text": "Queries datadog for a specific metric, potentially with some\n function applied to it and returns the results.\n\n :param query: The datadog query to execute (see datadog docs)\n :type query: str\n :param from_seconds_ago: How many seconds ago to start querying for.\n :type from_seconds_ago: int\n :param to_seconds_ago: Up to how many seconds ago to query for.\n :type to_seconds_ago: int"} {"_id": "q_148", "text": "Fail given zombie tasks, which are tasks that haven't\n had a heartbeat for too long, in the current DagBag.\n\n :param zombies: zombie task instances to kill.\n :type zombies: airflow.utils.dag_processing.SimpleTaskInstance\n :param session: DB session.\n :type session: sqlalchemy.orm.session.Session"} {"_id": "q_149", "text": "Adds the DAG into the bag, recurses into sub dags.\n Throws AirflowDagCycleException if a cycle is detected in this dag or its subdags"} {"_id": "q_150", "text": "Given a file path or a folder, this method
looks for python modules,\n imports them and adds them to the dagbag collection.\n\n Note that if a ``.airflowignore`` file is found while processing\n the directory, it will behave much like a ``.gitignore``,\n ignoring files that match any of the regex patterns specified\n in the file.\n\n **Note**: The patterns in .airflowignore are treated as\n un-anchored regexes, not shell-like glob patterns."} {"_id": "q_151", "text": "Prints a report around DagBag loading stats"} {"_id": "q_152", "text": "Add or subtract days from a YYYY-MM-DD\n\n :param ds: anchor date in ``YYYY-MM-DD`` format to add to\n :type ds: str\n :param days: number of days to add to the ds, you can use negative values\n :type days: int\n\n >>> ds_add('2015-01-01', 5)\n '2015-01-06'\n >>> ds_add('2015-01-06', -5)\n '2015-01-01'"} {"_id": "q_153", "text": "Takes an input string and outputs another string\n as specified in the output format\n\n :param ds: input string which contains a date\n :type ds: str\n :param input_format: input string format. E.g. %Y-%m-%d\n :type input_format: str\n :param output_format: output string format E.g. 
%Y-%m-%d\n :type output_format: str\n\n >>> ds_format('2015-01-01', \"%Y-%m-%d\", \"%m-%d-%y\")\n '01-01-15'\n >>> ds_format('1/5/2015', \"%m/%d/%Y\", \"%Y-%m-%d\")\n '2015-01-05'"} {"_id": "q_154", "text": "Poke for matching files in a directory using self.regex\n\n :return: Bool depending on the search criteria"} {"_id": "q_155", "text": "Clears a set of task instances, but makes sure the running ones\n get killed.\n\n :param tis: a list of task instances\n :param session: current session\n :param activate_dag_runs: flag to check for active dag run\n :param dag: DAG object"} {"_id": "q_156", "text": "Return the try number that this task will have when it is actually\n run.\n\n If the TI is currently running, this will match the column in the\n database, in all other cases this will be incremented"} {"_id": "q_157", "text": "Generates the shell command required to execute this task instance.\n\n :param dag_id: DAG ID\n :type dag_id: unicode\n :param task_id: Task ID\n :type task_id: unicode\n :param execution_date: Execution date for the task\n :type execution_date: datetime\n :param mark_success: Whether to mark the task as successful\n :type mark_success: bool\n :param ignore_all_deps: Ignore all ignorable dependencies.\n Overrides the other ignore_* parameters.\n :type ignore_all_deps: bool\n :param ignore_depends_on_past: Ignore depends_on_past parameter of DAGs\n (e.g.
for Backfills)\n :type ignore_depends_on_past: bool\n :param ignore_task_deps: Ignore task-specific dependencies such as depends_on_past\n and trigger rule\n :type ignore_task_deps: bool\n :param ignore_ti_state: Ignore the task instance's previous failure/success\n :type ignore_ti_state: bool\n :param local: Whether to run the task locally\n :type local: bool\n :param pickle_id: If the DAG was serialized to the DB, the ID\n associated with the pickled DAG\n :type pickle_id: unicode\n :param file_path: path to the file containing the DAG definition\n :param raw: raw mode (needs more details)\n :param job_id: job ID (needs more details)\n :param pool: the Airflow pool that the task should run in\n :type pool: unicode\n :param cfg_path: the path to the configuration file\n :type cfg_path: basestring\n :return: shell command that can be used to run the task instance"} {"_id": "q_158", "text": "Get the very latest state from the database. If a session is passed,\n we use it and looking up the state becomes part of the session; otherwise\n a new session is used."} {"_id": "q_159", "text": "Forces the task instance's state to FAILED in the database."} {"_id": "q_160", "text": "Clears all XCom data from the database for the task instance"} {"_id": "q_161", "text": "Returns a tuple that identifies the task instance uniquely"} {"_id": "q_162", "text": "Pull XComs that optionally meet certain criteria.\n\n The default value for `key` limits the search to XComs\n that were returned by other tasks (as opposed to those that were pushed\n manually). To remove this filter, pass key=None (or any desired value).\n\n If a single task_id string is provided, the result is the value of the\n most recent matching XCom from that task_id. If multiple task_ids are\n provided, a tuple of matching values is returned. None is returned\n whenever no matches are found.\n\n :param key: A key for the XCom. If provided, only XComs with matching\n keys will be returned.
The default key is 'return_value', also\n available as a constant XCOM_RETURN_KEY. This key is automatically\n given to XComs returned by tasks (as opposed to being pushed\n manually). To remove the filter, pass key=None.\n :type key: str\n :param task_ids: Only XComs from tasks with matching ids will be\n pulled. Can pass None to remove the filter.\n :type task_ids: str or iterable of strings (representing task_ids)\n :param dag_id: If provided, only pulls XComs from this DAG.\n If None (default), the DAG of the calling task is used.\n :type dag_id: str\n :param include_prior_dates: If False, only XComs from the current\n execution_date are returned. If True, XComs from previous dates\n are returned as well.\n :type include_prior_dates: bool"} {"_id": "q_163", "text": "Sets the log context."} {"_id": "q_164", "text": "Retrieves connection to Google Compute Engine.\n\n :return: Google Compute Engine services object\n :rtype: dict"} {"_id": "q_165", "text": "Sets machine type of an instance defined by project_id, zone and resource_id.\n Must be called with keyword arguments rather than positional.\n\n :param zone: Google Cloud Platform zone where the instance exists.\n :type zone: str\n :param resource_id: Name of the Compute Engine instance resource\n :type resource_id: str\n :param body: Body required by the Compute Engine setMachineType API,\n as described in\n https://cloud.google.com/compute/docs/reference/rest/v1/instances/setMachineType\n :type body: dict\n :param project_id: Optional, Google Cloud Platform project ID where the\n Compute Engine Instance exists. 
If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"} {"_id": "q_166", "text": "Retrieves instance template by project_id and resource_id.\n Must be called with keyword arguments rather than positional.\n\n :param resource_id: Name of the instance template\n :type resource_id: str\n :param project_id: Optional, Google Cloud Platform project ID where the\n Compute Engine Instance exists. If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :return: Instance template representation as object according to\n https://cloud.google.com/compute/docs/reference/rest/v1/instanceTemplates\n :rtype: dict"} {"_id": "q_167", "text": "Inserts instance template using body specified\n Must be called with keyword arguments rather than positional.\n\n :param body: Instance template representation as object according to\n https://cloud.google.com/compute/docs/reference/rest/v1/instanceTemplates\n :type body: dict\n :param request_id: Optional, unique request_id that you might add to achieve\n full idempotence (for example when client call times out repeating the request\n with the same request id will not create a new instance template again)\n It should be in UUID format as defined in RFC 4122\n :type request_id: str\n :param project_id: Optional, Google Cloud Platform project ID where the\n Compute Engine Instance exists. 
If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"} {"_id": "q_168", "text": "Retrieves Instance Group Manager by project_id, zone and resource_id.\n Must be called with keyword arguments rather than positional.\n\n :param zone: Google Cloud Platform zone where the Instance Group Manager exists\n :type zone: str\n :param resource_id: Name of the Instance Group Manager\n :type resource_id: str\n :param project_id: Optional, Google Cloud Platform project ID where the\n Compute Engine Instance exists. If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :return: Instance group manager representation as object according to\n https://cloud.google.com/compute/docs/reference/rest/beta/instanceGroupManagers\n :rtype: dict"} {"_id": "q_169", "text": "Patches Instance Group Manager with the specified body.\n Must be called with keyword arguments rather than positional.\n\n :param zone: Google Cloud Platform zone where the Instance Group Manager exists\n :type zone: str\n :param resource_id: Name of the Instance Group Manager\n :type resource_id: str\n :param body: Instance Group Manager representation as json-merge-patch object\n according to\n https://cloud.google.com/compute/docs/reference/rest/beta/instanceTemplates/patch\n :type body: dict\n :param request_id: Optional, unique request_id that you might add to achieve\n full idempotence (for example when client call times out repeating the request\n with the same request id will not create a new instance template again).\n It should be in UUID format as defined in RFC 4122\n :type request_id: str\n :param project_id: Optional, Google Cloud Platform project ID where the\n Compute Engine Instance exists. 
If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"} {"_id": "q_170", "text": "Waits for the named operation to complete - checks status of the async call.\n\n :param operation_name: name of the operation\n :type operation_name: str\n :param zone: optional region of the request (might be None for global operations)\n :type zone: str\n :return: None"} {"_id": "q_171", "text": "Check if bucket_name exists.\n\n :param bucket_name: the name of the bucket\n :type bucket_name: str"} {"_id": "q_172", "text": "Checks that a prefix exists in a bucket\n\n :param bucket_name: the name of the bucket\n :type bucket_name: str\n :param prefix: a key prefix\n :type prefix: str\n :param delimiter: the delimiter marks key hierarchy.\n :type delimiter: str"} {"_id": "q_173", "text": "Lists prefixes in a bucket under prefix\n\n :param bucket_name: the name of the bucket\n :type bucket_name: str\n :param prefix: a key prefix\n :type prefix: str\n :param delimiter: the delimiter marks key hierarchy.\n :type delimiter: str\n :param page_size: pagination size\n :type page_size: int\n :param max_items: maximum items to return\n :type max_items: int"} {"_id": "q_174", "text": "Returns a boto3.s3.Object\n\n :param key: the path to the key\n :type key: str\n :param bucket_name: the name of the bucket\n :type bucket_name: str"} {"_id": "q_175", "text": "Reads a key from S3\n\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which the file is stored\n :type bucket_name: str"} {"_id": "q_176", "text": "Loads a local file to S3\n\n :param filename: name of the file to load.\n :type filename: str\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which to store the file\n :type bucket_name: str\n :param replace: A flag to decide whether or not to overwrite the key\n if it already exists. 
If replace is False and the key exists, an\n error will be raised.\n :type replace: bool\n :param encrypt: If True, the file will be encrypted on the server-side\n by S3 and will be stored in an encrypted form while at rest in S3.\n :type encrypt: bool"} {"_id": "q_177", "text": "Loads a string to S3\n\n This is provided as a convenience to drop a string in S3. It uses the\n boto infrastructure to ship a file to s3.\n\n :param string_data: str to set as content for the key.\n :type string_data: str\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which to store the file\n :type bucket_name: str\n :param replace: A flag to decide whether or not to overwrite the key\n if it already exists\n :type replace: bool\n :param encrypt: If True, the file will be encrypted on the server-side\n by S3 and will be stored in an encrypted form while at rest in S3.\n :type encrypt: bool"} {"_id": "q_178", "text": "Loads bytes to S3\n\n This is provided as a convenience to drop a string in S3. 
It uses the\n boto infrastructure to ship a file to s3.\n\n :param bytes_data: bytes to set as content for the key.\n :type bytes_data: bytes\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which to store the file\n :type bucket_name: str\n :param replace: A flag to decide whether or not to overwrite the key\n if it already exists\n :type replace: bool\n :param encrypt: If True, the file will be encrypted on the server-side\n by S3 and will be stored in an encrypted form while at rest in S3.\n :type encrypt: bool"} {"_id": "q_179", "text": "Loads a file object to S3\n\n :param file_obj: The file-like object to set as the content for the S3 key.\n :type file_obj: file-like object\n :param key: S3 key that will point to the file\n :type key: str\n :param bucket_name: Name of the bucket in which to store the file\n :type bucket_name: str\n :param replace: A flag that indicates whether to overwrite the key\n if it already exists.\n :type replace: bool\n :param encrypt: If True, S3 encrypts the file on the server,\n and the file is stored in encrypted form at rest in S3.\n :type encrypt: bool"} {"_id": "q_180", "text": "Creates a copy of an object that is already stored in S3.\n\n Note: the S3 connection used here needs to have access to both\n source and destination bucket/key.\n\n :param source_bucket_key: The key of the source object.\n\n It can be either full s3:// style url or relative path from root level.\n\n When it's specified as a full s3:// url, please omit source_bucket_name.\n :type source_bucket_key: str\n :param dest_bucket_key: The key of the object to copy to.\n\n The convention to specify `dest_bucket_key` is the same\n as `source_bucket_key`.\n :type dest_bucket_key: str\n :param source_bucket_name: Name of the S3 bucket where the source object is stored.\n\n It should be omitted when `source_bucket_key` is provided as a full s3:// url.\n :type source_bucket_name: str\n :param dest_bucket_name: Name of the S3 bucket to which the object is copied.\n\n It should be omitted when `dest_bucket_key` is provided as a full s3:// url.\n :type dest_bucket_name: str\n :param source_version_id: Version ID of the source object (OPTIONAL)\n :type source_version_id: str"} {"_id": "q_181", "text": "Queries cassandra and returns a cursor to the results."} {"_id": "q_182", "text": "Send an email with html content using sendgrid.\n\n To use this plugin:\n 0. include sendgrid subpackage as part of your Airflow installation, e.g.,\n pip install 'apache-airflow[sendgrid]'\n 1. update [email] backend in airflow.cfg, i.e.,\n [email]\n email_backend = airflow.contrib.utils.sendgrid.send_email\n 2. configure Sendgrid specific environment variables at all Airflow instances:\n SENDGRID_MAIL_FROM={your-mail-from}\n SENDGRID_API_KEY={your-sendgrid-api-key}."} {"_id": "q_183", "text": "Recognizes audio input\n\n :param config: information to the recognizer that specifies how to process the request.\n https://googleapis.github.io/google-cloud-python/latest/speech/gapic/v1/types.html#google.cloud.speech_v1.types.RecognitionConfig\n :type config: dict or google.cloud.speech_v1.types.RecognitionConfig\n :param audio: audio data to be recognized\n https://googleapis.github.io/google-cloud-python/latest/speech/gapic/v1/types.html#google.cloud.speech_v1.types.RecognitionAudio\n :type audio: dict or google.cloud.speech_v1.types.RecognitionAudio\n :param retry: (Optional) A retry object used to retry requests.
If None is specified,\n requests will not be retried.\n :type retry: google.api_core.retry.Retry\n :param timeout: (Optional) The amount of time, in seconds, to wait for the request to complete.\n Note that if retry is specified, the timeout applies to each individual attempt.\n :type timeout: float"} {"_id": "q_184", "text": "Check whether a potential object is a subclass of\n the AirflowPlugin class.\n\n :param plugin_obj: potential subclass of AirflowPlugin\n :param existing_plugins: Existing list of AirflowPlugin subclasses\n :return: Whether or not the obj is a valid subclass of\n AirflowPlugin"} {"_id": "q_185", "text": "Sets task instances to skipped from the same dag run.\n\n :param dag_run: the DagRun for which to set the tasks to skipped\n :param execution_date: execution_date\n :param tasks: tasks to skip (not task_ids)\n :param session: db session to use"} {"_id": "q_186", "text": "Upload a file to Azure Data Lake.\n\n :param local_path: local path. Can be single file, directory (in which case,\n upload recursively) or glob pattern. Recursive glob patterns using `**`\n are not supported.\n :type local_path: str\n :param remote_path: Remote path to upload to; if multiple files, this is the\n directory root to write within.\n :type remote_path: str\n :param nthreads: Number of threads to use. If None, uses the number of cores.\n :type nthreads: int\n :param overwrite: Whether to forcibly overwrite existing files/directories.\n If False and remote path is a directory, will quit regardless of whether any files\n would be overwritten. If True, only matching filenames are actually\n overwritten.\n :type overwrite: bool\n :param buffersize: int [2**22]\n Number of bytes for internal buffer. This block cannot be bigger than\n a chunk and cannot be smaller than a block.\n :type buffersize: int\n :param blocksize: int [2**22]\n Number of bytes for a block. Within each chunk, we write a smaller\n block for each API call.
This block cannot be bigger than a chunk.\n :type blocksize: int"} {"_id": "q_187", "text": "List files in Azure Data Lake Storage\n\n :param path: full path/globstring to use to list files in ADLS\n :type path: str"} {"_id": "q_188", "text": "Uncompress gz and bz2 files"} {"_id": "q_189", "text": "Decorates a function to execute the function and, at the same time, submit action_logging\n in CLI context. It will call action logger callbacks twice,\n one for pre-execution and the other one for post-execution.\n\n Action logger will be called with the following keyword parameters:\n sub_command : name of sub-command\n start_datetime : start datetime instance by utc\n end_datetime : end datetime instance by utc\n full_command : full command line arguments\n user : current user\n log : airflow.models.log.Log ORM instance\n dag_id : dag id (optional)\n task_id : task_id (optional)\n execution_date : execution date (optional)\n error : exception instance if there's an exception\n\n :param f: function instance\n :return: wrapped function"} {"_id": "q_190", "text": "Builds metrics dict from function args\n It assumes that the function arguments are from an airflow.bin.cli module function\n and include a Namespace instance that optionally contains \"dag_id\", \"task_id\",\n and \"execution_date\".\n\n :param func_name: name of function\n :param namespace: Namespace instance from argparse\n :return: dict with metrics"} {"_id": "q_191", "text": "Create the specified cgroup.\n\n :param path: The path of the cgroup to create.\n E.g.
cpu/mygroup/mysubgroup\n :return: the Node associated with the created cgroup.\n :rtype: cgroupspy.nodes.Node"} {"_id": "q_192", "text": "The purpose of this function is to be robust to improper connection\n settings provided by users, specifically in the host field.\n\n For example -- when users supply ``https://xx.cloud.databricks.com`` as the\n host, we must strip out the protocol to get the host.::\n\n h = DatabricksHook()\n assert h._parse_host('https://xx.cloud.databricks.com') == \\\n 'xx.cloud.databricks.com'\n\n In the case where users supply the correct ``xx.cloud.databricks.com`` as the\n host, this function is a no-op.::\n\n assert h._parse_host('xx.cloud.databricks.com') == 'xx.cloud.databricks.com'"} {"_id": "q_193", "text": "Utility function to perform an API call with retries\n\n :param endpoint_info: Tuple of method and endpoint\n :type endpoint_info: tuple[string, string]\n :param json: Parameters for this API call.\n :type json: dict\n :return: If the API call returns an OK status code,\n this function returns the response in JSON.
Otherwise,\n we throw an AirflowException.\n :rtype: dict"} {"_id": "q_194", "text": "Sign into Salesforce, only if we are not already signed in."} {"_id": "q_195", "text": "Make a query to Salesforce.\n\n :param query: The query to make to Salesforce.\n :type query: str\n :return: The query result.\n :rtype: dict"} {"_id": "q_196", "text": "Get the description of an object from Salesforce.\n This description is the object's schema and\n some extra metadata that Salesforce stores for each object.\n\n :param obj: The name of the Salesforce object that we are getting a description of.\n :type obj: str\n :return: the description of the Salesforce object.\n :rtype: dict"} {"_id": "q_197", "text": "Get all instances of the `object` from Salesforce.\n For each model, only get the fields specified in fields.\n\n All we really do underneath the hood is run:\n SELECT FROM ;\n\n :param obj: The object name to get from Salesforce.\n :type obj: str\n :param fields: The fields to get from the object.\n :type fields: iterable\n :return: all instances of the object from Salesforce.\n :rtype: dict"} {"_id": "q_198", "text": "Convert a column of a dataframe to UNIX timestamps if applicable\n\n :param column: A Series object representing a column of a dataframe.\n :type column: pd.Series\n :return: a new series that maintains the same index as the original\n :rtype: pd.Series"} {"_id": "q_199", "text": "Write query results to file.\n\n Acceptable formats are:\n - csv:\n comma-separated-values file. This is the default format.\n - json:\n JSON array. 
Each element in the array is a different row.\n - ndjson:\n JSON array but each element is new-line delimited instead of comma delimited like in `json`\n\n This requires a significant amount of cleanup.\n Pandas doesn't handle output to CSV and json in a uniform way.\n This is especially painful for datetime types.\n Pandas wants to write them as strings in CSV, but as millisecond Unix timestamps.\n\n By default, this function will try to leave all values as they are represented in Salesforce.\n You use the `coerce_to_timestamp` flag to force all datetimes to become Unix timestamps (UTC).\n This can be greatly beneficial as it will make all of your datetime fields look the same,\n and makes it easier to work with in other database environments.\n\n :param query_results: the results from a SQL query\n :type query_results: list of dict\n :param filename: the name of the file where the data should be dumped to\n :type filename: str\n :param fmt: the format you want the output in. Default: 'csv'\n :type fmt: str\n :param coerce_to_timestamp: True if you want all datetime fields to be converted into Unix timestamps.\n False if you want them to be left in the same format as they were in Salesforce.\n Leaving the value as False will result in datetimes being strings. Default: False\n :type coerce_to_timestamp: bool\n :param record_time_added: True if you want to add a Unix timestamp field\n to the resulting data that marks when the data was fetched from Salesforce.
Default: False\n :type record_time_added: bool\n :return: the dataframe that gets written to the file.\n :rtype: pd.Dataframe"} {"_id": "q_200", "text": "Fetches a mongo collection object for querying.\n\n Uses connection schema as DB unless specified."} {"_id": "q_201", "text": "Retrieves mail's attachments in the mail folder by its name.\n\n :param name: The name of the attachment that will be downloaded.\n :type name: str\n :param mail_folder: The mail folder where to look at.\n :type mail_folder: str\n :param check_regex: Checks the name for a regular expression.\n :type check_regex: bool\n :param latest_only: If set to True it will only retrieve\n the first matched attachment.\n :type latest_only: bool\n :param not_found_mode: Specify what should happen if no attachment has been found.\n Supported values are 'raise', 'warn' and 'ignore'.\n If it is set to 'raise' it will raise an exception,\n if set to 'warn' it will only print a warning and\n if set to 'ignore' it won't notify you at all.\n :type not_found_mode: str\n :returns: a list of tuple each containing the attachment filename and its payload.\n :rtype: a list of tuple"} {"_id": "q_202", "text": "Downloads mail's attachments in the mail folder by its name to the local directory.\n\n :param name: The name of the attachment that will be downloaded.\n :type name: str\n :param local_output_directory: The output directory on the local machine\n where the files will be downloaded to.\n :type local_output_directory: str\n :param mail_folder: The mail folder where to look at.\n :type mail_folder: str\n :param check_regex: Checks the name for a regular expression.\n :type check_regex: bool\n :param latest_only: If set to True it will only download\n the first matched attachment.\n :type latest_only: bool\n :param not_found_mode: Specify what should happen if no attachment has been found.\n Supported values are 'raise', 'warn' and 'ignore'.\n If it is set to 'raise' it will raise an exception,\n if set to 'warn' 
it will only print a warning and\n if set to 'ignore' it won't notify you at all.\n :type not_found_mode: str"} {"_id": "q_203", "text": "Gets all attachments by name for the mail.\n\n :param name: The name of the attachment to look for.\n :type name: str\n :param check_regex: Checks the name for a regular expression.\n :type check_regex: bool\n :param find_first: If set to True it will only find the first match and then quit.\n :type find_first: bool\n :returns: a list of tuples each containing name and payload\n where the attachment's name matches the given name.\n :rtype: list of tuple"} {"_id": "q_204", "text": "Write batch records to Kinesis Firehose"} {"_id": "q_205", "text": "Determines whether a task is ready to be rescheduled. Only tasks in\n NONE state with at least one row in task_reschedule table are\n handled by this dependency class, otherwise this dependency is\n considered passed. This dependency fails if the latest reschedule\n request's reschedule date is still in the future."} {"_id": "q_206", "text": "Send an email with html content\n\n >>> send_email('test@example.com', 'foo', 'Foo bar', ['/dev/null'], dryrun=True)"} {"_id": "q_207", "text": "Check if a blob exists on Azure Blob Storage.\n\n :param container_name: Name of the container.\n :type container_name: str\n :param blob_name: Name of the blob.\n :type blob_name: str\n :param kwargs: Optional keyword arguments that\n `BlockBlobService.exists()` takes.\n :type kwargs: object\n :return: True if the blob exists, False otherwise.\n :rtype: bool"} {"_id": "q_208", "text": "Check if a prefix exists on Azure Blob storage.\n\n :param container_name: Name of the container.\n :type container_name: str\n :param prefix: Prefix of the blob.\n :type prefix: str\n :param kwargs: Optional keyword arguments that\n `BlockBlobService.list_blobs()` takes.\n :type kwargs: object\n :return: True if blobs matching the prefix exist, False otherwise.\n :rtype: bool"} {"_id": "q_209", "text": "Delete a file from
Azure Blob Storage.\n\n :param container_name: Name of the container.\n :type container_name: str\n :param blob_name: Name of the blob.\n :type blob_name: str\n :param is_prefix: If blob_name is a prefix, delete all matching files\n :type is_prefix: bool\n :param ignore_if_missing: if True, then return success even if the\n blob does not exist.\n :type ignore_if_missing: bool\n :param kwargs: Optional keyword arguments that\n `BlockBlobService.delete_blob()` takes.\n :type kwargs: object"} {"_id": "q_210", "text": "Transfers the remote file to a local location.\n\n If local_full_path_or_buffer is a string path, the file will be put\n at that location; if it is a file-like buffer, the file will\n be written to the buffer but not closed.\n\n :param remote_full_path: full path to the remote file\n :type remote_full_path: str\n :param local_full_path_or_buffer: full path to the local file or a\n file-like buffer\n :type local_full_path_or_buffer: str or file-like buffer\n :param callback: callback which is called each time a block of data\n is read. if you do not use a callback, these blocks will be written\n to the file or buffer passed in.
If you do pass in a callback, note\n that writing to a file or buffer will need to be handled inside the\n callback.\n [default: output_handle.write()]\n :type callback: callable\n\n :Example::\n\n hook = FTPHook(ftp_conn_id='my_conn')\n\n remote_path = '/path/to/remote/file'\n local_path = '/path/to/local/file'\n\n # with a custom callback (in this case displaying progress on each read)\n def print_progress(percent_progress):\n self.log.info('Percent Downloaded: %s%%' % percent_progress)\n\n total_downloaded = 0\n total_file_size = hook.get_size(remote_path)\n output_handle = open(local_path, 'wb')\n def write_to_file_with_progress(data):\n nonlocal total_downloaded\n total_downloaded += len(data)\n output_handle.write(data)\n percent_progress = (total_downloaded / total_file_size) * 100\n print_progress(percent_progress)\n hook.retrieve_file(remote_path, None, callback=write_to_file_with_progress)\n\n # without a custom callback data is written to the local_path\n hook.retrieve_file(remote_path, local_path)"} {"_id": "q_211", "text": "Transfers a local file to the remote location.\n\n If local_full_path_or_buffer is a string path, the file will be read\n from that location; if it is a file-like buffer, the file will\n be read from the buffer but not closed.\n\n :param remote_full_path: full path to the remote file\n :type remote_full_path: str\n :param local_full_path_or_buffer: full path to the local file or a\n file-like buffer\n :type local_full_path_or_buffer: str or file-like buffer"} {"_id": "q_212", "text": "Returns a datetime object representing the last time the file was modified\n\n :param path: remote file path\n :type path: string"} {"_id": "q_213", "text": "Call the DiscordWebhookHook to post message"} {"_id": "q_214", "text": "Return the FileService object."} {"_id": "q_215", "text": "Check if a directory exists on Azure File Share.\n\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n 
:param kwargs: Optional keyword arguments that\n `FileService.exists()` takes.\n :type kwargs: object\n :return: True if the directory exists, False otherwise.\n :rtype: bool"} {"_id": "q_216", "text": "Check if a file exists on Azure File Share.\n\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param file_name: Name of the file.\n :type file_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.exists()` takes.\n :type kwargs: object\n :return: True if the file exists, False otherwise.\n :rtype: bool"} {"_id": "q_217", "text": "Return the list of directories and files stored on an Azure File Share.\n\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.list_directories_and_files()` takes.\n :type kwargs: object\n :return: A list of files and directories\n :rtype: list"} {"_id": "q_218", "text": "Create a new directory on an Azure File Share.\n\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.create_directory()` takes.\n :type kwargs: object\n :return: A list of files and directories\n :rtype: list"} {"_id": "q_219", "text": "Upload a file to Azure File Share.\n\n :param file_path: Path to the file to load.\n :type file_path: str\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param file_name: Name of the file.\n :type file_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.create_file_from_path()` takes.\n :type kwargs: object"} {"_id": "q_220", "text": "Upload a string to Azure File Share.\n\n :param string_data: String to load.\n :type 
string_data: str\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param file_name: Name of the file.\n :type file_name: str\n :param kwargs: Optional keyword arguments that\n `FileService.create_file_from_text()` takes.\n :type kwargs: object"} {"_id": "q_221", "text": "Upload a stream to Azure File Share.\n\n :param stream: Opened file/stream to upload as the file content.\n :type stream: file-like\n :param share_name: Name of the share.\n :type share_name: str\n :param directory_name: Name of the directory.\n :type directory_name: str\n :param file_name: Name of the file.\n :type file_name: str\n :param count: Size of the stream in bytes\n :type count: int\n :param kwargs: Optional keyword arguments that\n `FileService.create_file_from_stream()` takes.\n :type kwargs: object"} {"_id": "q_222", "text": "Returns a Google Cloud Storage service object."} {"_id": "q_223", "text": "Copies an object from a bucket to another, with renaming if requested.\n\n destination_bucket or destination_object can be omitted, in which case\n source bucket/object is used, but not both.\n\n :param source_bucket: The bucket of the object to copy from.\n :type source_bucket: str\n :param source_object: The object to copy.\n :type source_object: str\n :param destination_bucket: The destination of the object to be copied to.\n Can be omitted; then the same bucket is used.\n :type destination_bucket: str\n :param destination_object: The (renamed) path of the object if given.\n Can be omitted; then the same name is used.\n :type destination_object: str"} {"_id": "q_224", "text": "Get a file from Google Cloud Storage.\n\n :param bucket_name: The bucket to fetch from.\n :type bucket_name: str\n :param object_name: The object to fetch.\n :type object_name: str\n :param filename: If set, a local file path where the file should be written to.\n :type filename: str"} {"_id": "q_225", "text": "Uploads a local 
file to Google Cloud Storage.\n\n :param bucket_name: The bucket to upload to.\n :type bucket_name: str\n :param object_name: The object name to set when uploading the local file.\n :type object_name: str\n :param filename: The local file path to the file to be uploaded.\n :type filename: str\n :param mime_type: The MIME type to set when uploading the file.\n :type mime_type: str\n :param gzip: Option to compress file for upload\n :type gzip: bool"} {"_id": "q_226", "text": "Deletes an object from the bucket.\n\n :param bucket_name: name of the bucket, where the object resides\n :type bucket_name: str\n :param object_name: name of the object to delete\n :type object_name: str"} {"_id": "q_227", "text": "List all objects from the bucket with the given string prefix in name\n\n :param bucket_name: bucket name\n :type bucket_name: str\n :param versions: if true, list all versions of the objects\n :type versions: bool\n :param max_results: max count of items to return in a single page of responses\n :type max_results: int\n :param prefix: prefix string which filters objects whose name begin with\n this prefix\n :type prefix: str\n :param delimiter: filters objects based on the delimiter (e.g. '.csv')\n :type delimiter: str\n :return: a stream of object names matching the filtering criteria"} {"_id": "q_228", "text": "Gets the size of a file in Google Cloud Storage.\n\n :param bucket_name: The Google cloud storage bucket where the blob_name is.\n :type bucket_name: str\n :param object_name: The name of the object to check in the Google\n cloud storage bucket_name.\n :type object_name: str"} {"_id": "q_229", "text": "Gets the MD5 hash of an object in Google Cloud Storage.\n\n :param bucket_name: The Google cloud storage bucket where the blob_name is.\n :type bucket_name: str\n :param object_name: The name of the object to check in the Google cloud\n storage bucket_name.\n :type object_name: str"} {"_id": "q_230", "text": "Creates a new bucket. 
Google Cloud Storage uses a flat namespace, so\n you can't create a bucket with a name that is already in use.\n\n .. seealso::\n For more information, see Bucket Naming Guidelines:\n https://cloud.google.com/storage/docs/bucketnaming.html#requirements\n\n :param bucket_name: The name of the bucket.\n :type bucket_name: str\n :param resource: An optional dict with parameters for creating the bucket.\n For information on available parameters, see Cloud Storage API doc:\n https://cloud.google.com/storage/docs/json_api/v1/buckets/insert\n :type resource: dict\n :param storage_class: This defines how objects in the bucket are stored\n and determines the SLA and the cost of storage. Values include\n\n - ``MULTI_REGIONAL``\n - ``REGIONAL``\n - ``STANDARD``\n - ``NEARLINE``\n - ``COLDLINE``.\n\n If this value is not specified when the bucket is\n created, it will default to STANDARD.\n :type storage_class: str\n :param location: The location of the bucket.\n Object data for objects in the bucket resides in physical storage\n within this region. Defaults to US.\n\n .. 
seealso::\n https://developers.google.com/storage/docs/bucket-locations\n\n :type location: str\n :param project_id: The ID of the GCP Project.\n :type project_id: str\n :param labels: User-provided labels, in key/value pairs.\n :type labels: dict\n :return: If successful, it returns the ``id`` of the bucket."} {"_id": "q_231", "text": "Returns true if training job's secondary status message has changed.\n\n :param current_job_description: Current job description, returned from DescribeTrainingJob call.\n :type current_job_description: dict\n :param prev_job_description: Previous job description, returned from DescribeTrainingJob call.\n :type prev_job_description: dict\n\n :return: Whether the secondary status message of a training job changed or not."} {"_id": "q_232", "text": "Tar the local file or directory and upload to s3\n\n :param path: local file or directory\n :type path: str\n :param key: s3 key\n :type key: str\n :param bucket: s3 bucket\n :type bucket: str\n :return: None"} {"_id": "q_233", "text": "Extract the S3 operations from the configuration and execute them.\n\n :param config: config of SageMaker operation\n :type config: dict\n :rtype: dict"} {"_id": "q_234", "text": "Check if an S3 URL exists\n\n :param s3url: S3 url\n :type s3url: str\n :rtype: bool"} {"_id": "q_235", "text": "Establish an AWS connection for retrieving logs during training\n\n :rtype: CloudWatchLogs.Client"} {"_id": "q_236", "text": "Return the training job info associated with job_name and print CloudWatch logs"} {"_id": "q_237", "text": "Check status of a SageMaker job\n\n :param job_name: name of the job to check status\n :type job_name: str\n :param key: the key of the response dict\n that points to the state\n :type key: str\n :param describe_function: the function used to retrieve the status\n :type describe_function: python callable\n :param args: the arguments for the function\n :param check_interval: the time interval in seconds which the operator\n will check the 
status of any SageMaker job\n :type check_interval: int\n :param max_ingestion_time: the maximum ingestion time in seconds. Any\n SageMaker jobs that run longer than this will fail. Setting this to\n None implies no timeout for any SageMaker job.\n :type max_ingestion_time: int\n :param non_terminal_states: the set of nonterminal states\n :type non_terminal_states: set\n :return: response of describe call after job is done"} {"_id": "q_238", "text": "Execute the python dataflow job."} {"_id": "q_239", "text": "Run migrations in 'offline' mode.\n\n This configures the context with just a URL\n and not an Engine, though an Engine is acceptable\n here as well. By skipping the Engine creation\n we don't even need a DBAPI to be available.\n\n Calls to context.execute() here emit the given string to the\n script output."} {"_id": "q_240", "text": "Deletes the specified Cloud Bigtable instance.\n Raises google.api_core.exceptions.NotFound if the Cloud Bigtable instance does\n not exist.\n\n :param project_id: Optional, Google Cloud Platform project ID where the\n BigTable exists. 
If set to None or missing,\n the default project_id from the GCP connection is used.\n :type project_id: str\n :param instance_id: The ID of the Cloud Bigtable instance.\n :type instance_id: str"} {"_id": "q_241", "text": "Updates number of nodes in the specified Cloud Bigtable cluster.\n Raises google.api_core.exceptions.NotFound if the cluster does not exist.\n\n :type instance: Instance\n :param instance: The Cloud Bigtable instance that owns the cluster.\n :type cluster_id: str\n :param cluster_id: The ID of the cluster.\n :type nodes: int\n :param nodes: The desired number of nodes."} {"_id": "q_242", "text": "Loads a pandas DataFrame into hive.\n\n Hive data types will be inferred if not passed but column names will\n not be sanitized.\n\n :param df: DataFrame to load into a Hive table\n :type df: pandas.DataFrame\n :param table: target Hive table, use dot notation to target a\n specific database\n :type table: str\n :param field_dict: mapping from column name to hive data type.\n Note that it must be OrderedDict so as to keep columns' order.\n :type field_dict: collections.OrderedDict\n :param delimiter: field delimiter in the file\n :type delimiter: str\n :param encoding: str encoding to use when writing DataFrame to file\n :type encoding: str\n :param pandas_kwargs: passed to DataFrame.to_csv\n :type pandas_kwargs: dict\n :param kwargs: passed to self.load_file"} {"_id": "q_243", "text": "Loads a local file into Hive\n\n Note that the table generated in Hive uses ``STORED AS textfile``\n which isn't the most efficient serialization format. 
If a\n large amount of data is loaded and/or if the table gets\n queried considerably, you may want to use this operator only to\n stage the data into a temporary table before loading it into its\n final destination using a ``HiveOperator``.\n\n :param filepath: local filepath of the file to load\n :type filepath: str\n :param table: target Hive table, use dot notation to target a\n specific database\n :type table: str\n :param delimiter: field delimiter in the file\n :type delimiter: str\n :param field_dict: A dictionary of the fields name in the file\n as keys and their Hive types as values.\n Note that it must be OrderedDict so as to keep columns' order.\n :type field_dict: collections.OrderedDict\n :param create: whether to create the table if it doesn't exist\n :type create: bool\n :param overwrite: whether to overwrite the data in table or partition\n :type overwrite: bool\n :param partition: target partition as a dict of partition columns\n and values\n :type partition: dict\n :param recreate: whether to drop and recreate the table at every\n execution\n :type recreate: bool\n :param tblproperties: TBLPROPERTIES of the hive table being created\n :type tblproperties: dict"} {"_id": "q_244", "text": "Returns a Hive thrift client."} {"_id": "q_245", "text": "Checks whether a partition with a given name exists\n\n :param schema: Name of hive schema (database) @table belongs to\n :type schema: str\n :param table: Name of hive table @partition belongs to\n :type table: str\n :param partition: Name of the partitions to check for (e.g. `a=b/c=d`)\n :type partition: str\n :rtype: bool\n\n >>> hh = HiveMetastoreHook()\n >>> t = 'static_babynames_partitioned'\n >>> hh.check_for_named_partition('airflow', t, \"ds=2015-01-01\")\n True\n >>> hh.check_for_named_partition('airflow', t, \"ds=xxx\")\n False"} {"_id": "q_246", "text": "Check if table exists\n\n >>> hh = HiveMetastoreHook()\n >>> hh.table_exists(db='airflow', table_name='static_babynames')\n True\n >>> 
hh.table_exists(db='airflow', table_name='does_not_exist')\n False"} {"_id": "q_247", "text": "Returns a Hive connection object."} {"_id": "q_248", "text": "Get a set of records from a Hive query.\n\n :param hql: hql to be executed.\n :type hql: str or list\n :param schema: target schema, default to 'default'.\n :type schema: str\n :param hive_conf: hive_conf to execute along with the hql.\n :type hive_conf: dict\n :return: result of hive execution\n :rtype: list\n\n >>> hh = HiveServer2Hook()\n >>> sql = \"SELECT * FROM airflow.static_babynames LIMIT 100\"\n >>> len(hh.get_records(sql))\n 100"} {"_id": "q_249", "text": "Retrieves connection to Cloud Vision.\n\n :return: Google Cloud Vision client object.\n :rtype: google.cloud.vision_v1.ProductSearchClient"} {"_id": "q_250", "text": "Send Dingding message"} {"_id": "q_251", "text": "Helper method that binds parameters to a SQL query."} {"_id": "q_252", "text": "Helper method that escapes parameters to a SQL query."} {"_id": "q_253", "text": "Helper method that casts a BigQuery row to the appropriate data types.\n This is useful because BigQuery returns all fields as strings."} {"_id": "q_254", "text": "Function to check the expected type and raise an\n error if the type is not correct"} {"_id": "q_255", "text": "Returns a BigQuery PEP 249 connection object."} {"_id": "q_256", "text": "Returns a BigQuery service object."} {"_id": "q_257", "text": "Checks for the existence of a table in Google BigQuery.\n\n :param project_id: The Google cloud project in which to look for the\n table. 
The connection supplied to the hook must provide access to\n the specified project.\n :type project_id: str\n :param dataset_id: The name of the dataset in which to look for the\n table.\n :type dataset_id: str\n :param table_id: The name of the table to check the existence of.\n :type table_id: str"} {"_id": "q_258", "text": "Creates a new, empty table in the dataset.\n To create a view, which is defined by a SQL query, pass a dictionary to the 'view' kwarg\n\n :param project_id: The project to create the table into.\n :type project_id: str\n :param dataset_id: The dataset to create the table into.\n :type dataset_id: str\n :param table_id: The name of the table to be created.\n :type table_id: str\n :param schema_fields: If set, the schema field list as defined here:\n https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.schema\n :type schema_fields: list\n :param labels: a dictionary containing labels for the table, passed to BigQuery\n :type labels: dict\n\n **Example**: ::\n\n schema_fields=[{\"name\": \"emp_name\", \"type\": \"STRING\", \"mode\": \"REQUIRED\"},\n {\"name\": \"salary\", \"type\": \"INTEGER\", \"mode\": \"NULLABLE\"}]\n\n :param time_partitioning: configure optional time partitioning fields i.e.\n partition by field, type and expiration as per API specifications.\n\n .. 
seealso::\n https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#timePartitioning\n :type time_partitioning: dict\n :param cluster_fields: [Optional] The fields used for clustering.\n Must be specified with time_partitioning, data in the table will be first\n partitioned and subsequently clustered.\n https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#clustering.fields\n :type cluster_fields: list\n :param view: [Optional] A dictionary containing definition for the view.\n If set, it will create a view instead of a table:\n https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#view\n :type view: dict\n\n **Example**: ::\n\n view = {\n \"query\": \"SELECT * FROM `test-project-id.test_dataset_id.test_table_prefix*` LIMIT 1000\",\n \"useLegacySql\": False\n }\n\n :return: None"} {"_id": "q_259", "text": "Grant authorized view access of a dataset to a view table.\n If this view has already been granted access to the dataset, do nothing.\n This method is not atomic. Running it may clobber a simultaneous update.\n\n :param source_dataset: the source dataset\n :type source_dataset: str\n :param view_dataset: the dataset that the view is in\n :type view_dataset: str\n :param view_table: the table of the view\n :type view_table: str\n :param source_project: the project of the source dataset. If None,\n self.project_id will be used.\n :type source_project: str\n :param view_project: the project that the view is in. If None,\n self.project_id will be used.\n :type view_project: str\n :return: the datasets resource of the source dataset."} {"_id": "q_260", "text": "Returns the dataset_resource if the dataset exists\n and raises a 404 error if the dataset does not exist\n\n :param dataset_id: The BigQuery Dataset ID\n :type dataset_id: str\n :param project_id: The GCP Project ID\n :type project_id: str\n :return: dataset_resource\n\n .. 
seealso::\n For more information, see Dataset Resource content:\n https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets#resource"} {"_id": "q_261", "text": "Executes a BigQuery query, and returns the job ID.\n\n :param operation: The query to execute.\n :type operation: str\n :param parameters: Parameters to substitute into the query.\n :type parameters: dict"} {"_id": "q_262", "text": "Queries Postgres and returns a cursor to the results."} {"_id": "q_263", "text": "Create all the intermediate directories in a remote host\n\n :param sftp_client: A Paramiko SFTP client.\n :param remote_directory: Absolute Path of the directory containing the file\n :return:"} {"_id": "q_264", "text": "Send message to the queue\n\n :param queue_url: queue url\n :type queue_url: str\n :param message_body: the contents of the message\n :type message_body: str\n :param delay_seconds: seconds to delay the message\n :type delay_seconds: int\n :param message_attributes: additional attributes for the message (default: None)\n For details of the attributes parameter see :py:meth:`botocore.client.SQS.send_message`\n :type message_attributes: dict\n\n :return: dict with the information about the message sent\n For details of the returned value see :py:meth:`botocore.client.SQS.send_message`\n :rtype: dict"} {"_id": "q_265", "text": "Run the task command.\n\n :param run_with: list of tokens to run the task command with e.g. ``['bash', '-c']``\n :type run_with: list\n :param join_args: whether to concatenate the list of command tokens e.g. 
``['airflow', 'run']`` vs\n ``['airflow run']``\n :type join_args: bool\n :return: the process that was run\n :rtype: subprocess.Popen"} {"_id": "q_266", "text": "Parse options and process commands"} {"_id": "q_267", "text": "generate HTML header content"} {"_id": "q_268", "text": "generate HTML div"} {"_id": "q_269", "text": "Create X-axis"} {"_id": "q_270", "text": "Decorator to make a view compressed"} {"_id": "q_271", "text": "Creates a dag run from this dag including the tasks associated with this dag.\n Returns the dag run.\n\n :param run_id: defines the run id for this dag run\n :type run_id: str\n :param execution_date: the execution date of this dag run\n :type execution_date: datetime.datetime\n :param state: the state of the dag run\n :type state: airflow.utils.state.State\n :param start_date: the date this dag run should be evaluated\n :type start_date: datetime.datetime\n :param external_trigger: whether this dag run is externally triggered\n :type external_trigger: bool\n :param session: database session\n :type session: sqlalchemy.orm.session.Session"} {"_id": "q_272", "text": "Publish the message to SQS queue\n\n :param context: the context object\n :type context: dict\n :return: dict with information about the message sent\n For details of the returned dict see :py:meth:`botocore.client.SQS.send_message`\n :rtype: dict"} {"_id": "q_273", "text": "returns a json response from a json serializable python object"} {"_id": "q_274", "text": "Opens the given file. 
If the path contains a folder with a .zip suffix, then\n the folder is treated as a zip archive, opening the file inside the archive.\n\n :return: a file object, as in `open`, or as in `ZipFile.open`."} {"_id": "q_275", "text": "Get Opsgenie api_key for creating alert"} {"_id": "q_276", "text": "Overwrite HttpHook get_conn because this hook just needs base_url\n and headers, and does not need generic params\n\n :param headers: additional headers to be passed through as a dictionary\n :type headers: dict"} {"_id": "q_277", "text": "Execute the Opsgenie Alert call\n\n :param payload: Opsgenie API Create Alert payload values\n See https://docs.opsgenie.com/docs/alert-api#section-create-alert\n :type payload: dict"} {"_id": "q_278", "text": "Construct the Opsgenie JSON payload. All relevant parameters are combined here\n to a valid Opsgenie JSON payload.\n\n :return: Opsgenie payload (dict) to send"} {"_id": "q_279", "text": "Call the OpsgenieAlertHook to post message"} {"_id": "q_280", "text": "Fetch the status of submitted athena query. Returns None or one of valid query states.\n\n :param query_execution_id: Id of submitted athena query\n :type query_execution_id: str\n :return: str"} {"_id": "q_281", "text": "Call Zendesk API and return results\n\n :param path: The Zendesk API to call\n :param query: Query parameters\n :param get_all_pages: Accumulate results over all pages before\n returning. Due to strict rate limiting, this can often timeout.\n Waits for recommended period between tries after a timeout.\n :param side_loading: Retrieve related records as part of a single\n request. In order to enable side-loading, add an 'include'\n query parameter containing a comma-separated list of resources\n to load. 
For more information on side-loading see\n https://developer.zendesk.com/rest_api/docs/core/side_loading"} {"_id": "q_282", "text": "Retrieves the partition values for a table.\n\n :param database_name: The name of the catalog database where the partitions reside.\n :type database_name: str\n :param table_name: The name of the partitions' table.\n :type table_name: str\n :param expression: An expression filtering the partitions to be returned.\n Please see official AWS documentation for further information.\n https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-catalog-partitions.html#aws-glue-api-catalog-partitions-GetPartitions\n :type expression: str\n :param page_size: pagination size\n :type page_size: int\n :param max_items: maximum items to return\n :type max_items: int\n :return: set of partition values where each value is a tuple since\n a partition may be composed of multiple columns. For example:\n ``{('2018-01-01','1'), ('2018-01-01','2')}``"} {"_id": "q_283", "text": "Get the information of the table\n\n :param database_name: Name of hive database (schema) @table belongs to\n :type database_name: str\n :param table_name: Name of hive table\n :type table_name: str\n :rtype: dict\n\n >>> hook = AwsGlueCatalogHook()\n >>> r = hook.get_table('db', 'table_foo')\n >>> r['Name'] = 'table_foo'"} {"_id": "q_284", "text": "Get the physical location of the table\n\n :param database_name: Name of hive database (schema) @table belongs to\n :type database_name: str\n :param table_name: Name of hive table\n :type table_name: str\n :return: str"} {"_id": "q_285", "text": "Return status of a cluster\n\n :param cluster_identifier: unique identifier of a cluster\n :type cluster_identifier: str"} {"_id": "q_286", "text": "Delete a cluster and optionally create a snapshot\n\n :param cluster_identifier: unique identifier of a cluster\n :type cluster_identifier: str\n :param skip_final_cluster_snapshot: determines cluster snapshot creation\n :type 
skip_final_cluster_snapshot: bool\n :param final_cluster_snapshot_identifier: name of final cluster snapshot\n :type final_cluster_snapshot_identifier: str"} {"_id": "q_287", "text": "Restores a cluster from its snapshot\n\n :param cluster_identifier: unique identifier of a cluster\n :type cluster_identifier: str\n :param snapshot_identifier: unique identifier for a snapshot of a cluster\n :type snapshot_identifier: str"} {"_id": "q_288", "text": "SlackAPIOperator calls will not fail even if the call is unsuccessful.\n It should not prevent a DAG from completing in success"} {"_id": "q_289", "text": "Will test the filepath result and test if its size is at least self.filesize\n\n :param result: a list of dicts returned by Snakebite ls\n :param size: the file size in MB a file should be at least to trigger True\n :return: (bool) depending on the matching criteria"} {"_id": "q_290", "text": "Will filter the result, if instructed to do so, to remove entries matching the criteria\n\n :param result: list of dicts returned by Snakebite ls\n :type result: list[dict]\n :param ignored_ext: list of ignored extensions\n :type ignored_ext: list\n :param ignore_copying: whether to ignore files that are being copied\n :type ignore_copying: bool\n :return: list of dicts which were not removed\n :rtype: list[dict]"} {"_id": "q_291", "text": "Create a pool with the given parameters."} {"_id": "q_292", "text": "Delete pool by a given name."} {"_id": "q_293", "text": "Given an operation, continuously fetches the status from Google Cloud until either\n completion or an error occurring\n\n :param operation: The Operation to wait for\n :type operation: google.cloud.container_V1.gapic.enums.Operation\n :param project_id: Google Cloud Platform project ID\n :type project_id: str\n :return: A new, updated operation fetched from Google Cloud"} {"_id": "q_294", "text": "Creates a cluster, consisting of the specified number and type of Google Compute\n Engine instances.\n\n :param cluster: A Cluster protobuf or dict. 
If dict is provided, it must\n be of the same form as the protobuf message\n :class:`google.cloud.container_v1.types.Cluster`\n :type cluster: dict or google.cloud.container_v1.types.Cluster\n :param project_id: Google Cloud Platform project ID\n :type project_id: str\n :param retry: A retry object (``google.api_core.retry.Retry``) used to\n retry requests.\n If None is specified, requests will not be retried.\n :type retry: google.api_core.retry.Retry\n :param timeout: The amount of time, in seconds, to wait for the request to\n complete. Note that if retry is specified, the timeout applies to each\n individual attempt.\n :type timeout: float\n :return: The full url to the new, or existing, cluster\n :raises:\n ParseError: On JSON parsing problems when trying to convert dict\n AirflowException: cluster is not dict type nor Cluster proto type"} {"_id": "q_295", "text": "Gets details of specified cluster\n\n :param name: The name of the cluster to retrieve\n :type name: str\n :param project_id: Google Cloud Platform project ID\n :type project_id: str\n :param retry: A retry object used to retry requests. If None is specified,\n requests will not be retried.\n :type retry: google.api_core.retry.Retry\n :param timeout: The amount of time, in seconds, to wait for the request to\n complete. Note that if retry is specified, the timeout applies to each\n individual attempt.\n :type timeout: float\n :return: google.cloud.container_v1.types.Cluster"} {"_id": "q_296", "text": "Construct the Discord JSON payload. All relevant parameters are combined here\n to a valid Discord JSON payload.\n\n :return: Discord payload (str) to send"} {"_id": "q_297", "text": "Encrypts a plaintext message using Google Cloud KMS.\n\n :param key_name: The Resource Name for the key (or key version)\n to be used for encryption. 
Of the form\n ``projects/*/locations/*/keyRings/*/cryptoKeys/**``\n :type key_name: str\n :param plaintext: The message to be encrypted.\n :type plaintext: bytes\n :param authenticated_data: Optional additional authenticated data that\n must also be provided to decrypt the message.\n :type authenticated_data: bytes\n :return: The base 64 encoded ciphertext of the original message.\n :rtype: str"} {"_id": "q_298", "text": "Imports table from remote location to target dir. Arguments are\n copies of direct sqoop command line arguments\n\n :param table: Table to read\n :param target_dir: HDFS destination dir\n :param append: Append data to an existing dataset in HDFS\n :param file_type: \"avro\", \"sequence\", \"text\" or \"parquet\".\n Imports data into the specified format. Defaults to text.\n :param columns: Columns to import from table\n :param split_by: Column of the table used to split work units\n :param where: WHERE clause to use during import\n :param direct: Use direct connector if exists for the database\n :param driver: Manually specify JDBC driver class to use\n :param extra_import_options: Extra import options to pass as dict.\n If a key doesn't have a value, just pass an empty string to it.\n Don't include prefix of -- for sqoop options."} {"_id": "q_299", "text": "Retrieves connection to Cloud Text to Speech.\n\n :return: Google Cloud Text to Speech client object.\n :rtype: google.cloud.texttospeech_v1.TextToSpeechClient"} {"_id": "q_300", "text": "Close and upload local log file to remote storage S3."} {"_id": "q_301", "text": "When using git to retrieve the DAGs, use the GitSync Init Container"} {"_id": "q_302", "text": "Defines any necessary environment variables for the pod executor"} {"_id": "q_303", "text": "Defines any necessary secrets for the pod executor"} {"_id": "q_304", "text": "Defines the security context"} {"_id": "q_305", "text": "Heartbeats update the job's entry in the database with a timestamp\n for the latest_heartbeat and allow 
for the job to be killed\n externally. This allows the system to monitor what is\n actually active.\n\n For instance, an old heartbeat for SchedulerJob would mean something\n is wrong.\n\n This also allows for any job to be killed externally, regardless\n of who is running it or on which machine it is running.\n\n Note that if your heartbeat is set to 60 seconds and you call this\n method after 10 seconds of processing since the last heartbeat, it\n will sleep 50 seconds to complete the 60 seconds and keep a steady\n heart rate. If you go over 60 seconds before calling it, it won't\n sleep at all."} {"_id": "q_306", "text": "Launch the process and start processing the DAG."} {"_id": "q_307", "text": "Check if the process launched to process this file is done.\n\n :return: whether the process is finished running\n :rtype: bool"} {"_id": "q_308", "text": "Helper method to clean up processor_agent to avoid leaving orphan processes."} {"_id": "q_309", "text": "For the DAGs in the given DagBag, records any associated import errors and clears\n errors for files that no longer have them. 
These are usually displayed through the\n Airflow UI so that users know that there are issues parsing DAGs.\n\n :param session: session for ORM operations\n :type session: sqlalchemy.orm.session.Session\n :param dagbag: DagBag containing DAGs with import errors\n :type dagbag: airflow.models.DagBag"} {"_id": "q_310", "text": "Get the concurrency maps.\n\n :param states: List of states to query for\n :type states: list[airflow.utils.state.State]\n :return: A map from (dag_id, task_id) to # of task instances and\n a map from (dag_id, task_id) to # of task instances in the given state list\n :rtype: dict[tuple[str, str], int]"} {"_id": "q_311", "text": "Changes the state of task instances in the list with one of the given states\n to QUEUED atomically, and returns the TIs changed in SimpleTaskInstance format.\n\n :param task_instances: TaskInstances to change the state of\n :type task_instances: list[airflow.models.TaskInstance]\n :param acceptable_states: Filters the TaskInstances updated to be in these states\n :type acceptable_states: Iterable[State]\n :rtype: list[airflow.utils.dag_processing.SimpleTaskInstance]"} {"_id": "q_312", "text": "Takes task_instances, which should have been set to queued, and enqueues them\n with the executor.\n\n :param simple_task_instances: TaskInstances to enqueue\n :type simple_task_instances: list[SimpleTaskInstance]\n :param simple_dag_bag: Should contain all of the task_instances' dags\n :type simple_dag_bag: airflow.utils.dag_processing.SimpleDagBag"} {"_id": "q_313", "text": "Attempts to execute TaskInstances that should be executed by the scheduler.\n\n There are three steps:\n 1. Pick TIs by priority with the constraint that they are in the expected states\n and that we do not exceed max_active_runs or pool limits.\n 2. Change the state for the TIs above atomically.\n 3. 
Enqueue the TIs in the executor.\n\n :param simple_dag_bag: TaskInstances associated with DAGs in the\n simple_dag_bag will be fetched from the DB and executed\n :type simple_dag_bag: airflow.utils.dag_processing.SimpleDagBag\n :param states: Execute TaskInstances in these states\n :type states: tuple[airflow.utils.state.State]\n :return: Number of task instances whose state changed."} {"_id": "q_314", "text": "If there are tasks left over in the executor,\n we set them back to SCHEDULED to avoid creating hanging tasks.\n\n :param session: session for ORM operations"} {"_id": "q_315", "text": "Process a Python file containing Airflow DAGs.\n\n This includes:\n\n 1. Execute the file and look for DAG objects in the namespace.\n 2. Pickle the DAG and save it to the DB (if necessary).\n 3. For each DAG, see what tasks should run and create appropriate task\n instances in the DB.\n 4. Record any errors importing the file into ORM\n 5. Kill (in ORM) any task instances belonging to the DAGs that haven't\n issued a heartbeat in a while.\n\n Returns a list of SimpleDag objects that represent the DAGs found in\n the file\n\n :param file_path: the path to the Python file that should be executed\n :type file_path: unicode\n :param zombies: zombie task instances to kill.\n :type zombies: list[airflow.utils.dag_processing.SimpleTaskInstance]\n :param pickle_dags: whether to serialize the DAGs found in the file and\n save them to the db\n :type pickle_dags: bool\n :return: a list of SimpleDags made from the Dags found in the file\n :rtype: list[airflow.utils.dag_processing.SimpleDagBag]"} {"_id": "q_316", "text": "Updates the counters per state of the tasks that were running. 
Can re-add\n tasks to run if required.\n\n :param ti_status: the internal status of the backfill job tasks\n :type ti_status: BackfillJob._DagRunTaskStatus"} {"_id": "q_317", "text": "Returns a dag run for the given run date, which will be matched to an existing\n dag run if available, or a new dag run will be created otherwise. If the max_active_runs\n limit is reached, this function will return None.\n\n :param run_date: the execution date for the dag run\n :type run_date: datetime.datetime\n :param session: the database session object\n :type session: sqlalchemy.orm.session.Session\n :return: a DagRun in state RUNNING or None"} {"_id": "q_318", "text": "Returns a map of task instance key to task instance object for the tasks to\n run in the given dag run.\n\n :param dag_run: the dag run to get the tasks from\n :type dag_run: airflow.models.DagRun\n :param session: the database session object\n :type session: sqlalchemy.orm.session.Session"} {"_id": "q_319", "text": "Computes the dag runs and their respective task instances for\n the given run dates and executes the task instances.\n Returns a list of execution dates of the dag runs that were executed.\n\n :param run_dates: Execution dates for dag runs\n :type run_dates: list\n :param ti_status: internal BackfillJob status structure used to track the progress of task instances\n :type ti_status: BackfillJob._DagRunTaskStatus\n :param executor: the executor to use, it must be previously started\n :type executor: BaseExecutor\n :param pickle_id: numeric id of the pickled dag, None if not pickled\n :type pickle_id: int\n :param start_date: backfill start date\n :type start_date: datetime.datetime\n :param session: the current session object\n :type session: sqlalchemy.orm.session.Session"} {"_id": "q_320", "text": "Self destruct task if state has been moved away from running externally"} {"_id": "q_321", "text": "Provides a client for interacting with the Cloud Spanner API.\n\n :param project_id: The ID of the GCP project.\n :type 
project_id: str\n :return: google.cloud.spanner_v1.client.Client\n :rtype: object"} {"_id": "q_322", "text": "Gets information about a particular instance.\n\n :param project_id: Optional, The ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :return: google.cloud.spanner_v1.instance.Instance\n :rtype: object"} {"_id": "q_323", "text": "Invokes a method on a given instance by applying a specified Callable.\n\n :param project_id: The ID of the GCP project that owns the Cloud Spanner\n database.\n :type project_id: str\n :param instance_id: The ID of the instance.\n :type instance_id: str\n :param configuration_name: Name of the instance configuration defining how the\n instance will be created. Required for instances which do not yet exist.\n :type configuration_name: str\n :param node_count: (Optional) Number of nodes allocated to the instance.\n :type node_count: int\n :param display_name: (Optional) The display name for the instance in the Cloud\n Console UI. (Must be between 4 and 30 characters.) If this value is not set\n in the constructor, will fall back to the instance ID.\n :type display_name: str\n :param func: Method of the instance to be called.\n :type func: Callable"} {"_id": "q_324", "text": "Creates a new Cloud Spanner instance.\n\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param configuration_name: The name of the instance configuration defining how the\n instance will be created. 
Possible configuration values can be retrieved via\n https://cloud.google.com/spanner/docs/reference/rest/v1/projects.instanceConfigs/list\n :type configuration_name: str\n :param node_count: (Optional) The number of nodes allocated to the Cloud Spanner\n instance.\n :type node_count: int\n :param display_name: (Optional) The display name for the instance in the GCP\n Console. Must be between 4 and 30 characters. If this value is not set in\n the constructor, the name falls back to the instance ID.\n :type display_name: str\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"} {"_id": "q_325", "text": "Updates an existing Cloud Spanner instance.\n\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param configuration_name: The name of the instance configuration defining how the\n instance will be created. Possible configuration values can be retrieved via\n https://cloud.google.com/spanner/docs/reference/rest/v1/projects.instanceConfigs/list\n :type configuration_name: str\n :param node_count: (Optional) The number of nodes allocated to the Cloud Spanner\n instance.\n :type node_count: int\n :param display_name: (Optional) The display name for the instance in the GCP\n Console. Must be between 4 and 30 characters. If this value is not set in\n the constructor, the name falls back to the instance ID.\n :type display_name: str\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. 
If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"} {"_id": "q_326", "text": "Deletes an existing Cloud Spanner instance.\n\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: None"} {"_id": "q_327", "text": "Retrieves a database in Cloud Spanner. If the database does not exist\n in the specified instance, it returns None.\n\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param database_id: The ID of the database in Cloud Spanner.\n :type database_id: str\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :type project_id: str\n :return: Database object or None if database does not exist\n :rtype: google.cloud.spanner_v1.database.Database or None"} {"_id": "q_328", "text": "Creates a new database in Cloud Spanner.\n\n :type project_id: str\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param database_id: The ID of the database to create in Cloud Spanner.\n :type database_id: str\n :param ddl_statements: The string list containing DDL for the new database.\n :type ddl_statements: list[str]\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. 
If set to None or missing, the default project_id from the GCP connection is used.\n :return: None"} {"_id": "q_329", "text": "Updates DDL of a database in Cloud Spanner.\n\n :type project_id: str\n :param instance_id: The ID of the Cloud Spanner instance.\n :type instance_id: str\n :param database_id: The ID of the database in Cloud Spanner.\n :type database_id: str\n :param ddl_statements: The string list containing DDL for the new database.\n :type ddl_statements: list[str]\n :param project_id: Optional, the ID of the GCP project that owns the Cloud Spanner\n database. If set to None or missing, the default project_id from the GCP connection is used.\n :param operation_id: (Optional) The unique per database operation ID that can be\n specified to implement idempotency check.\n :type operation_id: str\n :return: None"} {"_id": "q_330", "text": "Returns a cassandra Session object"} {"_id": "q_331", "text": "Checks if a record exists in Cassandra\n\n :param table: Target Cassandra table.\n Use dot notation to target a specific keyspace.\n :type table: str\n :param keys: The keys and their values to check the existence.\n :type keys: dict"} {"_id": "q_332", "text": "Construct the command to poll the driver status.\n\n :return: full command to be executed"} {"_id": "q_333", "text": "Remote Popen to execute the spark-submit job\n\n :param application: Submitted application, jar or py file\n :type application: str\n :param kwargs: extra arguments to Popen (see subprocess.Popen)"} {"_id": "q_334", "text": "Processes the log files and extracts useful information out of it.\n\n If the deploy-mode is 'client', log the output of the submit command as those\n are the output logs of the Spark worker directly.\n\n Remark: If the driver needs to be tracked for its status, the log-level of the\n spark deploy needs to be at least INFO (log4j.logger.org.apache.spark.deploy=INFO)\n\n :param itr: An iterator which iterates over the input of the subprocess"} {"_id": "q_335", "text": 
"Parses the logs of the spark driver status query process\n\n :param itr: An iterator which iterates over the input of the subprocess"} {"_id": "q_336", "text": "Get the task runner that can be used to run the given job.\n\n :param local_task_job: The LocalTaskJob associated with the TaskInstance\n that needs to be executed.\n :type local_task_job: airflow.jobs.LocalTaskJob\n :return: The task runner to use to run the task.\n :rtype: airflow.task.task_runner.base_task_runner.BaseTaskRunner"} {"_id": "q_337", "text": "Try to use a waiter from the pull request below\n\n * https://github.com/boto/botocore/pull/1307\n\n If the waiter is not available apply an exponential backoff\n\n * docs.aws.amazon.com/general/latest/gr/api-retries.html"} {"_id": "q_338", "text": "Queries mysql and returns a cursor to the results."} {"_id": "q_339", "text": "Configure a csv writer with the file_handle and write schema\n as headers for the new file."} {"_id": "q_340", "text": "Return a dict of column name and column type based on self.schema if not None."} {"_id": "q_341", "text": "Execute sqoop job"} {"_id": "q_342", "text": "Returns the extra property by deserializing json."} {"_id": "q_343", "text": "Get a set of dates as a list based on a start, end and delta, delta\n can be something that can be added to `datetime.datetime`\n or a cron expression as a `str`\n\n :Example::\n\n date_range(datetime(2016, 1, 1), datetime(2016, 1, 3), delta=timedelta(1))\n [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 1, 2, 0, 0),\n datetime.datetime(2016, 1, 3, 0, 0)]\n date_range(datetime(2016, 1, 1), datetime(2016, 1, 3), delta='0 0 * * *')\n [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 1, 2, 0, 0),\n datetime.datetime(2016, 1, 3, 0, 0)]\n date_range(datetime(2016, 1, 1), datetime(2016, 3, 3), delta=\"0 0 0 * *\")\n [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 2, 1, 0, 0),\n datetime.datetime(2016, 3, 1, 0, 0)]\n\n :param start_date: anchor date to 
start the series from\n :type start_date: datetime.datetime\n :param end_date: right boundary for the date range\n :type end_date: datetime.datetime\n :param num: as an alternative to end_date, you can specify the number of\n entries you want in the range. This number can be negative,\n output will always be sorted regardless\n :type num: int"} {"_id": "q_344", "text": "Get a datetime object representing `n` days ago. By default the time is\n set to midnight."} {"_id": "q_345", "text": "Initialize the role with the permissions and related view-menus.\n\n :param role_name:\n :param role_vms:\n :param role_perms:\n :return:"} {"_id": "q_346", "text": "Delete the given Role\n\n :param role_name: the name of a role in the ab_role table"} {"_id": "q_347", "text": "Get all the roles associated with the user.\n\n :param user: the ab_user in FAB model.\n :return: a list of roles associated with the user."} {"_id": "q_348", "text": "Returns a set of tuples with the perm name and view menu name"} {"_id": "q_349", "text": "Whether the user has this role name"} {"_id": "q_350", "text": "Whether the user has this perm"} {"_id": "q_351", "text": "FAB leaves faulty permissions that need to be cleaned up"} {"_id": "q_352", "text": "Add the new permission, view_menu to ab_permission_view_role if it does not exist.\n It will add the related entry to ab_permission\n and ab_view_menu two meta tables as well.\n\n :param permission_name: Name of the permission.\n :type permission_name: str\n :param view_menu_name: Name of the view-menu\n :type view_menu_name: str\n :return:"} {"_id": "q_353", "text": "Create perm-vm if it does not exist and insert it into the FAB security model for all-dags."} {"_id": "q_354", "text": "Deferred load of Fernet key.\n\n This function could fail either because Cryptography is not installed\n or because the Fernet key is invalid.\n\n :return: Fernet object\n :raises: airflow.exceptions.AirflowException if there's a problem trying to load Fernet"} {"_id": "q_355", "text": 
"Checks for existence of the partition in the AWS Glue Catalog table"} {"_id": "q_356", "text": "Check for message on subscribed queue and write to xcom the message with key ``messages``\n\n :param context: the context object\n :type context: dict\n :return: ``True`` if message is available or ``False``"} {"_id": "q_357", "text": "Returns a snakebite HDFSClient object."} {"_id": "q_358", "text": "Establishes a connection depending on the security mode set via config or environment variable.\n\n :return: a hdfscli InsecureClient or KerberosClient object.\n :rtype: hdfs.InsecureClient or hdfs.ext.kerberos.KerberosClient"} {"_id": "q_359", "text": "Check for the existence of a path in HDFS by querying FileStatus.\n\n :param hdfs_path: The path to check.\n :type hdfs_path: str\n :return: True if the path exists and False if not.\n :rtype: bool"} {"_id": "q_360", "text": "r\"\"\"\n Uploads a file to HDFS.\n\n :param source: Local path to file or folder.\n If it's a folder, all the files inside of it will be uploaded.\n .. 
note:: This implies that folders empty of files will not be created remotely.\n\n :type source: str\n :param destination: Target HDFS path.\n If it already exists and is a directory, files will be uploaded inside.\n :type destination: str\n :param overwrite: Overwrite any existing file or directory.\n :type overwrite: bool\n :param parallelism: Number of threads to use for parallelization.\n A value of `0` (or negative) uses as many threads as there are files.\n :type parallelism: int\n :param \\**kwargs: Keyword arguments forwarded to :meth:`hdfs.client.Client.upload`."} {"_id": "q_361", "text": "Establish a connection to pinot broker through pinot dbapi."} {"_id": "q_362", "text": "Get the connection uri for pinot broker.\n\n e.g: http://localhost:9000/pql"} {"_id": "q_363", "text": "Convert native python ``datetime.time`` object to a format supported by the API"} {"_id": "q_364", "text": "Executes the sql and returns a pandas dataframe\n\n :param sql: the sql statement to be executed (str) or a list of\n sql statements to execute\n :type sql: str or list\n :param parameters: The parameters to render the SQL query with.\n :type parameters: mapping or iterable"} {"_id": "q_365", "text": "A generic way to insert a set of tuples into a table,\n a new transaction is created every commit_every rows\n\n :param table: Name of the target table\n :type table: str\n :param rows: The rows to insert into the table\n :type rows: iterable of tuples\n :param target_fields: The names of the columns to fill in the table\n :type target_fields: iterable of strings\n :param commit_every: The maximum number of rows to insert in one\n transaction. 
Set to 0 to insert all rows in one transaction.\n :type commit_every: int\n :param replace: Whether to replace instead of insert\n :type replace: bool"} {"_id": "q_366", "text": "An endpoint helping check the health status of the Airflow instance,\n including metadatabase and scheduler."} {"_id": "q_367", "text": "A restful endpoint that returns external links for a given Operator\n\n It queries the operator that sent the request for the links it wishes\n to provide for a given external link name.\n\n API: GET\n Args: dag_id: The id of the dag containing the task in question\n task_id: The id of the task in question\n execution_date: The date of execution of the task\n link_name: The name of the link reference to find the actual URL for\n\n Returns:\n 200: {url: , error: None} - returned when there was no problem\n finding the URL\n 404: {url: None, error: } - returned when the operator does\n not return a URL"} {"_id": "q_368", "text": "Opens a connection to the cloudant service and closes it automatically if used as context manager.\n\n .. 
note::\n In the connection form:\n - 'host' equals the 'Account' (optional)\n - 'login' equals the 'Username (or API Key)' (required)\n - 'password' equals the 'Password' (required)\n\n :return: an authorized cloudant session context manager object.\n :rtype: cloudant"} {"_id": "q_369", "text": "Call the SlackWebhookHook to post the provided Slack message"} {"_id": "q_370", "text": "A list of states indicating that a task either has not completed\n a run or has not even started."} {"_id": "q_371", "text": "Save model to a pickle located at `path`"} {"_id": "q_372", "text": "CNN from Nature paper."} {"_id": "q_373", "text": "convolutions-only net\n\n Parameters:\n ----------\n\n conv: list of triples (filter_number, filter_size, stride) specifying parameters for each layer.\n\n Returns:\n\n function that takes tensorflow tensor as input and returns the output of the last convolutional layer"} {"_id": "q_374", "text": "Create a wrapped, monitored SubprocVecEnv for Atari and MuJoCo."} {"_id": "q_375", "text": "Create placeholder to feed observations into of the size appropriate to the observation space\n\n Parameters:\n ----------\n\n ob_space: gym.Space observation space\n\n batch_size: int size of the batch to be fed into input. 
Can be left None in most cases.\n\n name: str name of the placeholder\n\n Returns:\n -------\n\n tensorflow placeholder tensor"} {"_id": "q_376", "text": "Create placeholder to feed observations into of the size appropriate to the observation space, and add input\n encoder of the appropriate type."} {"_id": "q_377", "text": "Deep-copy an observation dict."} {"_id": "q_378", "text": "Calculates q_retrace targets\n\n :param R: Rewards\n :param D: Dones\n :param q_i: Q values for actions taken\n :param v: V values\n :param rho_i: Importance weight for each action\n :return: Q_retrace values"} {"_id": "q_379", "text": "See Schedule.value"} {"_id": "q_380", "text": "Control a single environment instance using IPC and\n shared memory."} {"_id": "q_381", "text": "Main entrypoint for A2C algorithm. Train a policy with given network architecture on a given environment using a2c algorithm.\n\n Parameters:\n -----------\n\n network: policy network architecture. Either string (mlp, lstm, lnlstm, cnn_lstm, cnn, cnn_small, conv_only - see baselines.common/models.py for full list)\n specifying the standard network architecture, or a function that takes tensorflow tensor as input and returns\n tuple (output_tensor, extra_feed) where output tensor is the last network layer output, extra_feed is None for feed-forward\n neural nets, and extra_feed is a dictionary describing how to feed state into the network for recurrent neural nets.\n See baselines.common/policies.py/lstm for more details on using recurrent nets in policies\n\n\n env: RL environment. Should implement interface similar to VecEnv (baselines.common/vec_env) or be wrapped with DummyVecEnv (baselines.common/vec_env/dummy_vec_env.py)\n\n\n seed: seed to make random number sequence in the algorithm reproducible. By default is None which means seed from system noise generator (not reproducible)\n\n nsteps: int, number of steps of the vectorized environment per update (i.e. 
batch size is nsteps * nenv where\n nenv is number of environment copies simulated in parallel)\n\n total_timesteps: int, total number of timesteps to train on (default: 80M)\n\n vf_coef: float, coefficient in front of value function loss in the total loss function (default: 0.5)\n\n ent_coef: float, coefficient in front of the policy entropy in the total loss function (default: 0.01)\n\n max_gradient_norm: float, gradient is clipped to have global L2 norm no more than this value (default: 0.5)\n\n lr: float, learning rate for RMSProp (current implementation has RMSProp hardcoded in) (default: 7e-4)\n\n lrschedule: schedule of learning rate. Can be 'linear', 'constant', or a function [0..1] -> [0..1] that takes fraction of the training progress as input and\n returns fraction of the learning rate (specified as lr) as output\n\n epsilon: float, RMSProp epsilon (stabilizes square root computation in denominator of RMSProp update) (default: 1e-5)\n\n alpha: float, RMSProp decay parameter (default: 0.99)\n\n gamma: float, reward discounting parameter (default: 0.99)\n\n log_interval: int, specifies how frequently the logs are printed out (default: 100)\n\n **network_kwargs: keyword arguments to the policy / network builder. 
See baselines.common/policies.py/build_policy and arguments to a particular type of network\n For instance, 'mlp' network architecture has arguments num_hidden and num_layers."} {"_id": "q_382", "text": "swap and then flatten axes 0 and 1"} {"_id": "q_383", "text": "Print the number of seconds in human readable format.\n\n Examples:\n 2 days\n 2 hours and 37 minutes\n less than a minute\n\n Parameters\n ----------\n seconds_left: int\n Number of seconds to be converted to the ETA\n Returns\n -------\n eta: str\n String representing the pretty ETA."} {"_id": "q_384", "text": "Add a boolean flag to argparse parser.\n\n Parameters\n ----------\n parser: argparse.Parser\n parser to add the flag to\n name: str\n --<name> will enable the flag, while --no-<name> will disable it\n default: bool or None\n default value of the flag\n help: str\n help string for the flag"} {"_id": "q_385", "text": "Stores provided method args as instance attributes."} {"_id": "q_386", "text": "Flattens variables and their gradients."} {"_id": "q_387", "text": "Creates a simple neural network"} {"_id": "q_388", "text": "Re-launches the current script with workers\n Returns \"parent\" for original parent, \"child\" for MPI children"} {"_id": "q_389", "text": "Get default session or create one with a given config"} {"_id": "q_390", "text": "Initialize all the uninitialized variables in the global scope."} {"_id": "q_391", "text": "Adjust shape of the data to the shape of the placeholder if possible.\n If shape is incompatible, AssertionError is thrown\n\n Parameters:\n placeholder tensorflow input placeholder\n\n data input data to be (potentially) reshaped to be fed into placeholder\n\n Returns:\n reshaped data"} {"_id": "q_392", "text": "Configure environment for DeepMind-style Atari."} {"_id": "q_393", "text": "Count the GPUs on this machine."} {"_id": "q_394", "text": "Set CUDA_VISIBLE_DEVICES to MPI rank if not already set"} {"_id": "q_395", "text": "Copies the file from rank 0 to all other ranks\n 
Puts it in the same place on all machines"} {"_id": "q_396", "text": "computes discounted sums along 0th dimension of x.\n\n inputs\n ------\n x: ndarray\n gamma: float\n\n outputs\n -------\n y: ndarray with same shape as x, satisfying\n\n y[t] = x[t] + gamma*x[t+1] + gamma^2*x[t+2] + ... + gamma^k x[t+k],\n where k = len(x) - t - 1"} {"_id": "q_397", "text": "See ReplayBuffer.store_effect"} {"_id": "q_398", "text": "Update priorities of sampled transitions.\n\n sets priority of transition at index idxes[i] in buffer\n to priorities[i].\n\n Parameters\n ----------\n idxes: [int]\n List of idxes of sampled transitions\n priorities: [float]\n List of updated priorities corresponding to\n transitions at the sampled idxes denoted by\n variable `idxes`."} {"_id": "q_399", "text": "Configure environment for retro games, using config similar to DeepMind-style Atari in wrap_deepmind"} {"_id": "q_400", "text": "Creates a sample function that can be used for HER experience replay.\n\n Args:\n replay_strategy (in ['future', 'none']): the HER replay strategy; if set to 'none',\n regular DDPG experience replay is used\n replay_k (int): the ratio between HER replays and regular replays (e.g. k = 4 -> 4 times\n as many HER replays as regular replays are used)\n reward_fun (function): function to re-compute the reward with substituted goals"} {"_id": "q_401", "text": "Estimate the geometric median of points in 2D.\n\n Code from https://stackoverflow.com/a/30305181\n\n Parameters\n ----------\n X : (N,2) ndarray\n Points in 2D. Second axis must be given in xy-form.\n\n eps : float, optional\n Distance threshold when to return the median.\n\n Returns\n -------\n (2,) ndarray\n Geometric median as xy-coordinate."} {"_id": "q_402", "text": "Project the keypoint onto a new position on a new image.\n\n E.g. 
if the keypoint is on its original image at x=(10 of 100 pixels)\n and y=(20 of 100 pixels) and is projected onto a new image with\n size (width=200, height=200), its new position will be (20, 40).\n\n This is intended for cases where the original image is resized.\n It cannot be used for more complex changes (e.g. padding, cropping).\n\n Parameters\n ----------\n from_shape : tuple of int\n Shape of the original image. (Before resize.)\n\n to_shape : tuple of int\n Shape of the new image. (After resize.)\n\n Returns\n -------\n imgaug.Keypoint\n Keypoint object with new coordinates."} {"_id": "q_403", "text": "Move the keypoint around on an image.\n\n Parameters\n ----------\n x : number, optional\n Move by this value on the x axis.\n\n y : number, optional\n Move by this value on the y axis.\n\n Returns\n -------\n imgaug.Keypoint\n Keypoint object with new coordinates."} {"_id": "q_404", "text": "Draw the keypoint onto a given image.\n\n The keypoint is drawn as a square.\n\n Parameters\n ----------\n image : (H,W,3) ndarray\n The image onto which to draw the keypoint.\n\n color : int or list of int or tuple of int or (3,) ndarray, optional\n The RGB color of the keypoint. If a single int ``C``, then that is\n equivalent to ``(C,C,C)``.\n\n alpha : float, optional\n The opacity of the drawn keypoint, where ``1.0`` denotes a fully\n visible keypoint and ``0.0`` an invisible one.\n\n size : int, optional\n The size of the keypoint. 
If set to ``S``, each square will have\n size ``S x S``.\n\n copy : bool, optional\n Whether to copy the image before drawing the keypoint.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an exception if the keypoint is outside of the\n image.\n\n Returns\n -------\n image : (H,W,3) ndarray\n Image with drawn keypoint."} {"_id": "q_405", "text": "Create a shallow copy of the Keypoint object.\n\n Parameters\n ----------\n x : None or number, optional\n Coordinate of the keypoint on the x axis.\n If ``None``, the instance's value will be copied.\n\n y : None or number, optional\n Coordinate of the keypoint on the y axis.\n If ``None``, the instance's value will be copied.\n\n Returns\n -------\n imgaug.Keypoint\n Shallow copy."} {"_id": "q_406", "text": "Create a deep copy of the Keypoint object.\n\n Parameters\n ----------\n x : None or number, optional\n Coordinate of the keypoint on the x axis.\n If ``None``, the instance's value will be copied.\n\n y : None or number, optional\n Coordinate of the keypoint on the y axis.\n If ``None``, the instance's value will be copied.\n\n Returns\n -------\n imgaug.Keypoint\n Deep copy."} {"_id": "q_407", "text": "Project keypoints from one image to a new one.\n\n Parameters\n ----------\n image : ndarray or tuple of int\n New image onto which the keypoints are to be projected.\n May also simply be that new image's shape tuple.\n\n Returns\n -------\n keypoints : imgaug.KeypointsOnImage\n Object containing all projected keypoints."} {"_id": "q_408", "text": "Move the keypoints around on an image.\n\n Parameters\n ----------\n x : number, optional\n Move each keypoint by this value on the x axis.\n\n y : number, optional\n Move each keypoint by this value on the y axis.\n\n Returns\n -------\n out : KeypointsOnImage\n Keypoints after moving them."} {"_id": "q_409", "text": "Create a shallow copy of the KeypointsOnImage object.\n\n Parameters\n ----------\n keypoints : None or list of imgaug.Keypoint, optional\n 
List of keypoints on the image. If ``None``, the instance's\n keypoints will be copied.\n\n shape : tuple of int, optional\n The shape of the image on which the keypoints are placed.\n If ``None``, the instance's shape will be copied.\n\n Returns\n -------\n imgaug.KeypointsOnImage\n Shallow copy."} {"_id": "q_410", "text": "Compute the intersection bounding box of this bounding box and another one.\n\n Note that in extreme cases, the intersection can be a single point, meaning that the intersection bounding box\n will exist, but then also has a height and width of zero.\n\n Parameters\n ----------\n other : imgaug.BoundingBox\n Other bounding box with which to generate the intersection.\n\n default : any, optional\n Default value to return if there is no intersection.\n\n Returns\n -------\n imgaug.BoundingBox or any\n Intersection bounding box of the two bounding boxes if there is an intersection.\n If there is no intersection, the default value will be returned, which can be anything."} {"_id": "q_411", "text": "Compute the union bounding box of this bounding box and another one.\n\n This is equivalent to drawing a bounding box around all corner points of both\n bounding boxes.\n\n Parameters\n ----------\n other : imgaug.BoundingBox\n Other bounding box with which to generate the union.\n\n Returns\n -------\n imgaug.BoundingBox\n Union bounding box of the two bounding boxes."} {"_id": "q_412", "text": "Estimate whether the bounding box is at least partially inside the image area.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use.\n If an ndarray, its shape will be used.\n If a tuple, it is assumed to represent the image shape\n and must contain at least two integers.\n\n Returns\n -------\n bool\n True if the bounding box is at least partially inside the image area. 
False otherwise."} {"_id": "q_413", "text": "Estimate whether the bounding box is partially or fully outside of the image area.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use. If an ndarray, its shape will be used. If a tuple, it is\n assumed to represent the image shape and must contain at least two integers.\n\n fully : bool, optional\n Whether to return True if the bounding box is fully outside of the image area.\n\n partly : bool, optional\n Whether to return True if the bounding box is at least partially outside of the\n image area.\n\n Returns\n -------\n bool\n True if the bounding box is partially/fully outside of the image area, depending\n on defined parameters. False otherwise."} {"_id": "q_414", "text": "Clip off all parts of the bounding box that are outside of the image.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use for the clipping of the bounding box.\n If an ndarray, its shape will be used.\n If a tuple, it is assumed to represent the image shape and must contain at least two integers.\n\n Returns\n -------\n result : imgaug.BoundingBox\n Bounding box, clipped to fall within the image dimensions."} {"_id": "q_415", "text": "Draw the bounding box on an image.\n\n Parameters\n ----------\n image : (H,W,C) ndarray(uint8)\n The image onto which to draw the bounding box.\n\n color : iterable of int, optional\n The color to use, corresponding to the channel layout of the image. Usually RGB.\n\n alpha : float, optional\n The transparency of the drawn bounding box, where 1.0 denotes no transparency and\n 0.0 is invisible.\n\n size : int, optional\n The thickness of the bounding box in pixels. If the value is larger than 1, then\n additional pixels will be added around the bounding box (i.e. 
extension towards the\n outside).\n\n copy : bool, optional\n Whether to copy the input image or change it in-place.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the bounding box is fully outside of the\n image. If set to False, no error will be raised and only the parts inside the image\n will be drawn.\n\n thickness : None or int, optional\n Deprecated.\n\n Returns\n -------\n result : (H,W,C) ndarray(uint8)\n Image with bounding box drawn on it."} {"_id": "q_416", "text": "Extract the image pixels within the bounding box.\n\n This function will zero-pad the image if the bounding box is partially/fully outside of\n the image.\n\n Parameters\n ----------\n image : (H,W) ndarray or (H,W,C) ndarray\n The image from which to extract the pixels within the bounding box.\n\n pad : bool, optional\n Whether to zero-pad the image if the object is partially/fully\n outside of it.\n\n pad_max : None or int, optional\n The maximum number of pixels that may be zero-padded on any side,\n i.e. if this has value ``N`` the total maximum of added pixels\n is ``4*N``.\n This option exists to prevent extremely large images as a result of\n single points being moved very far away during augmentation.\n\n prevent_zero_size : bool, optional\n Whether to prevent height or width of the extracted image from becoming zero.\n If this is set to True and height or width of the bounding box is below 1, the height/width will\n be increased to 1. This can be useful to prevent problems, e.g. with image saving or plotting.\n If it is set to False, images will be returned as ``(H', W')`` or ``(H', W', 3)`` with ``H`` or\n ``W`` potentially being 0.\n\n Returns\n -------\n image : (H',W') ndarray or (H',W',C) ndarray\n Pixels within the bounding box. Zero-padded if the bounding box is partially/fully\n outside of the image. 
If prevent_zero_size is activated, it is guaranteed that ``H'>0``\n and ``W'>0``, otherwise only ``H'>=0`` and ``W'>=0``."} {"_id": "q_417", "text": "Remove all bounding boxes that are fully or partially outside of the image.\n\n Parameters\n ----------\n fully : bool, optional\n Whether to remove bounding boxes that are fully outside of the image.\n\n partly : bool, optional\n Whether to remove bounding boxes that are partially outside of the image.\n\n Returns\n -------\n imgaug.BoundingBoxesOnImage\n Reduced set of bounding boxes, with those that were fully/partially outside of\n the image removed."} {"_id": "q_418", "text": "Clip off all parts from all bounding boxes that are outside of the image.\n\n Returns\n -------\n imgaug.BoundingBoxesOnImage\n Bounding boxes, clipped to fall within the image dimensions."} {"_id": "q_419", "text": "Augmenter that embosses images and overlays the result with the original\n image.\n\n The embossed version pronounces highlights and shadows,\n letting the image look as if it was recreated on a metal plate (\"embossed\").\n\n dtype support::\n\n See ``imgaug.augmenters.convolutional.Convolve``.\n\n Parameters\n ----------\n alpha : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Visibility of the embossed image. 
At 0, only the original image is\n visible, at 1.0 only its embossed version is visible.\n\n * If an int or float, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n strength : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Parameter that controls the strength of the embossing.\n Sane values are somewhere in the range ``(0, 2)`` with 1 being the standard\n embossing effect. Default value is 1.\n\n * If an int or float, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = Emboss(alpha=(0.0, 1.0), strength=(0.5, 1.5))\n\n embosses an image with a variable strength in the range ``0.5 <= x <= 1.5``\n and overlays the result with a variable alpha in the range ``0.0 <= a <= 1.0``\n over the old image."} {"_id": "q_420", "text": "Augmenter that detects edges that have certain directions and marks them\n in a black and white image and then overlays the result with the original\n image.\n\n dtype support::\n\n See ``imgaug.augmenters.convolutional.Convolve``.\n\n Parameters\n ----------\n alpha : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n 
Visibility of the edge image. At 0, only the original image is\n visible, at 1.0 only its edge-detected version is visible.\n\n * If an int or float, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n direction : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Angle of edges to pronounce, where 0 represents 0 degrees and 1.0\n represents 360 degrees (both clockwise, starting at the top).\n Default value is ``(0.0, 1.0)``, i.e. pick a random angle per image.\n\n * If an int or float, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = DirectedEdgeDetect(alpha=1.0, direction=0)\n\n turns input images into edge images in which edges are detected from\n top side of the image (i.e. the top sides of horizontal edges are\n added to the output).\n\n >>> aug = DirectedEdgeDetect(alpha=1.0, direction=90/360)\n\n same as before, but detecting edges from the right (right side of each\n vertical edge).\n\n >>> aug = DirectedEdgeDetect(alpha=1.0, direction=(0.0, 1.0))\n\n same as before, but detecting edges from a variable direction (anything\n between 0 and 1.0, i.e. 
0 degrees and 360 degrees, starting from the\n top and moving clockwise).\n\n >>> aug = DirectedEdgeDetect(alpha=(0.0, 0.3), direction=0)\n\n generates edge images (edges detected from the top) and overlays them\n with the input images by a variable amount between 0 and 30 percent\n (e.g. for 0.3 then ``0.7*old_image + 0.3*edge_image``)."} {"_id": "q_421", "text": "Normalize a shape tuple or array to a shape tuple.\n\n Parameters\n ----------\n shape : tuple of int or ndarray\n The input to normalize. May optionally be an array.\n\n Returns\n -------\n tuple of int\n Shape tuple."} {"_id": "q_422", "text": "Project coordinates from one image shape to another.\n\n This performs a relative projection, e.g. a point at 60% of the old\n image width will be at 60% of the new image width after projection.\n\n Parameters\n ----------\n coords : ndarray or tuple of number\n Coordinates to project. Either a ``(N,2)`` numpy array or a tuple\n of `(x,y)` coordinates.\n\n from_shape : tuple of int or ndarray\n Old image shape.\n\n to_shape : tuple of int or ndarray\n New image shape.\n\n Returns\n -------\n ndarray\n Projected coordinates as ``(N,2)`` ``float32`` numpy array."} {"_id": "q_423", "text": "Create an augmenter to add poisson noise to images.\n\n Poisson noise is comparable to gaussian noise as in ``AdditiveGaussianNoise``, but the values are sampled from\n a poisson distribution instead of a gaussian distribution. 
As poisson distributions produce only positive numbers,\n the signs of the sampled values are randomly flipped here.\n\n Values of around ``10.0`` for `lam` lead to visible noise (for uint8).\n Values of around ``20.0`` for `lam` lead to very visible noise (for uint8).\n It is usually recommended to set `per_channel` to True.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.AddElementwise``.\n\n Parameters\n ----------\n lam : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Lambda parameter of the poisson distribution. Recommended values are around ``0.0`` to ``10.0``.\n\n * If a number, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n per_channel : bool or float, optional\n Whether to use the same noise value per pixel for all channels (False)\n or to sample a new value for each channel (True).\n If this value is a float ``p``, then for ``p`` percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.AdditivePoissonNoise(lam=5.0)\n\n Adds poisson noise sampled from ``Poisson(5.0)`` to images.\n\n >>> aug = iaa.AdditivePoissonNoise(lam=(0.0, 10.0))\n\n Adds poisson noise sampled from ``Poisson(x)`` to images, where ``x`` is randomly sampled per image from the\n interval ``[0.0, 10.0]``.\n\n >>> aug = iaa.AdditivePoissonNoise(lam=5.0, per_channel=True)\n\n Adds poisson noise sampled from 
``Poisson(5.0)`` to images,\n where the values are different per pixel *and* channel (e.g. a\n different one for red, green and blue channels for the same pixel).\n\n >>> aug = iaa.AdditivePoissonNoise(lam=(0.0, 10.0), per_channel=True)\n\n Adds poisson noise sampled from ``Poisson(x)`` to images,\n with ``x`` being sampled from ``uniform(0.0, 10.0)`` per image, pixel and channel.\n This is the *recommended* configuration.\n\n >>> aug = iaa.AdditivePoissonNoise(lam=2, per_channel=0.5)\n\n Adds poisson noise sampled from the distribution ``Poisson(2)`` to images,\n where the values are sometimes (50 percent of all cases) the same\n per pixel for all channels and sometimes different (other 50 percent)."} {"_id": "q_424", "text": "Augmenter that sets a certain fraction of pixels in images to zero.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.MultiplyElementwise``.\n\n Parameters\n ----------\n p : float or tuple of float or imgaug.parameters.StochasticParameter, optional\n The probability of any pixel being dropped (i.e. set to zero).\n\n * If a float, then that value will be used for all images. A value\n of 1.0 would mean that all pixels will be dropped and 0.0 that\n no pixels would be dropped. 
A value of 0.05 corresponds to 5\n percent of all pixels dropped.\n * If a tuple ``(a, b)``, then a value p will be sampled from the\n range ``a <= p <= b`` per image and be used as the pixel's dropout\n probability.\n * If a StochasticParameter, then this parameter will be used to\n determine per pixel whether it should be dropped (sampled value\n of 0) or shouldn't (sampled value of 1).\n If you instead want to provide the probability as a stochastic\n parameter, you can usually do ``imgaug.parameters.Binomial(1-p)``\n to convert parameter `p` to a 0/1 representation.\n\n per_channel : bool or float, optional\n Whether to use the same value (is dropped / is not dropped)\n for all channels of a pixel (False) or to sample a new value for each\n channel (True).\n If this value is a float p, then for p percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.Dropout(0.02)\n\n drops 2 percent of all pixels.\n\n >>> aug = iaa.Dropout((0.0, 0.05))\n\n drops in each image a random fraction of all pixels, where the fraction\n is in the range ``0.0 <= x <= 0.05``.\n\n >>> aug = iaa.Dropout(0.02, per_channel=True)\n\n drops 2 percent of all pixels in a channel-wise fashion, i.e. 
it is unlikely\n for any pixel to have all channels set to zero (black pixels).\n\n >>> aug = iaa.Dropout(0.02, per_channel=0.5)\n\n same as previous example, but the `per_channel` feature is only active\n for 50 percent of all images."} {"_id": "q_425", "text": "Creates an augmenter to apply impulse noise to an image.\n\n This is identical to ``SaltAndPepper``, except that per_channel is always set to True.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.SaltAndPepper``."} {"_id": "q_426", "text": "Adds salt and pepper noise to an image, i.e. some white-ish and black-ish pixels.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.ReplaceElementwise``.\n\n Parameters\n ----------\n p : float or tuple of float or list of float or imgaug.parameters.StochasticParameter, optional\n Probability of changing a pixel to salt/pepper noise.\n\n * If a float, then that value will be used for all images as the\n probability.\n * If a tuple ``(a, b)``, then a probability will be sampled per image\n from the range ``a <= x <= b``.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, then this parameter will be used as\n the *mask*, i.e. 
it is expected to contain values between\n 0.0 and 1.0, where 1.0 means that salt/pepper is to be added\n at that location.\n\n per_channel : bool or float, optional\n Whether to use the same value for all channels (False)\n or to sample a new value for each channel (True).\n If this value is a float ``p``, then for ``p`` percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.SaltAndPepper(0.05)\n\n Replaces 5 percent of all pixels with salt/pepper."} {"_id": "q_427", "text": "Adds pepper noise to an image, i.e. black-ish pixels.\n\n This is similar to dropout, but slower and the black pixels are not uniformly black.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.ReplaceElementwise``.\n\n Parameters\n ----------\n p : float or tuple of float or list of float or imgaug.parameters.StochasticParameter, optional\n Probability of changing a pixel to pepper noise.\n\n * If a float, then that value will be used for all images as the\n probability.\n * If a tuple ``(a, b)``, then a probability will be sampled per image\n from the range ``a <= x <= b``.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, then this parameter will be used as\n the *mask*, i.e. 
it is expected to contain values between\n 0.0 and 1.0, where 1.0 means that pepper is to be added\n at that location.\n\n per_channel : bool or float, optional\n Whether to use the same value for all channels (False)\n or to sample a new value for each channel (True).\n If this value is a float ``p``, then for ``p`` percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.Pepper(0.05)\n\n Replaces 5 percent of all pixels with pepper."} {"_id": "q_428", "text": "Adds coarse pepper noise to an image, i.e. rectangles that contain noisy black-ish pixels.\n\n dtype support::\n\n See ``imgaug.augmenters.arithmetic.ReplaceElementwise``.\n\n Parameters\n ----------\n p : float or tuple of float or list of float or imgaug.parameters.StochasticParameter, optional\n Probability of changing a pixel to pepper noise.\n\n * If a float, then that value will be used for all images as the\n probability.\n * If a tuple ``(a, b)``, then a probability will be sampled per image\n from the range ``a <= x <= b``.\n * If a list, then a random value will be sampled from that list\n per image.\n * If a StochasticParameter, then this parameter will be used as\n the *mask*, i.e. it is expected to contain values between\n 0.0 and 1.0, where 1.0 means that pepper is to be added\n at that location.\n\n size_px : int or tuple of int or imgaug.parameters.StochasticParameter, optional\n The size of the lower resolution image from which to sample the noise\n mask in absolute pixel dimensions.\n\n * If an integer, then that size will be used for both height and\n width. E.g. 
a value of 3 would lead to a ``3x3`` mask, which is then\n upsampled to ``HxW``, where ``H`` is the image height and ``W`` the image width.\n * If a tuple ``(a, b)``, then two values ``M``, ``N`` will be sampled from the\n range ``[a..b]`` and the mask will be generated at size ``MxN``, then\n upsampled to ``HxW``.\n * If a StochasticParameter, then this parameter will be used to\n determine the sizes. It is expected to be discrete.\n\n size_percent : float or tuple of float or imgaug.parameters.StochasticParameter, optional\n The size of the lower resolution image from which to sample the noise\n mask *in percent* of the input image.\n\n * If a float, then that value will be used as the percentage of the\n height and width (relative to the original size). E.g. for value\n p, the mask will be sampled from ``(p*H)x(p*W)`` and later upsampled\n to ``HxW``.\n * If a tuple ``(a, b)``, then two values ``m``, ``n`` will be sampled from the\n interval ``(a, b)`` and used as the percentages, i.e. the mask size\n will be ``(m*H)x(n*W)``.\n * If a StochasticParameter, then this parameter will be used to\n sample the percentage values. It is expected to be continuous.\n\n per_channel : bool or float, optional\n Whether to use the same value (is dropped / is not dropped)\n for all channels of a pixel (False) or to sample a new value for each\n channel (True).\n If this value is a float ``p``, then for ``p`` percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n min_size : int, optional\n Minimum size of the low resolution mask, both width and height. If\n `size_percent` or `size_px` leads to a lower value than this, `min_size`\n will be used instead. 
This should never have a value of less than 2,\n otherwise one may end up with a 1x1 low resolution mask, leading easily\n to the whole image being replaced.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.CoarsePepper(0.05, size_percent=(0.01, 0.1))\n\n Replaces 5 percent of all pixels with pepper in an image that has\n 1 to 10 percent of the input image size, then upscales the results\n to the input image size, leading to large rectangular areas being replaced."} {"_id": "q_429", "text": "Augmenter that changes the contrast of images.\n\n dtype support::\n\n See ``imgaug.augmenters.contrast.LinearContrast``.\n\n Parameters\n ----------\n alpha : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Strength of the contrast normalization. 
Higher values than 1.0\n lead to higher contrast, lower values decrease the contrast.\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value will be sampled per image from\n the range ``a <= x <= b`` and be used as the alpha value.\n * If a list, then a random value will be sampled per image from\n that list.\n * If a StochasticParameter, then this parameter will be used to\n sample the alpha value per image.\n\n per_channel : bool or float, optional\n Whether to use the same value for all channels (False)\n or to sample a new value for each channel (True).\n If this value is a float ``p``, then for ``p`` percent of all images\n `per_channel` will be treated as True, otherwise as False.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> iaa.ContrastNormalization((0.5, 1.5))\n\n Decreases or increases contrast per image by a random factor between\n 0.5 and 1.5. The factor 0.5 means that any difference from the center value\n (i.e. 128) will be halved, leading to less contrast.\n\n >>> iaa.ContrastNormalization((0.5, 1.5), per_channel=0.5)\n\n Same as before, but for 50 percent of all images the normalization is done\n independently per channel (i.e. factors can vary per channel for the same\n image). In the other 50 percent of all images, the factor is the same for\n all channels."} {"_id": "q_430", "text": "Checks whether a variable is a numpy integer array.\n\n Parameters\n ----------\n val\n The variable to check.\n\n Returns\n -------\n bool\n True if the variable is a numpy integer array. 
Otherwise False."} {"_id": "q_431", "text": "Checks whether a variable is a numpy float array.\n\n Parameters\n ----------\n val\n The variable to check.\n\n Returns\n -------\n bool\n True if the variable is a numpy float array. Otherwise False."} {"_id": "q_432", "text": "Creates a copy of a random state.\n\n Parameters\n ----------\n random_state : numpy.random.RandomState\n The random state to copy.\n\n force_copy : bool, optional\n If True, this function will always create a copy of every random\n state. If False, it will not copy numpy's default random state,\n but all other random states.\n\n Returns\n -------\n rs_copy : numpy.random.RandomState\n The copied random state."} {"_id": "q_433", "text": "Create N new random states based on an existing random state or seed.\n\n Parameters\n ----------\n random_state : numpy.random.RandomState\n Random state or seed from which to derive new random states.\n\n n : int, optional\n Number of random states to derive.\n\n Returns\n -------\n list of numpy.random.RandomState\n Derived random states."} {"_id": "q_434", "text": "Generate a normalized rectangle to be extracted from the standard quokka image.\n\n Parameters\n ----------\n extract : 'square' or tuple of number or imgaug.BoundingBox or imgaug.BoundingBoxesOnImage\n Unnormalized representation of the image subarea to be extracted.\n\n * If string ``square``, then a squared area ``(x: 0 to max 643, y: 0 to max 643)``\n will be extracted from the image.\n * If a tuple, then expected to contain four numbers denoting ``x1``, ``y1``, ``x2``\n and ``y2``.\n * If a BoundingBox, then that bounding box's area will be extracted from the image.\n * If a BoundingBoxesOnImage, then expected to contain exactly one bounding box\n and a shape matching the full image dimensions (i.e. (643, 960, *)). 
Then the\n one bounding box will be used similar to BoundingBox.\n\n Returns\n -------\n bb : imgaug.BoundingBox\n Normalized representation of the area to extract from the standard quokka image."} {"_id": "q_435", "text": "Computes the intended new shape of an image-like array after resizing.\n\n Parameters\n ----------\n from_shape : tuple or ndarray\n Old shape of the array. Usually expected to be a tuple of form ``(H, W)`` or ``(H, W, C)`` or\n alternatively an array with two or three dimensions.\n\n to_shape : None or tuple of ints or tuple of floats or int or float or ndarray\n New shape of the array.\n\n * If None, then `from_shape` will be used as the new shape.\n * If an int ``V``, then the new shape will be ``(V, V, [C])``, where ``C`` will be added if it\n is part of `from_shape`.\n * If a float ``V``, then the new shape will be ``(H*V, W*V, [C])``, where ``H`` and ``W`` are the old\n height/width.\n * If a tuple ``(H', W', [C'])`` of ints, then ``H'`` and ``W'`` will be used as the new height\n and width.\n * If a tuple ``(H', W', [C'])`` of floats (except ``C``), then ``H'`` and ``W'`` will\n be used as the new height and width.\n * If a numpy array, then the array's shape will be used.\n\n Returns\n -------\n to_shape_computed : tuple of int\n New shape."} {"_id": "q_436", "text": "Returns an image of a quokka as a numpy array.\n\n Parameters\n ----------\n size : None or float or tuple of int, optional\n Size of the output image. Input into :func:`imgaug.imgaug.imresize_single_image`.\n Usually expected to be a tuple ``(H, W)``, where ``H`` is the desired height\n and ``W`` is the width. 
If None, then the image will not be resized.\n\n extract : None or 'square' or tuple of number or imgaug.BoundingBox or imgaug.BoundingBoxesOnImage\n Subarea of the quokka image to extract:\n\n * If None, then the whole image will be used.\n * If string ``square``, then a squared area ``(x: 0 to max 643, y: 0 to max 643)`` will\n be extracted from the image.\n * If a tuple, then expected to contain four numbers denoting ``x1``, ``y1``, ``x2``\n and ``y2``.\n * If a BoundingBox, then that bounding box's area will be extracted from the image.\n * If a BoundingBoxesOnImage, then expected to contain exactly one bounding box\n and a shape matching the full image dimensions (i.e. ``(643, 960, *)``). Then the\n one bounding box will be used similar to BoundingBox.\n\n Returns\n -------\n img : (H,W,3) ndarray\n The image array of dtype uint8."} {"_id": "q_437", "text": "Returns a segmentation map for the standard example quokka image.\n\n Parameters\n ----------\n size : None or float or tuple of int, optional\n See :func:`imgaug.quokka`.\n\n extract : None or 'square' or tuple of number or imgaug.BoundingBox or imgaug.BoundingBoxesOnImage\n See :func:`imgaug.quokka`.\n\n Returns\n -------\n result : imgaug.SegmentationMapOnImage\n Segmentation map object."} {"_id": "q_438", "text": "Returns example keypoints on the standard example quokka image.\n\n The keypoints cover the eyes, ears, nose and paws.\n\n Parameters\n ----------\n size : None or float or tuple of int or tuple of float, optional\n Size of the output image on which the keypoints are placed. If None, then the keypoints\n are not projected to any new size (positions on the original image are used).\n Floats lead to relative size changes, ints to absolute sizes in pixels.\n\n extract : None or 'square' or tuple of number or imgaug.BoundingBox or imgaug.BoundingBoxesOnImage\n Subarea to extract from the image. 
See :func:`imgaug.quokka`.\n\n Returns\n -------\n kpsoi : imgaug.KeypointsOnImage\n Example keypoints on the quokka image."} {"_id": "q_439", "text": "Returns example bounding boxes on the standard example quokka image.\n\n Currently only a single bounding box is returned that covers the quokka.\n\n Parameters\n ----------\n size : None or float or tuple of int or tuple of float, optional\n Size of the output image on which the BBs are placed. If None, then the BBs\n are not projected to any new size (positions on the original image are used).\n Floats lead to relative size changes, ints to absolute sizes in pixels.\n\n extract : None or 'square' or tuple of number or imgaug.BoundingBox or imgaug.BoundingBoxesOnImage\n Subarea to extract from the image. See :func:`imgaug.quokka`.\n\n Returns\n -------\n bbsoi : imgaug.BoundingBoxesOnImage\n Example BBs on the quokka image."} {"_id": "q_440", "text": "Returns example polygons on the standard example quokka image.\n\n The result contains one polygon, covering the quokka's outline.\n\n Parameters\n ----------\n size : None or float or tuple of int or tuple of float, optional\n Size of the output image on which the polygons are placed. If None,\n then the polygons are not projected to any new size (positions on the\n original image are used). Floats lead to relative size changes, ints\n to absolute sizes in pixels.\n\n extract : None or 'square' or tuple of number or imgaug.BoundingBox or \\\n imgaug.BoundingBoxesOnImage\n Subarea to extract from the image. 
See :func:`imgaug.quokka`.\n\n Returns\n -------\n psoi : imgaug.PolygonsOnImage\n Example polygons on the quokka image."} {"_id": "q_441", "text": "Returns the angle in radians between vectors `v1` and `v2`.\n\n From http://stackoverflow.com/questions/2827393/angles-between-two-n-dimensional-vectors-in-python\n\n Parameters\n ----------\n v1 : (N,) ndarray\n First vector.\n\n v2 : (N,) ndarray\n Second vector.\n\n Returns\n -------\n out : float\n Angle in radians.\n\n Examples\n --------\n >>> angle_between_vectors(np.float32([1, 0, 0]), np.float32([0, 1, 0]))\n 1.570796...\n\n >>> angle_between_vectors(np.float32([1, 0, 0]), np.float32([1, 0, 0]))\n 0.0\n\n >>> angle_between_vectors(np.float32([1, 0, 0]), np.float32([-1, 0, 0]))\n 3.141592..."} {"_id": "q_442", "text": "Compute the intersection point of two lines.\n\n Taken from https://stackoverflow.com/a/20679579 .\n\n Parameters\n ----------\n x1 : number\n x coordinate of the first point on line 1. (The line extends beyond this point.)\n\n y1 : number\n y coordinate of the first point on line 1. (The line extends beyond this point.)\n\n x2 : number\n x coordinate of the second point on line 1. (The line extends beyond this point.)\n\n y2 : number\n y coordinate of the second point on line 1. (The line extends beyond this point.)\n\n x3 : number\n x coordinate of the first point on line 2. (The line extends beyond this point.)\n\n y3 : number\n y coordinate of the first point on line 2. (The line extends beyond this point.)\n\n x4 : number\n x coordinate of the second point on line 2. (The line extends beyond this point.)\n\n y4 : number\n y coordinate of the second point on line 2.
(The line extends beyond this point.)\n\n Returns\n -------\n tuple of number or bool\n The coordinate of the intersection point as a tuple ``(x, y)``.\n If the lines are parallel (no intersection point or an infinite number of them), the result is False."} {"_id": "q_443", "text": "Resizes a single image.\n\n\n dtype support::\n\n See :func:`imgaug.imgaug.imresize_many_images`.\n\n Parameters\n ----------\n image : (H,W,C) ndarray or (H,W) ndarray\n Array of the image to resize.\n Usually recommended to be of dtype uint8.\n\n sizes : float or iterable of int or iterable of float\n See :func:`imgaug.imgaug.imresize_many_images`.\n\n interpolation : None or str or int, optional\n See :func:`imgaug.imgaug.imresize_many_images`.\n\n Returns\n -------\n out : (H',W',C) ndarray or (H',W') ndarray\n The resized image."} {"_id": "q_444", "text": "Compute the number of pixels by which an array has to be padded to fulfill an aspect ratio.\n\n The aspect ratio is given as width/height.\n Depending on which dimension is smaller (height or width), only the corresponding\n sides (left/right or top/bottom) will be padded. In each case, both of the sides will\n be padded equally.\n\n Parameters\n ----------\n arr : (H,W) ndarray or (H,W,C) ndarray\n Image-like array for which to compute pad amounts.\n\n aspect_ratio : float\n Target aspect ratio, given as width/height. E.g.
2.0 denotes the image having twice\n as much width as height.\n\n Returns\n -------\n result : tuple of int\n Required padding amounts to reach the target aspect ratio, given as a tuple\n of the form ``(top, right, bottom, left)``."} {"_id": "q_445", "text": "Resize an array by pooling values within blocks.\n\n dtype support::\n\n * ``uint8``: yes; fully tested\n * ``uint16``: yes; tested\n * ``uint32``: yes; tested (2)\n * ``uint64``: no (1)\n * ``int8``: yes; tested\n * ``int16``: yes; tested\n * ``int32``: yes; tested (2)\n * ``int64``: no (1)\n * ``float16``: yes; tested\n * ``float32``: yes; tested\n * ``float64``: yes; tested\n * ``float128``: yes; tested (2)\n * ``bool``: yes; tested\n\n - (1) results too inaccurate (at least when using np.average as func)\n - (2) Note that scikit-image documentation says that the wrapped pooling function converts\n inputs to float64. Actual tests showed no indication of that happening (at least when\n using preserve_dtype=True).\n\n Parameters\n ----------\n arr : (H,W) ndarray or (H,W,C) ndarray\n Image-like array to pool. Ideally of datatype ``numpy.float64``.\n\n block_size : int or tuple of int\n Spatial size of each group of values to pool, aka kernel size.\n If a single integer, then a symmetric block of that size along height and width will be used.\n If a tuple of two values, it is assumed to be the block size along height and width of the image-like,\n with pooling happening per channel.\n If a tuple of three values, it is assumed to be the block size along height, width and channels.\n\n func : callable\n Function to apply to a given block in order to convert it to a single number,\n e.g.
:func:`numpy.average`, :func:`numpy.min`, :func:`numpy.max`.\n\n cval : number, optional\n Value to use in order to pad the array along its border if the array cannot be divided\n by `block_size` without remainder.\n\n preserve_dtype : bool, optional\n Whether to convert the array back to the input datatype if it is changed away from\n that in the pooling process.\n\n Returns\n -------\n arr_reduced : (H',W') ndarray or (H',W',C') ndarray\n Array after pooling."} {"_id": "q_446", "text": "Resize an array using average pooling.\n\n dtype support::\n\n See :func:`imgaug.imgaug.pool`.\n\n Parameters\n ----------\n arr : (H,W) ndarray or (H,W,C) ndarray\n Image-like array to pool. See :func:`imgaug.pool` for details.\n\n block_size : int or tuple of int\n Size of each block of values to pool. See :func:`imgaug.pool` for details.\n\n cval : number, optional\n Padding value. See :func:`imgaug.pool` for details.\n\n preserve_dtype : bool, optional\n Whether to preserve the input array dtype. See :func:`imgaug.pool` for details.\n\n Returns\n -------\n arr_reduced : (H',W') ndarray or (H',W',C') ndarray\n Array after average pooling."} {"_id": "q_447", "text": "Resize an array using max-pooling.\n\n dtype support::\n\n See :func:`imgaug.imgaug.pool`.\n\n Parameters\n ----------\n arr : (H,W) ndarray or (H,W,C) ndarray\n Image-like array to pool. See :func:`imgaug.pool` for details.\n\n block_size : int or tuple of int\n Size of each block of values to pool. See :func:`imgaug.pool` for details.\n\n cval : number, optional\n Padding value. See :func:`imgaug.pool` for details.\n\n preserve_dtype : bool, optional\n Whether to preserve the input array dtype.
See :func:`imgaug.pool` for details.\n\n Returns\n -------\n arr_reduced : (H',W') ndarray or (H',W',C') ndarray\n Array after max-pooling."} {"_id": "q_448", "text": "Converts the input images to a grid image and shows it in a new window.\n\n dtype support::\n\n minimum of (\n :func:`imgaug.imgaug.draw_grid`,\n :func:`imgaug.imgaug.imshow`\n )\n\n Parameters\n ----------\n images : (N,H,W,3) ndarray or iterable of (H,W,3) array\n See :func:`imgaug.draw_grid`.\n\n rows : None or int, optional\n See :func:`imgaug.draw_grid`.\n\n cols : None or int, optional\n See :func:`imgaug.draw_grid`."} {"_id": "q_449", "text": "Shows an image in a window.\n\n dtype support::\n\n * ``uint8``: yes; not tested\n * ``uint16``: ?\n * ``uint32``: ?\n * ``uint64``: ?\n * ``int8``: ?\n * ``int16``: ?\n * ``int32``: ?\n * ``int64``: ?\n * ``float16``: ?\n * ``float32``: ?\n * ``float64``: ?\n * ``float128``: ?\n * ``bool``: ?\n\n Parameters\n ----------\n image : (H,W,3) ndarray\n Image to show.\n\n backend : {'matplotlib', 'cv2'}, optional\n Library to use to show the image. May be either matplotlib or OpenCV ('cv2').\n OpenCV tends to be faster, but apparently causes more technical issues."} {"_id": "q_450", "text": "Generate a non-silent deprecation warning with stacktrace.\n\n The used warning is ``imgaug.imgaug.DeprecationWarning``.\n\n Parameters\n ----------\n msg : str\n The message of the warning.\n\n stacklevel : int, optional\n How many steps above this function to \"jump\" in the stacktrace for\n the displayed file and line number of the error message.\n Usually 2."} {"_id": "q_451", "text": "Returns whether an augmenter may be executed.\n\n Returns\n -------\n bool\n If True, the augmenter may be executed. 
If False, it may not be executed."} {"_id": "q_452", "text": "A function to be called after the augmentation of images was\n performed.\n\n Returns\n -------\n (N,H,W,C) ndarray or (N,H,W) ndarray or list of (H,W,C) ndarray or list of (H,W) ndarray\n The input images, optionally modified."} {"_id": "q_453", "text": "Augment batches asynchronously.\n\n Parameters\n ----------\n batches : list of imgaug.augmentables.batches.Batch\n The batches to augment.\n\n chunksize : None or int, optional\n Rough indicator of how many tasks should be sent to each worker. Increasing this number can improve\n performance.\n\n callback : None or callable, optional\n Function to call upon finish. See `multiprocessing.Pool`.\n\n error_callback : None or callable, optional\n Function to call upon errors. See `multiprocessing.Pool`.\n\n Returns\n -------\n multiprocessing.MapResult\n Asynchronous result. See `multiprocessing.Pool`."} {"_id": "q_454", "text": "Augment batches from a generator.\n\n Parameters\n ----------\n batches : generator of imgaug.augmentables.batches.Batch\n The batches to augment, provided as a generator. Each call to the generator should yield exactly one\n batch.\n\n chunksize : None or int, optional\n Rough indicator of how many tasks should be sent to each worker.
Increasing this number can improve\n performance.\n\n Yields\n ------\n imgaug.augmentables.batches.Batch\n Augmented batch."} {"_id": "q_455", "text": "Terminate the pool immediately."} {"_id": "q_456", "text": "Returns a batch from the queue of augmented batches.\n\n If workers are still running and there are no batches in the queue,\n it will automatically wait for the next batch.\n\n Returns\n -------\n out : None or imgaug.Batch\n One batch or None if all workers have finished."} {"_id": "q_457", "text": "Converts another parameter's results to negative values.\n\n Parameters\n ----------\n other_param : imgaug.parameters.StochasticParameter\n Other parameter whose sampled values are to be\n modified.\n\n mode : {'invert', 'reroll'}, optional\n How to change the signs. Valid values are ``invert`` and ``reroll``.\n ``invert`` means that wrong signs are simply flipped.\n ``reroll`` means that all samples with wrong signs are sampled again,\n optionally many times, until they randomly end up having the correct\n sign.\n\n reroll_count_max : int, optional\n If `mode` is set to ``reroll``, this determines how often values may\n be rerolled before giving up and simply flipping the sign (as in\n ``mode=\"invert\"``). This shouldn't be set too high, as rerolling is\n expensive.\n\n Examples\n --------\n >>> param = Negative(Normal(0, 1), mode=\"reroll\")\n\n Generates a normal distribution that has only negative values."} {"_id": "q_458", "text": "Estimate the area of the polygon.\n\n Returns\n -------\n number\n Area of the polygon."} {"_id": "q_459", "text": "Project the polygon onto an image with different shape.\n\n The relative coordinates of all points remain the same.\n E.g. a point at (x=20, y=20) on an image (width=100, height=200) will be\n projected on a new image (width=200, height=100) to (x=40, y=10).\n\n This is intended for cases where the original image is resized.\n It cannot be used for more complex changes (e.g.
padding, cropping).\n\n Parameters\n ----------\n from_shape : tuple of int\n Shape of the original image. (Before resize.)\n\n to_shape : tuple of int\n Shape of the new image. (After resize.)\n\n Returns\n -------\n imgaug.Polygon\n Polygon object with new coordinates."} {"_id": "q_460", "text": "Find the index of the point within the exterior that is closest to the given coordinates.\n\n \"Closeness\" is here defined based on euclidean distance.\n This method will raise an AssertionError if the exterior contains no points.\n\n Parameters\n ----------\n x : number\n X-coordinate around which to search for close points.\n\n y : number\n Y-coordinate around which to search for close points.\n\n return_distance : bool, optional\n Whether to also return the distance of the closest point.\n\n Returns\n -------\n int\n Index of the closest point.\n\n number\n Euclidean distance to the closest point.\n This value is only returned if `return_distance` was set to True."} {"_id": "q_461", "text": "Estimate whether the polygon is fully inside the image area.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use.\n If an ndarray, its shape will be used.\n If a tuple, it is assumed to represent the image shape and must contain at least two integers.\n\n Returns\n -------\n bool\n True if the polygon is fully inside the image area.\n False otherwise."} {"_id": "q_462", "text": "Estimate whether the polygon is at least partially inside the image area.\n\n Parameters\n ----------\n image : (H,W,...) 
ndarray or tuple of int\n Image dimensions to use.\n If an ndarray, its shape will be used.\n If a tuple, it is assumed to represent the image shape and must contain at least two integers.\n\n Returns\n -------\n bool\n True if the polygon is at least partially inside the image area.\n False otherwise."} {"_id": "q_463", "text": "Estimate whether the polygon is partially or fully outside of the image area.\n\n Parameters\n ----------\n image : (H,W,...) ndarray or tuple of int\n Image dimensions to use.\n If an ndarray, its shape will be used.\n If a tuple, it is assumed to represent the image shape and must contain at least two integers.\n\n fully : bool, optional\n Whether to return True if the polygon is fully outside of the image area.\n\n partly : bool, optional\n Whether to return True if the polygon is at least partially outside of the image area.\n\n Returns\n -------\n bool\n True if the polygon is partially/fully outside of the image area, depending\n on defined parameters. False otherwise."} {"_id": "q_464", "text": "Extract the image pixels within the polygon.\n\n This function will zero-pad the image if the polygon is partially/fully outside of\n the image.\n\n Parameters\n ----------\n image : (H,W) ndarray or (H,W,C) ndarray\n The image from which to extract the pixels within the polygon.\n\n Returns\n -------\n result : (H',W') ndarray or (H',W',C) ndarray\n Pixels within the polygon.
Zero-padded if the polygon is partially/fully\n outside of the image."} {"_id": "q_465", "text": "Set the first point of the exterior to the given point based on its index.\n\n Note: This method does *not* work in-place.\n\n Parameters\n ----------\n point_idx : int\n Index of the desired starting point.\n\n Returns\n -------\n imgaug.Polygon\n Copy of this polygon with the new point order."} {"_id": "q_466", "text": "Convert this polygon to a Shapely polygon.\n\n Returns\n -------\n shapely.geometry.Polygon\n The Shapely polygon matching this polygon's exterior."} {"_id": "q_467", "text": "Convert this polygon to a Shapely LineString object.\n\n Parameters\n ----------\n closed : bool, optional\n Whether to return the line string with the last point being identical to the first point.\n\n interpolate : int, optional\n Number of points to interpolate between any pair of two consecutive points. These points are added\n to the final line string.\n\n Returns\n -------\n shapely.geometry.LineString\n The Shapely LineString matching the polygon's exterior."} {"_id": "q_468", "text": "Convert this polygon's `exterior` to a ``LineString`` instance.\n\n Parameters\n ----------\n closed : bool, optional\n Whether to close the line string, i.e. to add the first point of\n the `exterior` also as the last point at the end of the line string.\n This has no effect if the polygon has a single point or zero\n points.\n\n Returns\n -------\n imgaug.augmentables.lines.LineString\n Exterior of the polygon as a line string."} {"_id": "q_469", "text": "Estimate if this and other polygon's exterior are almost identical.\n\n The two exteriors can have different numbers of points, but any point\n randomly sampled on the exterior of one polygon should be close to the\n closest point on the exterior of the other polygon.\n\n Note that this method works approximately. One can come up with\n polygons with fairly different shapes that will still be estimated as\n equal by this method. 
In practice, however, this should be unlikely to be\n the case. The probability for something like that goes down as the\n interpolation parameter is increased.\n\n Parameters\n ----------\n other : imgaug.Polygon or (N,2) ndarray or list of tuple\n The other polygon with which to compare the exterior.\n If this is an ndarray, it is assumed to represent an exterior.\n It must then have dtype ``float32`` and shape ``(N,2)`` with the\n second dimension denoting xy-coordinates.\n If this is a list of tuples, it is assumed to represent an exterior.\n Each tuple then must contain exactly two numbers, denoting\n xy-coordinates.\n\n max_distance : number, optional\n The maximum euclidean distance between a point on one polygon and\n the closest point on the other polygon. If the distance is exceeded\n for any such pair, the two exteriors are not viewed as equal. The\n points are either the points contained in the polygon's exterior\n ndarray or interpolated points between these.\n\n points_per_edge : int, optional\n How many points to interpolate on each edge.\n\n Returns\n -------\n bool\n Whether the two polygon's exteriors can be viewed as equal\n (approximate test)."} {"_id": "q_470", "text": "Create a shallow copy of the Polygon object.\n\n Parameters\n ----------\n exterior : list of imgaug.Keypoint or list of tuple or (N,2) ndarray, optional\n List of points defining the polygon. See :func:`imgaug.Polygon.__init__` for details.\n\n label : None or str, optional\n If not None, then the label of the copied object will be set to this value.\n\n Returns\n -------\n imgaug.Polygon\n Shallow copy."} {"_id": "q_471", "text": "Create a deep copy of the Polygon object.\n\n Parameters\n ----------\n exterior : list of Keypoint or list of tuple or (N,2) ndarray, optional\n List of points defining the polygon.
See :func:`imgaug.Polygon.__init__` for details.\n\n label : None or str, optional\n If not None, then the label of the copied object will be set to this value.\n\n Returns\n -------\n imgaug.Polygon\n Deep copy."} {"_id": "q_472", "text": "Remove all polygons that are fully or partially outside of the image.\n\n Parameters\n ----------\n fully : bool, optional\n Whether to remove polygons that are fully outside of the image.\n\n partly : bool, optional\n Whether to remove polygons that are partially outside of the image.\n\n Returns\n -------\n imgaug.PolygonsOnImage\n Reduced set of polygons, with those that were fully/partially\n outside of the image removed."} {"_id": "q_473", "text": "Clip off all parts from all polygons that are outside of the image.\n\n NOTE: The result can contain fewer polygons than the input did. That\n happens when a polygon is fully outside of the image plane.\n\n NOTE: The result can also contain *more* polygons than the input\n did. That happens when distinct parts of a polygon are only\n connected by areas that are outside of the image plane and hence will\n be clipped off, resulting in two or more unconnected polygon parts that\n are left in the image plane.\n\n Returns\n -------\n imgaug.PolygonsOnImage\n Polygons, clipped to fall within the image dimensions.
Count of\n output polygons may differ from the input count."} {"_id": "q_474", "text": "Create a deep copy of the PolygonsOnImage object.\n\n Returns\n -------\n imgaug.PolygonsOnImage\n Deep copy."} {"_id": "q_475", "text": "Create a MultiPolygon from a Shapely MultiPolygon, a Shapely Polygon or a Shapely GeometryCollection.\n\n This also creates all necessary Polygons contained by this MultiPolygon.\n\n Parameters\n ----------\n geometry : shapely.geometry.MultiPolygon or shapely.geometry.Polygon\\\n or shapely.geometry.collection.GeometryCollection\n The object to convert to a MultiPolygon.\n\n label : None or str, optional\n A label assigned to all Polygons within the MultiPolygon.\n\n Returns\n -------\n imgaug.MultiPolygon\n The derived MultiPolygon."} {"_id": "q_476", "text": "Return a list of unordered intersection points."} {"_id": "q_477", "text": "Get predecessor to key, raises KeyError if key is min key\n or key does not exist."} {"_id": "q_478", "text": "Get successor to key, raises KeyError if key is max key\n or key does not exist."} {"_id": "q_479", "text": "Generate 2D OpenSimplex noise from X,Y coordinates."} {"_id": "q_480", "text": "Get the height of a bounding box encapsulating the line."} {"_id": "q_481", "text": "Get the width of a bounding box encapsulating the line."} {"_id": "q_482", "text": "Get for each point whether it is inside of the given image plane.\n\n Parameters\n ----------\n image : ndarray or tuple of int\n Either an image with shape ``(H,W,[C])`` or a tuple denoting\n such an image shape.\n\n Returns\n -------\n ndarray\n Boolean array with one value per point indicating whether it is\n inside of the provided image plane (``True``) or not (``False``)."} {"_id": "q_483", "text": "Get the euclidean distance between each two consecutive points.\n\n Returns\n -------\n ndarray\n Euclidean distances between point pairs.\n Same order as in `coords`. 
For ``N`` points, ``N-1`` distances\n are returned."} {"_id": "q_484", "text": "Compute the minimal distance between the line string and `other`.\n\n Parameters\n ----------\n other : tuple of number \\\n or imgaug.augmentables.kps.Keypoint \\\n or imgaug.augmentables.LineString\n Other object to which to compute the distance.\n\n default\n Value to return if this line string or `other` contain no points.\n\n Returns\n -------\n float\n Distance to `other` or `default` if no distance could be computed."} {"_id": "q_485", "text": "Project the line string onto a differently shaped image.\n\n E.g. if a point of the line string is on its original image at\n ``x=(10 of 100 pixels)`` and ``y=(20 of 100 pixels)`` and is projected\n onto a new image with size ``(width=200, height=200)``, its new\n position will be ``(x=20, y=40)``.\n\n This is intended for cases where the original image is resized.\n It cannot be used for more complex changes (e.g. padding, cropping).\n\n Parameters\n ----------\n from_shape : tuple of int or ndarray\n Shape of the original image. (Before resize.)\n\n to_shape : tuple of int or ndarray\n Shape of the new image. (After resize.)\n\n Returns\n -------\n out : imgaug.augmentables.lines.LineString\n Line string with new coordinates."} {"_id": "q_486", "text": "Estimate whether the line string is fully inside the image area.\n\n Parameters\n ----------\n image : ndarray or tuple of int\n Either an image with shape ``(H,W,[C])`` or a tuple denoting\n such an image shape.\n\n default\n Default value to return if the line string contains no points.\n\n Returns\n -------\n bool\n True if the line string is fully inside the image area.\n False otherwise."} {"_id": "q_487", "text": "Draw the line segments of the line string as a heatmap array.\n\n Parameters\n ----------\n image_shape : tuple of int\n The shape of the image onto which to draw the line mask.\n\n alpha : float, optional\n Opacity of the line string.
Higher values denote a more visible\n line string.\n\n size : int, optional\n Thickness of the line segments.\n\n antialiased : bool, optional\n Whether to draw the line with anti-aliasing activated.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n ndarray\n Float array of shape `image_shape` (no channel axis) with drawn\n line string. All values are in the interval ``[0.0, 1.0]``."} {"_id": "q_488", "text": "Draw the points of the line string as a heatmap array.\n\n Parameters\n ----------\n image_shape : tuple of int\n The shape of the image onto which to draw the point mask.\n\n alpha : float, optional\n Opacity of the line string points. Higher values denote more\n visible points.\n\n size : int, optional\n Size of the points in pixels.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n ndarray\n Float array of shape `image_shape` (no channel axis) with drawn\n line string points. All values are in the interval ``[0.0, 1.0]``."} {"_id": "q_489", "text": "Draw the line segments and points of the line string as a heatmap array.\n\n Parameters\n ----------\n image_shape : tuple of int\n The shape of the image onto which to draw the line mask.\n\n alpha_lines : float, optional\n Opacity of the line string. Higher values denote a more visible\n line string.\n\n alpha_points : float, optional\n Opacity of the line string points.
Higher values denote more\n visible points.\n\n size_lines : int, optional\n Thickness of the line segments.\n\n size_points : int, optional\n Size of the points in pixels.\n\n antialiased : bool, optional\n Whether to draw the line with anti-aliasing activated.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n ndarray\n Float array of shape `image_shape` (no channel axis) with drawn\n line segments and points. All values are in the\n interval ``[0.0, 1.0]``."} {"_id": "q_490", "text": "Draw the line string on an image.\n\n Parameters\n ----------\n image : ndarray\n The `(H,W,C)` `uint8` image onto which to draw the line string.\n\n color : iterable of int, optional\n Color to use as RGB, i.e. three values.\n The color of the line and points are derived from this value,\n unless they are set.\n\n color_lines : None or iterable of int\n Color to use for the line segments as RGB, i.e. three values.\n If ``None``, this value is derived from `color`.\n\n color_points : None or iterable of int\n Color to use for the points as RGB, i.e. three values.\n If ``None``, this value is derived from ``0.5 * color``.\n\n alpha : float, optional\n Opacity of the line string. Higher values denote more visible\n points.\n The alphas of the line and points are derived from this value,\n unless they are set.\n\n alpha_lines : None or float, optional\n Opacity of the line string. Higher values denote a more visible\n line string.\n If ``None``, this value is derived from `alpha`.\n\n alpha_points : None or float, optional\n Opacity of the line string points.
Higher values denote more\n visible points.\n If ``None``, this value is derived from `alpha`.\n\n size : int, optional\n Size of the line string.\n The sizes of the line and points are derived from this value,\n unless they are set.\n\n size_lines : None or int, optional\n Thickness of the line segments.\n If ``None``, this value is derived from `size`.\n\n size_points : None or int, optional\n Size of the points in pixels.\n If ``None``, this value is derived from ``3 * size``.\n\n antialiased : bool, optional\n Whether to draw the line with anti-aliasing activated.\n This currently does not affect the point drawing.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n ndarray\n Image with line string drawn on it."} {"_id": "q_491", "text": "Extract the image pixels covered by the line string.\n\n It will only extract pixels overlapped by the line string.\n\n This function will by default zero-pad the image if the line string is\n partially/fully outside of the image. This is for consistency with\n the same implementations for bounding boxes and polygons.\n\n Parameters\n ----------\n image : ndarray\n The image of shape `(H,W,[C])` from which to extract the pixels\n within the line string.\n\n size : int, optional\n Thickness of the line.\n\n pad : bool, optional\n Whether to zero-pad the image if the object is partially/fully\n outside of it.\n\n pad_max : None or int, optional\n The maximum number of pixels that may be zero-padded on any side,\n i.e.
if this has value ``N`` the total maximum of added pixels\n is ``4*N``.\n This option exists to prevent extremely large images as a result of\n single points being moved very far away during augmentation.\n\n antialiased : bool, optional\n Whether to apply anti-aliasing to the line string.\n\n prevent_zero_size : bool, optional\n Whether to prevent height or width of the extracted image from\n becoming zero. If this is set to True and height or width of the\n line string is below 1, the height/width will be increased to 1.\n This can be useful to prevent problems, e.g. with image saving or\n plotting. If it is set to False, images will be returned as\n ``(H', W')`` or ``(H', W', 3)`` with ``H`` or ``W`` potentially\n being 0.\n\n Returns\n -------\n image : (H',W') ndarray or (H',W',C) ndarray\n Pixels overlapping with the line string. Zero-padded if the\n line string is partially/fully outside of the image and\n ``pad=True``. If `prevent_zero_size` is activated, it is\n guaranteed that ``H'>0`` and ``W'>0``, otherwise only\n ``H'>=0`` and ``W'>=0``."} {"_id": "q_492", "text": "Concatenate this line string with another one.\n\n This will add a line segment between the end point of this line string\n and the start point of `other`.\n\n Parameters\n ----------\n other : imgaug.augmentables.lines.LineString or ndarray \\\n or iterable of tuple of number\n The points to add to this line string.\n\n Returns\n -------\n imgaug.augmentables.lines.LineString\n New line string with concatenated points.\n The `label` of this line string will be kept."} {"_id": "q_493", "text": "Generate a bounding box encapsulating the line string.\n\n Returns\n -------\n None or imgaug.augmentables.bbs.BoundingBox\n Bounding box encapsulating the line string.\n ``None`` if the line string contained no points."} {"_id": "q_494", "text": "Generate a heatmap object from the line string.\n\n This is similar to\n :func:`imgaug.augmentables.lines.LineString.draw_lines_heatmap_array`\n executed
with ``alpha=1.0``. The result is wrapped in a\n ``HeatmapsOnImage`` object instead of just an array.\n No points are drawn.\n\n Parameters\n ----------\n image_shape : tuple of int\n The shape of the image onto which to draw the line mask.\n\n size_lines : int, optional\n Thickness of the line.\n\n size_points : int, optional\n Size of the points in pixels.\n\n antialiased : bool, optional\n Whether to draw the line with anti-aliasing activated.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image. If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n imgaug.augmentables.heatmaps.HeatmapsOnImage\n Heatmap object containing drawn line string."} {"_id": "q_495", "text": "Generate a segmentation map object from the line string.\n\n This is similar to\n :func:`imgaug.augmentables.lines.LineString.draw_mask`.\n The result is wrapped in a ``SegmentationMapOnImage`` object\n instead of just an array.\n\n Parameters\n ----------\n image_shape : tuple of int\n The shape of the image onto which to draw the line mask.\n\n size_lines : int, optional\n Thickness of the line.\n\n size_points : int, optional\n Size of the points in pixels.\n\n raise_if_out_of_image : bool, optional\n Whether to raise an error if the line string is fully\n outside of the image.
If set to False, no error will be raised and\n only the parts inside the image will be drawn.\n\n Returns\n -------\n imgaug.augmentables.segmaps.SegmentationMapOnImage\n Segmentation map object containing drawn line string."} {"_id": "q_496", "text": "Compare this and another LineString's coordinates.\n\n This is an approximate method based on pointwise distances and can\n in rare corner cases produce wrong outputs.\n\n Parameters\n ----------\n other : imgaug.augmentables.lines.LineString \\\n or tuple of number \\\n or ndarray \\\n or list of ndarray \\\n or list of tuple of number\n The other line string or its coordinates.\n\n max_distance : float\n Max distance of any point from the other line string before\n the two line strings are evaluated to be unequal.\n\n points_per_edge : int, optional\n How many points to interpolate on each edge.\n\n Returns\n -------\n bool\n Whether the two LineString's coordinates are almost identical,\n i.e. the max distance is below the threshold.\n If both have no coordinates, ``True`` is returned.\n If only one has no coordinates, ``False`` is returned.\n Beyond that, the number of points is not evaluated."} {"_id": "q_497", "text": "Compare this and another LineString.\n\n Parameters\n ----------\n other : imgaug.augmentables.lines.LineString\n The other line string. Must be a LineString instance, not just\n its coordinates.\n\n max_distance : float, optional\n See :func:`imgaug.augmentables.lines.LineString.coords_almost_equals`.\n\n points_per_edge : int, optional\n See :func:`imgaug.augmentables.lines.LineString.coords_almost_equals`.\n\n Returns\n -------\n bool\n ``True`` if the coordinates are almost equal according to\n :func:`imgaug.augmentables.lines.LineString.coords_almost_equals`\n and additionally the labels are identical.
Otherwise ``False``."} {"_id": "q_498", "text": "Create a shallow copy of the LineString object.\n\n Parameters\n ----------\n coords : None or iterable of tuple of number or ndarray\n If not ``None``, then the coords of the copied object will be set\n to this value.\n\n label : None or str\n If not ``None``, then the label of the copied object will be set to\n this value.\n\n Returns\n -------\n imgaug.augmentables.lines.LineString\n Shallow copy."} {"_id": "q_499", "text": "Clip off all parts of the line strings that are outside of the image.\n\n Returns\n -------\n imgaug.augmentables.lines.LineStringsOnImage\n Line strings, clipped to fall within the image dimensions."} {"_id": "q_500", "text": "Create a shallow copy of the LineStringsOnImage object.\n\n Parameters\n ----------\n line_strings : None \\\n or list of imgaug.augmentables.lines.LineString, optional\n List of line strings on the image.\n If not ``None``, then the ``line_strings`` attribute of the copied\n object will be set to this value.\n\n shape : None or tuple of int or ndarray, optional\n The shape of the image on which the objects are placed.\n Either an image with shape ``(H,W,[C])`` or a tuple denoting\n such an image shape.\n If not ``None``, then the ``shape`` attribute of the copied object\n will be set to this value.\n\n Returns\n -------\n imgaug.augmentables.lines.LineStringsOnImage\n Shallow copy."} {"_id": "q_501", "text": "Create a deep copy of the LineStringsOnImage object.\n\n Parameters\n ----------\n line_strings : None \\\n or list of imgaug.augmentables.lines.LineString, optional\n List of line strings on the image.\n If not ``None``, then the ``line_strings`` attribute of the copied\n object will be set to this value.\n\n shape : None or tuple of int or ndarray, optional\n The shape of the image on which the objects are placed.\n Either an image with shape ``(H,W,[C])`` or a tuple denoting\n such an image shape.\n If not ``None``, then the ``shape`` attribute of the copied 
object\n will be set to this value.\n\n Returns\n -------\n imgaug.augmentables.lines.LineStringsOnImage\n Deep copy."} {"_id": "q_502", "text": "Blend two images using an alpha blending.\n\n In an alpha blending, the two images are naively mixed. Let ``A`` be the foreground image\n and ``B`` the background image and ``a`` is the alpha value. Each pixel intensity is then\n computed as ``a * A_ij + (1-a) * B_ij``.\n\n dtype support::\n\n * ``uint8``: yes; fully tested\n * ``uint16``: yes; fully tested\n * ``uint32``: yes; fully tested\n * ``uint64``: yes; fully tested (1)\n * ``int8``: yes; fully tested\n * ``int16``: yes; fully tested\n * ``int32``: yes; fully tested\n * ``int64``: yes; fully tested (1)\n * ``float16``: yes; fully tested\n * ``float32``: yes; fully tested\n * ``float64``: yes; fully tested (1)\n * ``float128``: no (2)\n * ``bool``: yes; fully tested (3)\n\n - (1) Tests show that these dtypes work, but a conversion to float128 happens, which only\n has 96 bits of size instead of true 128 bits and hence not twice as much resolution.\n It is possible that these dtypes result in inaccuracies, though the tests did not\n indicate that.\n - (2) Not available due to the input dtype having to be increased to an equivalent float\n dtype with two times the input resolution.\n - (3) Mapped internally to ``float16``.\n\n Parameters\n ----------\n image_fg : (H,W,[C]) ndarray\n Foreground image. Shape and dtype kind must match the one of the\n background image.\n\n image_bg : (H,W,[C]) ndarray\n Background image. Shape and dtype kind must match the one of the\n foreground image.\n\n alpha : number or iterable of number or ndarray\n The blending factor, between 0.0 and 1.0. Can be interpreted as the opacity of the\n foreground image. Values around 1.0 result in only the foreground image being visible.\n Values around 0.0 result in only the background image being visible.\n Multiple alphas may be provided. 
In these cases, there must be exactly one alpha per\n channel in the foreground/background image. Alternatively, for ``(H,W,C)`` images,\n either one ``(H,W)`` array or an ``(H,W,C)`` array of alphas may be provided,\n denoting the elementwise alpha value.\n\n eps : number, optional\n Controls when an alpha is to be interpreted as exactly 1.0 or exactly 0.0, resulting\n in only the foreground/background being visible and skipping the actual computation.\n\n Returns\n -------\n image_blend : (H,W,C) ndarray\n Blend of foreground and background image."} {"_id": "q_503", "text": "Augmenter that blurs images in a way that fakes camera or object movements.\n\n dtype support::\n\n See ``imgaug.augmenters.convolutional.Convolve``.\n\n Parameters\n ----------\n k : int or tuple of int or list of int or imgaug.parameters.StochasticParameter, optional\n Kernel size to use.\n\n * If a single int, then that value will be used for the height\n and width of the kernel.\n * If a tuple of two ints ``(a, b)``, then the kernel size will be\n sampled from the interval ``[a..b]``.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then ``N`` samples will be drawn from\n that parameter per ``N`` input images, each representing the kernel\n size for the nth image.\n\n angle : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Angle of the motion blur in degrees (clockwise, relative to top center direction).\n\n * If a number, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n direction : number or tuple of number or list of number or imgaug.parameters.StochasticParameter, optional\n Forward/backward direction of the motion blur. 
Lower values towards -1.0 will point the motion blur towards\n the back (with angle provided via `angle`). Higher values towards 1.0 will point the motion blur forward.\n A value of 0.0 leads to a uniformly (but still angled) motion blur.\n\n * If a number, exactly that value will be used.\n * If a tuple ``(a, b)``, a random value from the range ``a <= x <= b`` will\n be sampled per image.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, a value will be sampled from the\n parameter per image.\n\n order : int or iterable of int or imgaug.ALL or imgaug.parameters.StochasticParameter, optional\n Interpolation order to use when rotating the kernel according to `angle`.\n See :func:`imgaug.augmenters.geometric.Affine.__init__`.\n Recommended to be ``0`` or ``1``, with ``0`` being faster, but less continuous/smooth as `angle` is changed,\n particularly around multiple of 45 degrees.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.MotionBlur(k=15)\n\n Create a motion blur augmenter with kernel size of 15x15.\n\n >>> aug = iaa.MotionBlur(k=15, angle=[-45, 45])\n\n Create a motion blur augmenter with kernel size of 15x15 and a blur angle of either -45 or 45 degrees (randomly\n picked per image)."} {"_id": "q_504", "text": "Augmenter to draw clouds in images.\n\n This is a wrapper around ``CloudLayer``. It executes 1 to 2 layers per image, leading to varying densities\n and frequency patterns of clouds.\n\n This augmenter seems to be fairly robust w.r.t. the image size. 
Tested with ``96x128``, ``192x256``\n and ``960x1280``.\n\n dtype support::\n\n * ``uint8``: yes; tested\n * ``uint16``: no (1)\n * ``uint32``: no (1)\n * ``uint64``: no (1)\n * ``int8``: no (1)\n * ``int16``: no (1)\n * ``int32``: no (1)\n * ``int64``: no (1)\n * ``float16``: no (1)\n * ``float32``: no (1)\n * ``float64``: no (1)\n * ``float128``: no (1)\n * ``bool``: no (1)\n\n - (1) Parameters of this augmenter are optimized for the value range of uint8.\n While other dtypes may be accepted, they will lead to images augmented in\n ways inappropriate for the respective dtype.\n\n Parameters\n ----------\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.Clouds()\n\n Creates an augmenter that adds clouds to images."} {"_id": "q_505", "text": "Augmenter to draw fog in images.\n\n This is a wrapper around ``CloudLayer``. It executes a single layer per image with a configuration leading\n to fairly dense clouds with low-frequency patterns.\n\n This augmenter seems to be fairly robust w.r.t. the image size. 
Tested with ``96x128``, ``192x256``\n and ``960x1280``.\n\n dtype support::\n\n * ``uint8``: yes; tested\n * ``uint16``: no (1)\n * ``uint32``: no (1)\n * ``uint64``: no (1)\n * ``int8``: no (1)\n * ``int16``: no (1)\n * ``int32``: no (1)\n * ``int64``: no (1)\n * ``float16``: no (1)\n * ``float32``: no (1)\n * ``float64``: no (1)\n * ``float128``: no (1)\n * ``bool``: no (1)\n\n - (1) Parameters of this augmenter are optimized for the value range of uint8.\n While other dtypes may be accepted, they will lead to images augmented in\n ways inappropriate for the respective dtype.\n\n Parameters\n ----------\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.Fog()\n\n Creates an augmenter that adds fog to images."} {"_id": "q_506", "text": "Augmenter to add falling snowflakes to images.\n\n This is a wrapper around ``SnowflakesLayer``. It executes 1 to 3 layers per image.\n\n dtype support::\n\n * ``uint8``: yes; tested\n * ``uint16``: no (1)\n * ``uint32``: no (1)\n * ``uint64``: no (1)\n * ``int8``: no (1)\n * ``int16``: no (1)\n * ``int32``: no (1)\n * ``int64``: no (1)\n * ``float16``: no (1)\n * ``float32``: no (1)\n * ``float64``: no (1)\n * ``float128``: no (1)\n * ``bool``: no (1)\n\n - (1) Parameters of this augmenter are optimized for the value range of uint8.\n While other dtypes may be accepted, they will lead to images augmented in\n ways inappropriate for the respective dtype.\n\n Parameters\n ----------\n density : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Density of the snowflake layer, as a probability of each pixel in low resolution space to be a snowflake.\n Valid value range is ``(0.0, 1.0)``. 
Recommended to be around ``(0.01, 0.075)``.\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n density_uniformity : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Size uniformity of the snowflakes. Higher values denote more similarly sized snowflakes.\n Valid value range is ``(0.0, 1.0)``. Recommended to be around ``0.5``.\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n flake_size : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Size of the snowflakes. This parameter controls the resolution at which snowflakes are sampled.\n Higher values mean that the resolution is closer to the input image's resolution and hence each sampled\n snowflake will be smaller (because of the smaller pixel size).\n\n Valid value range is ``[0.0, 1.0)``. 
Recommended values:\n\n * On ``96x128`` a value of ``(0.1, 0.4)`` worked well.\n * On ``192x256`` a value of ``(0.2, 0.7)`` worked well.\n * On ``960x1280`` a value of ``(0.7, 0.95)`` worked well.\n\n Allowed datatypes:\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n flake_size_uniformity : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Controls the size uniformity of the snowflakes. Higher values mean that the snowflakes are more similarly\n sized. Valid value range is ``(0.0, 1.0)``. Recommended to be around ``0.5``.\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n angle : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Angle in degrees of motion blur applied to the snowflakes, where ``0.0`` is motion blur that points straight\n upwards. Recommended to be around ``(-30, 30)``.\n See also :func:`imgaug.augmenters.blur.MotionBlur.__init__`.\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n speed : number or tuple of number or list of number or imgaug.parameters.StochasticParameter\n Perceived falling speed of the snowflakes. 
This parameter controls the motion blur's kernel size.\n It follows roughly the form ``kernel_size = image_size * speed``. Hence,\n Values around ``1.0`` denote that the motion blur should \"stretch\" each snowflake over the whole image.\n\n Valid value range is ``(0.0, 1.0)``. Recommended values:\n\n * On ``96x128`` a value of ``(0.01, 0.05)`` worked well.\n * On ``192x256`` a value of ``(0.007, 0.03)`` worked well.\n * On ``960x1280`` a value of ``(0.001, 0.03)`` worked well.\n\n\n Allowed datatypes:\n\n * If a number, then that value will be used for all images.\n * If a tuple ``(a, b)``, then a value from the continuous range ``[a, b]`` will be used.\n * If a list, then a random value will be sampled from that list per image.\n * If a StochasticParameter, then a value will be sampled per image from that parameter.\n\n name : None or str, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n deterministic : bool, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n random_state : None or int or numpy.random.RandomState, optional\n See :func:`imgaug.augmenters.meta.Augmenter.__init__`.\n\n Examples\n --------\n >>> aug = iaa.Snowflakes(flake_size=(0.1, 0.4), speed=(0.01, 0.05))\n\n Adds snowflakes to small images (around ``96x128``).\n\n >>> aug = iaa.Snowflakes(flake_size=(0.2, 0.7), speed=(0.007, 0.03))\n\n Adds snowflakes to medium-sized images (around ``192x256``).\n\n >>> aug = iaa.Snowflakes(flake_size=(0.7, 0.95), speed=(0.001, 0.03))\n\n Adds snowflakes to large images (around ``960x1280``)."} {"_id": "q_507", "text": "Draw the segmentation map as an overlay over an image.\n\n Parameters\n ----------\n image : (H,W,3) ndarray\n Image onto which to draw the segmentation map. 
Dtype is expected to be uint8.\n\n alpha : float, optional\n Alpha/opacity value to use for the mixing of image and segmentation map.\n Higher values mean that the segmentation map will be more visible and the image less visible.\n\n resize : {'segmentation_map', 'image'}, optional\n In case of size differences between the image and segmentation map, either the image or\n the segmentation map can be resized. This parameter controls which of the two will be\n resized to the other's size.\n\n background_threshold : float, optional\n See :func:`imgaug.SegmentationMapOnImage.get_arr_int`.\n\n background_class_id : None or int, optional\n See :func:`imgaug.SegmentationMapOnImage.get_arr_int`.\n\n colors : None or list of tuple of int, optional\n Colors to use. One for each class to draw. If None, then default colors will be used.\n\n draw_background : bool, optional\n If True, the background will be drawn like any other class.\n If False, the background will not be drawn, i.e. the respective background pixels\n will be identical with the image's RGB color at the corresponding spatial location\n and no color overlay will be applied.\n\n Returns\n -------\n mix : (H,W,3) ndarray\n Rendered overlays (dtype is uint8)."} {"_id": "q_508", "text": "Pad the segmentation map on its sides so that it matches a target aspect ratio.\n\n Depending on which dimension is smaller (height or width), only the corresponding\n sides (left/right or top/bottom) will be padded. In each case, both of the sides will\n be padded equally.\n\n Parameters\n ----------\n aspect_ratio : float\n Target aspect ratio, given as width/height. E.g. 2.0 denotes the image having twice\n as much width as height.\n\n mode : str, optional\n Padding mode to use. See :func:`numpy.pad` for details.\n\n cval : number, optional\n Value to use for padding if `mode` is ``constant``. See :func:`numpy.pad` for details.\n\n return_pad_amounts : bool, optional\n If False, then only the padded image will be returned. 
If True, a tuple with two\n entries will be returned, where the first entry is the padded image and the second\n entry are the amounts by which each image side was padded. These amounts are again a\n tuple of the form (top, right, bottom, left), with each value being an integer.\n\n Returns\n -------\n segmap : imgaug.SegmentationMapOnImage\n Padded segmentation map as SegmentationMapOnImage object.\n\n pad_amounts : tuple of int\n Amounts by which the segmentation map was padded on each side, given as a\n tuple ``(top, right, bottom, left)``.\n This tuple is only returned if `return_pad_amounts` was set to True."} {"_id": "q_509", "text": "Resize the segmentation map array to the provided size given the provided interpolation.\n\n Parameters\n ----------\n sizes : float or iterable of int or iterable of float\n New size of the array in ``(height, width)``.\n See :func:`imgaug.imgaug.imresize_single_image` for details.\n\n interpolation : None or str or int, optional\n The interpolation to use during resize.\n See :func:`imgaug.imgaug.imresize_single_image` for details.\n Note: The segmentation map is internally stored as multiple float-based heatmaps,\n making smooth interpolations potentially more reasonable than nearest neighbour\n interpolation.\n\n Returns\n -------\n segmap : imgaug.SegmentationMapOnImage\n Resized segmentation map object."} {"_id": "q_510", "text": "Offer a new event ``s`` at point ``p`` in this queue."} {"_id": "q_511", "text": "Render the heatmaps as RGB images.\n\n Parameters\n ----------\n size : None or float or iterable of int or iterable of float, optional\n Size of the rendered RGB image as ``(height, width)``.\n See :func:`imgaug.imgaug.imresize_single_image` for details.\n If set to None, no resizing is performed and the size of the heatmaps array is used.\n\n cmap : str or None, optional\n Color map of ``matplotlib`` to use in order to convert the heatmaps to RGB images.\n If set to None, no color map will be used and the heatmaps 
will be converted\n to simple intensity maps.\n\n Returns\n -------\n heatmaps_drawn : list of (H,W,3) ndarray\n Rendered heatmaps. One per heatmap array channel. Dtype is uint8."} {"_id": "q_512", "text": "Draw the heatmaps as overlays over an image.\n\n Parameters\n ----------\n image : (H,W,3) ndarray\n Image onto which to draw the heatmaps. Expected to be of dtype uint8.\n\n alpha : float, optional\n Alpha/opacity value to use for the mixing of image and heatmaps.\n Higher values mean that the heatmaps will be more visible and the image less visible.\n\n cmap : str or None, optional\n Color map to use. See :func:`imgaug.HeatmapsOnImage.draw` for details.\n\n resize : {'heatmaps', 'image'}, optional\n In case of size differences between the image and heatmaps, either the image or\n the heatmaps can be resized. This parameter controls which of the two will be resized\n to the other's size.\n\n Returns\n -------\n mix : list of (H,W,3) ndarray\n Rendered overlays. One per heatmap array channel. Dtype is uint8."} {"_id": "q_513", "text": "Pad the heatmaps on their sides so that they match a target aspect ratio.\n\n Depending on which dimension is smaller (height or width), only the corresponding\n sides (left/right or top/bottom) will be padded. In each case, both of the sides will\n be padded equally.\n\n Parameters\n ----------\n aspect_ratio : float\n Target aspect ratio, given as width/height. E.g. 2.0 denotes the image having twice\n as much width as height.\n\n mode : str, optional\n Padding mode to use. See :func:`numpy.pad` for details.\n\n cval : number, optional\n Value to use for padding if `mode` is ``constant``. See :func:`numpy.pad` for details.\n\n return_pad_amounts : bool, optional\n If False, then only the padded image will be returned. If True, a tuple with two\n entries will be returned, where the first entry is the padded image and the second\n entry are the amounts by which each image side was padded. 
These amounts are again a\n tuple of the form (top, right, bottom, left), with each value being an integer.\n\n Returns\n -------\n heatmaps : imgaug.HeatmapsOnImage\n Padded heatmaps as HeatmapsOnImage object.\n\n pad_amounts : tuple of int\n Amounts by which the heatmaps were padded on each side, given as a tuple ``(top, right, bottom, left)``.\n This tuple is only returned if `return_pad_amounts` was set to True."} {"_id": "q_514", "text": "Create a heatmaps object from a heatmap array containing values ranging from 0.0 to 1.0.\n\n Parameters\n ----------\n arr_0to1 : (H,W) or (H,W,C) ndarray\n Heatmap(s) array, where ``H`` is height, ``W`` is width and ``C`` is the number of heatmap channels.\n Expected dtype is float32.\n\n shape : tuple of ints\n Shape of the image on which the heatmap(s) is/are placed. NOT the shape of the\n heatmap(s) array, unless it is identical to the image shape (note the likely\n difference between the arrays in the number of channels).\n If there is not a corresponding image, use the shape of the heatmaps array.\n\n min_value : float, optional\n Minimum value for the heatmaps that the 0-to-1 array represents. This will usually\n be 0.0. It is used when calling :func:`imgaug.HeatmapsOnImage.get_arr`, which converts the\n underlying ``(0.0, 1.0)`` array to value range ``(min_value, max_value)``.\n E.g. 
if you started with heatmaps in the range ``(-1.0, 1.0)`` and projected these\n to (0.0, 1.0), you should call this function with ``min_value=-1.0``, ``max_value=1.0``\n so that :func:`imgaug.HeatmapsOnImage.get_arr` returns heatmap arrays having value\n range (-1.0, 1.0).\n\n max_value : float, optional\n Maximum value for the heatmaps that the 0-to-1 array represents.\n See parameter min_value for details.\n\n Returns\n -------\n heatmaps : imgaug.HeatmapsOnImage\n Heatmaps object."} {"_id": "q_515", "text": "Create a deep copy of the Heatmaps object.\n\n Returns\n -------\n imgaug.HeatmapsOnImage\n Deep copy."} {"_id": "q_516", "text": "Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor.\n\n See ``ToTensor`` for more details.\n\n Args:\n pic (PIL Image or numpy.ndarray): Image to be converted to tensor.\n\n Returns:\n Tensor: Converted image."} {"_id": "q_517", "text": "Normalize a tensor image with mean and standard deviation.\n\n .. note::\n This transform acts out of place by default, i.e., it does not mutate the input tensor.\n\n See :class:`~torchvision.transforms.Normalize` for more details.\n\n Args:\n tensor (Tensor): Tensor image of size (C, H, W) to be normalized.\n mean (sequence): Sequence of means for each channel.\n std (sequence): Sequence of standard deviations for each channel.\n\n Returns:\n Tensor: Normalized Tensor image."} {"_id": "q_518", "text": "r\"\"\"Resize the input PIL Image to the given size.\n\n Args:\n img (PIL Image): Image to be resized.\n size (sequence or int): Desired output size. If size is a sequence like\n (h, w), the output size will be matched to this. If size is an int,\n the smaller edge of the image will be matched to this number maintaining\n the aspect ratio. i.e, if height > width, then image will be rescaled to\n :math:`\\left(\\text{size} \\times \\frac{\\text{height}}{\\text{width}}, \\text{size}\\right)`\n interpolation (int, optional): Desired interpolation. 
Default is\n ``PIL.Image.BILINEAR``\n\n Returns:\n PIL Image: Resized image."} {"_id": "q_519", "text": "r\"\"\"Pad the given PIL Image on all sides with specified padding mode and fill value.\n\n Args:\n img (PIL Image): Image to be padded.\n padding (int or tuple): Padding on each border. If a single int is provided this\n is used to pad all borders. If tuple of length 2 is provided this is the padding\n on left/right and top/bottom respectively. If a tuple of length 4 is provided\n this is the padding for the left, top, right and bottom borders\n respectively.\n fill: Pixel fill value for constant fill. Default is 0. If a tuple of\n length 3, it is used to fill R, G, B channels respectively.\n This value is only used when the padding_mode is constant\n padding_mode: Type of padding. Should be: constant, edge, reflect or symmetric. Default is constant.\n\n - constant: pads with a constant value, this value is specified with fill\n\n - edge: pads with the last value on the edge of the image\n\n - reflect: pads with reflection of image (without repeating the last value on the edge)\n\n padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode\n will result in [3, 2, 1, 2, 3, 4, 3, 2]\n\n - symmetric: pads with reflection of image (repeating the last value on the edge)\n\n padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode\n will result in [2, 1, 1, 2, 3, 4, 4, 3]\n\n Returns:\n PIL Image: Padded image."} {"_id": "q_520", "text": "Crop the given PIL Image.\n\n Args:\n img (PIL Image): Image to be cropped.\n i (int): i in (i,j) i.e coordinates of the upper left corner.\n j (int): j in (i,j) i.e coordinates of the upper left corner.\n h (int): Height of the cropped image.\n w (int): Width of the cropped image.\n\n Returns:\n PIL Image: Cropped image."} {"_id": "q_521", "text": "Crop the given PIL Image and resize it to desired size.\n\n Notably used in :class:`~torchvision.transforms.RandomResizedCrop`.\n\n Args:\n img (PIL Image): Image to 
be cropped.\n i (int): i in (i,j) i.e coordinates of the upper left corner\n j (int): j in (i,j) i.e coordinates of the upper left corner\n h (int): Height of the cropped image.\n w (int): Width of the cropped image.\n size (sequence or int): Desired output size. Same semantics as ``resize``.\n interpolation (int, optional): Desired interpolation. Default is\n ``PIL.Image.BILINEAR``.\n Returns:\n PIL Image: Cropped image."} {"_id": "q_522", "text": "Horizontally flip the given PIL Image.\n\n Args:\n img (PIL Image): Image to be flipped.\n\n Returns:\n PIL Image: Horizontally flipped image."} {"_id": "q_523", "text": "Perform perspective transform of the given PIL Image.\n\n Args:\n img (PIL Image): Image to be transformed.\n coeffs (tuple) : 8-tuple (a, b, c, d, e, f, g, h) which contains the coefficients\n for a perspective transform.\n interpolation: Default- Image.BICUBIC\n Returns:\n PIL Image: Perspectively transformed Image."} {"_id": "q_524", "text": "Vertically flip the given PIL Image.\n\n Args:\n img (PIL Image): Image to be flipped.\n\n Returns:\n PIL Image: Vertically flipped image."} {"_id": "q_525", "text": "Crop the given PIL Image into four corners and the central crop.\n\n .. Note::\n This transform returns a tuple of images and there may be a\n mismatch in the number of inputs and targets your ``Dataset`` returns.\n\n Args:\n size (sequence or int): Desired output size of the crop. If size is an\n int instead of sequence like (h, w), a square crop (size, size) is\n made.\n\n Returns:\n tuple: tuple (tl, tr, bl, br, center)\n Corresponding top left, top right, bottom left, bottom right and center crop."} {"_id": "q_526", "text": "Adjust brightness of an Image.\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n brightness_factor (float): How much to adjust the brightness. Can be\n any non negative number. 
0 gives a black image, 1 gives the\n original image while 2 increases the brightness by a factor of 2.\n\n Returns:\n PIL Image: Brightness adjusted image."} {"_id": "q_527", "text": "Rotate the image by angle.\n\n\n Args:\n img (PIL Image): PIL Image to be rotated.\n angle (float or int): In degrees counter clockwise order.\n resample (``PIL.Image.NEAREST`` or ``PIL.Image.BILINEAR`` or ``PIL.Image.BICUBIC``, optional):\n An optional resampling filter. See `filters`_ for more information.\n If omitted, or if the image has mode \"1\" or \"P\", it is set to ``PIL.Image.NEAREST``.\n expand (bool, optional): Optional expansion flag.\n If true, expands the output image to make it large enough to hold the entire rotated image.\n If false or omitted, make the output image the same size as the input image.\n Note that the expand flag assumes rotation around the center and no translation.\n center (2-tuple, optional): Optional center of rotation.\n Origin is the upper left corner.\n Default is the center of the image.\n\n .. _filters: https://pillow.readthedocs.io/en/latest/handbook/concepts.html#filters"} {"_id": "q_528", "text": "Apply affine transformation on the image keeping image center invariant\n\n Args:\n img (PIL Image): PIL Image to be rotated.\n angle (float or int): rotation angle in degrees between -180 and 180, clockwise direction.\n translate (list or tuple of integers): horizontal and vertical translations (post-rotation translation)\n scale (float): overall scale\n shear (float): shear angle value in degrees between -180 to 180, clockwise direction.\n resample (``PIL.Image.NEAREST`` or ``PIL.Image.BILINEAR`` or ``PIL.Image.BICUBIC``, optional):\n An optional resampling filter.\n See `filters`_ for more information.\n If omitted, or if the image has mode \"1\" or \"P\", it is set to ``PIL.Image.NEAREST``.\n fillcolor (int): Optional fill color for the area outside the transform in the output image. 
(Pillow>=5.0.0)"} {"_id": "q_529", "text": "Finds the class folders in a dataset.\n\n Args:\n dir (string): Root directory path.\n\n Returns:\n tuple: (classes, class_to_idx) where classes are relative to (dir), and class_to_idx is a dictionary.\n\n Ensures:\n No class is a subdirectory of another."} {"_id": "q_530", "text": "Return a Tensor containing the list of labels\n Read the file and keep only the ID of the 3D point."} {"_id": "q_531", "text": "Computes the accuracy over the k top predictions for the specified values of k"} {"_id": "q_532", "text": "This function disables printing when not in master process"} {"_id": "q_533", "text": "Download a file from a url and place it in root.\n\n Args:\n url (str): URL to download file from\n root (str): Directory to place downloaded file in\n filename (str, optional): Name to save the file under. If None, use the basename of the URL\n md5 (str, optional): MD5 checksum of the download. If None, do not check"} {"_id": "q_534", "text": "List all directories at a given root\n\n Args:\n root (str): Path to directory whose folders need to be listed\n prefix (bool, optional): If true, prepends the path to each result, otherwise\n only returns the name of the directories found"} {"_id": "q_535", "text": "List all files ending with a suffix at a given root\n\n Args:\n root (str): Path to directory whose folders need to be listed\n suffix (str or tuple): Suffix of the files to match, e.g. 
'.png' or ('.jpg', '.png').\n It uses the Python \"str.endswith\" method and is passed directly\n prefix (bool, optional): If true, prepends the path to each result, otherwise\n only returns the name of the files found"} {"_id": "q_536", "text": "Get parameters for ``crop`` for a random crop.\n\n Args:\n img (PIL Image): Image to be cropped.\n output_size (tuple): Expected output size of the crop.\n\n Returns:\n tuple: params (i, j, h, w) to be passed to ``crop`` for random crop."} {"_id": "q_537", "text": "Get parameters for ``perspective`` for a random perspective transform.\n\n Args:\n width : width of the image.\n height : height of the image.\n\n Returns:\n List containing [top-left, top-right, bottom-right, bottom-left] of the original image,\n List containing [top-left, top-right, bottom-right, bottom-left] of the transformed image."} {"_id": "q_538", "text": "Get parameters for ``crop`` for a random sized crop.\n\n Args:\n img (PIL Image): Image to be cropped.\n scale (tuple): range of size of the origin size cropped\n ratio (tuple): range of aspect ratio of the origin aspect ratio cropped\n\n Returns:\n tuple: params (i, j, h, w) to be passed to ``crop`` for a random\n sized crop."} {"_id": "q_539", "text": "Get a randomized transform to be applied on image.\n\n Arguments are same as that of __init__.\n\n Returns:\n Transform which randomly adjusts brightness, contrast and\n saturation in a random order."} {"_id": "q_540", "text": "Get parameters for affine transformation\n\n Returns:\n sequence: params to be passed to the affine transformation"} {"_id": "q_541", "text": "Download the MNIST data if it doesn't exist in processed_folder already."} {"_id": "q_542", "text": "Returns theme name.\n\n Checks in this order:\n 1. override\n 2. cookies\n 3. 
settings"} {"_id": "q_543", "text": "Return autocompleter results"} {"_id": "q_544", "text": "Render preferences page && save user preferences"} {"_id": "q_545", "text": "check if the searchQuery contains a bang, and create fitting autocompleter results"} {"_id": "q_546", "text": "Eight-schools joint log-prob."} {"_id": "q_547", "text": "Runs HMC on the eight-schools unnormalized posterior."} {"_id": "q_548", "text": "Decorator to programmatically expand the docstring.\n\n Args:\n **kwargs: Keyword arguments to set. For each key-value pair `k` and `v`,\n the key is found as `${k}` in the docstring and replaced with `v`.\n\n Returns:\n Decorated function."} {"_id": "q_549", "text": "Infer the original name passed into a distribution constructor.\n\n Distributions typically follow the pattern of\n with.name_scope(name) as name:\n super(name=name)\n so we attempt to reverse the name-scope transformation to allow\n addressing of RVs by the distribution's original, user-visible\n name kwarg.\n\n Args:\n distribution: a tfd.Distribution instance.\n Returns:\n simple_name: the original name passed into the Distribution.\n\n #### Example\n\n ```\n d1 = tfd.Normal(0., 1., name='x') # d1.name = 'x/'\n d2 = tfd.Normal(0., 1., name='x') # d2.name = 'x_2/'\n _simple_name(d2) # returns 'x'\n\n ```"} {"_id": "q_550", "text": "RandomVariable constructor with a dummy name argument."} {"_id": "q_551", "text": "Factory function to make random variable given distribution class."} {"_id": "q_552", "text": "Compute one-step-ahead predictive distributions for all timesteps.\n\n Given samples from the posterior over parameters, return the predictive\n distribution over observations at each time `T`, given observations up\n through time `T-1`.\n\n Args:\n model: An instance of `StructuralTimeSeries` representing a\n time-series model. 
This represents a joint distribution over\n time-series and their parameters with batch shape `[b1, ..., bN]`.\n observed_time_series: `float` `Tensor` of shape\n `concat([sample_shape, model.batch_shape, [num_timesteps, 1]])` where\n `sample_shape` corresponds to i.i.d. observations, and the trailing `[1]`\n dimension may (optionally) be omitted if `num_timesteps > 1`. May\n optionally be an instance of `tfp.sts.MaskedTimeSeries` including a\n mask `Tensor` to encode the locations of missing observations.\n parameter_samples: Python `list` of `Tensors` representing posterior samples\n of model parameters, with shapes `[concat([[num_posterior_draws],\n param.prior.batch_shape, param.prior.event_shape]) for param in\n model.parameters]`. This may optionally also be a map (Python `dict`) of\n parameter names to `Tensor` values.\n\n Returns:\n forecast_dist: a `tfd.MixtureSameFamily` instance with event shape\n [num_timesteps] and\n batch shape `concat([sample_shape, model.batch_shape])`, with\n `num_posterior_draws` mixture components. 
The `t`th step represents the\n forecast distribution `p(observed_time_series[t] |\n observed_time_series[0:t-1], parameter_samples)`.\n\n #### Examples\n\n Suppose we've built a model and fit it to data using HMC:\n\n ```python\n day_of_week = tfp.sts.Seasonal(\n num_seasons=7,\n observed_time_series=observed_time_series,\n name='day_of_week')\n local_linear_trend = tfp.sts.LocalLinearTrend(\n observed_time_series=observed_time_series,\n name='local_linear_trend')\n model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],\n observed_time_series=observed_time_series)\n\n samples, kernel_results = tfp.sts.fit_with_hmc(model, observed_time_series)\n ```\n\n Passing the posterior samples into `one_step_predictive`, we construct a\n one-step-ahead predictive distribution:\n\n ```python\n one_step_predictive_dist = tfp.sts.one_step_predictive(\n model, observed_time_series, parameter_samples=samples)\n\n predictive_means = one_step_predictive_dist.mean()\n predictive_scales = one_step_predictive_dist.stddev()\n ```\n\n If using variational inference instead of HMC, we'd construct a forecast using\n samples from the variational posterior:\n\n ```python\n (variational_loss,\n variational_distributions) = tfp.sts.build_factored_variational_loss(\n model=model, observed_time_series=observed_time_series)\n\n # OMITTED: take steps to optimize variational loss\n\n samples = {k: q.sample(30) for (k, q) in variational_distributions.items()}\n one_step_predictive_dist = tfp.sts.one_step_predictive(\n model, observed_time_series, parameter_samples=samples)\n ```\n\n We can visualize the forecast by plotting:\n\n ```python\n from matplotlib import pylab as plt\n def plot_one_step_predictive(observed_time_series,\n forecast_mean,\n forecast_scale):\n plt.figure(figsize=(12, 6))\n num_timesteps = forecast_mean.shape[-1]\n c1, c2 = (0.12, 0.47, 0.71), (1.0, 0.5, 0.05)\n plt.plot(observed_time_series, label=\"observed time series\", color=c1)\n plt.plot(forecast_mean, 
label=\"one-step prediction\", color=c2)\n plt.fill_between(np.arange(num_timesteps),\n forecast_mean - 2 * forecast_scale,\n forecast_mean + 2 * forecast_scale,\n alpha=0.1, color=c2)\n plt.legend()\n\n plot_one_step_predictive(observed_time_series,\n forecast_mean=predictive_means,\n forecast_scale=predictive_scales)\n ```\n\n To detect anomalous timesteps, we check whether the observed value at each\n step is within a 95% predictive interval, i.e., two standard deviations from\n the mean:\n\n ```python\n z_scores = ((observed_time_series[..., 1:] - predictive_means[..., :-1])\n / predictive_scales[..., :-1])\n anomalous_timesteps = tf.boolean_mask(\n tf.range(1, num_timesteps),\n tf.abs(z_scores) > 2.0)\n ```"} {"_id": "q_553", "text": "Construct predictive distribution over future observations.\n\n Given samples from the posterior over parameters, return the predictive\n distribution over future observations for num_steps_forecast timesteps.\n\n Args:\n model: An instance of `StructuralTimeSeries` representing a\n time-series model. This represents a joint distribution over\n time-series and their parameters with batch shape `[b1, ..., bN]`.\n observed_time_series: `float` `Tensor` of shape\n `concat([sample_shape, model.batch_shape, [num_timesteps, 1]])` where\n `sample_shape` corresponds to i.i.d. observations, and the trailing `[1]`\n dimension may (optionally) be omitted if `num_timesteps > 1`. May\n optionally be an instance of `tfp.sts.MaskedTimeSeries` including a\n mask `Tensor` to encode the locations of missing observations.\n parameter_samples: Python `list` of `Tensors` representing posterior samples\n of model parameters, with shapes `[concat([[num_posterior_draws],\n param.prior.batch_shape, param.prior.event_shape]) for param in\n model.parameters]`. 
This may optionally also be a map (Python `dict`) of\n parameter names to `Tensor` values.\n num_steps_forecast: scalar `int` `Tensor` number of steps to forecast.\n\n Returns:\n forecast_dist: a `tfd.MixtureSameFamily` instance with event shape\n [num_steps_forecast, 1] and batch shape\n `concat([sample_shape, model.batch_shape])`, with `num_posterior_draws`\n mixture components.\n\n #### Examples\n\n Suppose we've built a model and fit it to data using HMC:\n\n ```python\n day_of_week = tfp.sts.Seasonal(\n num_seasons=7,\n observed_time_series=observed_time_series,\n name='day_of_week')\n local_linear_trend = tfp.sts.LocalLinearTrend(\n observed_time_series=observed_time_series,\n name='local_linear_trend')\n model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],\n observed_time_series=observed_time_series)\n\n samples, kernel_results = tfp.sts.fit_with_hmc(model, observed_time_series)\n ```\n\n Passing the posterior samples into `forecast`, we construct a forecast\n distribution:\n\n ```python\n forecast_dist = tfp.sts.forecast(model, observed_time_series,\n parameter_samples=samples,\n num_steps_forecast=50)\n\n forecast_mean = forecast_dist.mean()[..., 0] # shape: [50]\n forecast_scale = forecast_dist.stddev()[..., 0] # shape: [50]\n forecast_samples = forecast_dist.sample(10)[..., 0] # shape: [10, 50]\n ```\n\n If using variational inference instead of HMC, we'd construct a forecast using\n samples from the variational posterior:\n\n ```python\n (variational_loss,\n variational_distributions) = tfp.sts.build_factored_variational_loss(\n model=model, observed_time_series=observed_time_series)\n\n # OMITTED: take steps to optimize variational loss\n\n samples = {k: q.sample(30) for (k, q) in variational_distributions.items()}\n forecast_dist = tfp.sts.forecast(model, observed_time_series,\n parameter_samples=samples,\n num_steps_forecast=50)\n ```\n\n We can visualize the forecast by plotting:\n\n ```python\n from matplotlib import pylab as plt\n def 
plot_forecast(observed_time_series,\n forecast_mean,\n forecast_scale,\n forecast_samples):\n plt.figure(figsize=(12, 6))\n\n num_steps = observed_time_series.shape[-1]\n num_steps_forecast = forecast_mean.shape[-1]\n num_steps_train = num_steps - num_steps_forecast\n\n c1, c2 = (0.12, 0.47, 0.71), (1.0, 0.5, 0.05)\n plt.plot(np.arange(num_steps), observed_time_series,\n lw=2, color=c1, label='ground truth')\n\n forecast_steps = np.arange(num_steps_train,\n num_steps_train+num_steps_forecast)\n plt.plot(forecast_steps, forecast_samples.T, lw=1, color=c2, alpha=0.1)\n plt.plot(forecast_steps, forecast_mean, lw=2, ls='--', color=c2,\n label='forecast')\n plt.fill_between(forecast_steps,\n forecast_mean - 2 * forecast_scale,\n forecast_mean + 2 * forecast_scale, color=c2, alpha=0.2)\n\n plt.xlim([0, num_steps])\n plt.legend()\n\n plot_forecast(observed_time_series,\n forecast_mean=forecast_mean,\n forecast_scale=forecast_scale,\n forecast_samples=forecast_samples)\n ```"} {"_id": "q_554", "text": "Returns `max` or `mask` if `max` is not finite."} {"_id": "q_555", "text": "Assert `x` has rank equal to `rank` or smaller.\n\n Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.assert_rank_at_most(x, 2)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n rank: Scalar `Tensor`.\n data: The tensors to print out if the condition is False. 
Defaults to\n error message and first few entries of `x`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional).\n Defaults to \"assert_rank_at_most\".\n\n Returns:\n Op raising `InvalidArgumentError` unless `x` has specified rank or lower.\n If static checks determine `x` has correct rank, a `no_op` is returned.\n\n Raises:\n ValueError: If static checks determine `x` has wrong rank."} {"_id": "q_556", "text": "OneHotCategorical helper computing probs, cdf, etc over its support."} {"_id": "q_557", "text": "Return a convert-to-tensor func, given a name, config, callable, etc."} {"_id": "q_558", "text": "Yields the top-most interceptor on the thread-local interceptor stack.\n\n Operations may be intercepted by multiple nested interceptors. Once reached,\n an operation can be forwarded through nested interceptors until resolved.\n To allow for nesting, implement interceptors by re-wrapping their first\n argument (`f`) as an `interceptable`. To avoid nesting, manipulate the\n computation without using `interceptable`.\n\n This function allows for nesting by manipulating the thread-local interceptor\n stack, so that operations are intercepted in the order of interceptor nesting.\n\n #### Examples\n\n ```python\n from tensorflow_probability import edward2 as ed\n\n def model():\n x = ed.Normal(loc=0., scale=1., name=\"x\")\n y = ed.Normal(loc=x, scale=1., name=\"y\")\n return x + y\n\n def double(f, *args, **kwargs):\n return 2. 
* interceptable(f)(*args, **kwargs)\n\n def set_y(f, *args, **kwargs):\n if kwargs.get(\"name\") == \"y\":\n kwargs[\"value\"] = 0.42\n return interceptable(f)(*args, **kwargs)\n\n with interception(double):\n with interception(set_y):\n z = model()\n ```\n\n This will firstly put `double` on the stack, and then `set_y`,\n resulting in the stack:\n (TOP) set_y -> double -> apply (BOTTOM)\n\n The execution of `model` is then (top lines are current stack state):\n 1) (TOP) set_y -> double -> apply (BOTTOM);\n `ed.Normal(0., 1., \"x\")` is intercepted by `set_y`, and as the name is not \"y\"\n the operation is simply forwarded to the next interceptor on the stack.\n\n 2) (TOP) double -> apply (BOTTOM);\n `ed.Normal(0., 1., \"x\")` is intercepted by `double`, to produce\n `2*ed.Normal(0., 1., \"x\")`, with the operation being forwarded down the stack.\n\n 3) (TOP) apply (BOTTOM);\n `ed.Normal(0., 1., \"x\")` is intercepted by `apply`, which simply calls the\n constructor.\n\n (At this point, the nested calls to `get_next_interceptor()`, produced by\n forwarding operations, exit, and the current stack is again:\n (TOP) set_y -> double -> apply (BOTTOM))\n\n 4) (TOP) set_y -> double -> apply (BOTTOM);\n `ed.Normal(0., 1., \"y\")` is intercepted by `set_y`,\n the value of `y` is set to 0.42 and the operation is forwarded down the stack.\n\n 5) (TOP) double -> apply (BOTTOM);\n `ed.Normal(0., 1., \"y\")` is intercepted by `double`, to produce\n `2*ed.Normal(0., 1., \"y\")`, with the operation being forwarded down the stack.\n\n 6) (TOP) apply (BOTTOM);\n `ed.Normal(0., 1., \"y\")` is intercepted by `apply`, which simply calls the\n constructor.\n\n The final values for `x` and `y` inside of `model()` are tensors where `x` is\n a random draw from Normal(0., 1.) doubled, and `y` is a constant 0.84, thus\n z = 2 * Normal(0., 1.) 
+ 0.84."} {"_id": "q_559", "text": "Decorator that wraps `func` so that its execution is intercepted.\n\n The wrapper passes `func` to the interceptor for the current thread.\n\n If there is no next interceptor, we perform an \"immediate\" call to `func`.\n That is, `func` terminates without forwarding its execution to another\n interceptor.\n\n Args:\n func: Function to wrap.\n\n Returns:\n The decorated function."} {"_id": "q_560", "text": "Generates synthetic data for binary classification.\n\n Args:\n num_examples: The number of samples to generate (scalar Python `int`).\n input_size: The input space dimension (scalar Python `int`).\n weights_prior_stddev: The prior standard deviation of the weight\n vector. (scalar Python `float`).\n\n Returns:\n random_weights: Sampled weights as a Numpy `array` of shape\n `[input_size]`.\n random_bias: Sampled bias as a scalar Python `float`.\n design_matrix: Points sampled uniformly from the cube `[-1,\n 1]^{input_size}`, as a Numpy `array` of shape `(num_examples,\n input_size)`.\n labels: Labels sampled from the logistic model `p(label=1) =\n logistic(dot(features, random_weights) + random_bias)`, as a Numpy\n `int32` `array` of shape `(num_examples, 1)`."} {"_id": "q_561", "text": "Build a Dataset iterator for supervised classification.\n\n Args:\n x: Numpy `array` of features, indexed by the first dimension.\n y: Numpy `array` of labels, with the same first dimension as `x`.\n batch_size: Number of elements in each training batch.\n\n Returns:\n batch_features: `Tensor` feed features, of shape\n `[batch_size] + x.shape[1:]`.\n batch_labels: `Tensor` feed of labels, of shape\n `[batch_size] + y.shape[1:]`."} {"_id": "q_562", "text": "Validate `map_values` if `validate_args`==True."} {"_id": "q_563", "text": "Calls `fn` and returns the gradients with respect to `fn`'s first output.\n\n Args:\n fn: A `TransitionOperator`.\n args: Arguments to `fn`\n\n Returns:\n ret: First output of `fn`.\n extra: Second output of `fn`.\n 
grads: Gradients of `ret` with respect to `args`."} {"_id": "q_564", "text": "Maybe broadcasts `from_structure` to `to_structure`.\n\n If `from_structure` is a singleton, it is tiled to match the structure of\n `to_structure`. Note that the elements in `from_structure` are not copied if\n this tiling occurs.\n\n Args:\n from_structure: A structure.\n to_structure: A structure.\n\n Returns:\n new_from_structure: Same structure as `to_structure`."} {"_id": "q_565", "text": "Transforms a log-prob function using a bijector.\n\n This takes a log-prob function and creates a new log-prob function that now\n takes state in the domain of the bijector, forward transforms that state\n and calls the original log-prob function. It then returns the log-probability\n that correctly accounts for this transformation.\n\n The forward-transformed state is pre-pended to the original log-prob\n function's extra returns and returned as the new extra return.\n\n For convenience you can also pass the initial state (in the original space),\n and this function will return the inverse transformed state as the 2nd return\n value. You'd use this to initialize MCMC operators that operate in the\n transformed space.\n\n Args:\n log_prob_fn: Log prob fn.\n bijector: Bijector(s), must be of the same structure as the `log_prob_fn`\n inputs.\n init_state: Initial state, in the original space.\n\n Returns:\n transformed_log_prob_fn: Transformed log prob fn.\n transformed_init_state: If `init_state` is provided. 
Initial state in the\n transformed space."} {"_id": "q_566", "text": "Leapfrog `TransitionOperator`.\n\n Args:\n leapfrog_step_state: LeapFrogStepState.\n step_size: Step size, structure broadcastable to the `target_log_prob_fn`\n state.\n target_log_prob_fn: Target log prob fn.\n kinetic_energy_fn: Kinetic energy fn.\n\n Returns:\n leapfrog_step_state: LeapFrogStepState.\n leapfrog_step_extras: LeapFrogStepExtras."} {"_id": "q_567", "text": "Metropolis-Hastings step.\n\n This probabilistically chooses between `current_state` and `proposed_state`\n based on the `energy_change` so as to preserve detailed balance.\n\n Energy change is the negative of `log_accept_ratio`.\n\n Args:\n current_state: Current state.\n proposed_state: Proposed state.\n energy_change: E(proposed_state) - E(previous_state).\n seed: For reproducibility.\n\n Returns:\n new_state: The chosen state.\n is_accepted: Whether the proposed state was accepted.\n log_uniform: The random number that was used to select between the two\n states."} {"_id": "q_568", "text": "Construct `scale` from various components.\n\n Args:\n identity_multiplier: floating point rank 0 `Tensor` representing a scaling\n done to the identity matrix.\n diag: Floating-point `Tensor` representing the diagonal matrix. `diag` has\n shape `[N1, N2, ... k]`, which represents a k x k diagonal matrix.\n tril: Floating-point `Tensor` representing the lower triangular matrix.\n `tril` has shape `[N1, N2, ... k, k]`, which represents a k x k lower\n triangular matrix.\n perturb_diag: Floating-point `Tensor` representing the diagonal matrix of\n the low rank update.\n perturb_factor: Floating-point `Tensor` representing factor matrix.\n shift: Floating-point `Tensor` representing `shift` in `scale @ X + shift`.\n validate_args: Python `bool` indicating whether arguments should be\n checked for correctness.\n dtype: `DType` for arg `Tensor` conversions.\n\n Returns:\n scale. 
In the case of scaling by a constant, scale is a\n floating point `Tensor`. Otherwise, scale is a `LinearOperator`.\n\n Raises:\n ValueError: if all of `tril`, `diag` and `identity_multiplier` are `None`."} {"_id": "q_569", "text": "Returns a callable that adds a random normal perturbation to the input.\n\n This function returns a callable that accepts a Python `list` of `Tensor`s of\n any shapes and `dtypes` representing the state parts of the `current_state`\n and a random seed. The supplied argument `scale` must be a `Tensor` or Python\n `list` of `Tensor`s representing the scale of the generated\n proposal. `scale` must broadcast with the state parts of `current_state`.\n The callable adds a sample from a zero-mean normal distribution with the\n supplied scales to each state part and returns a same-type `list` of `Tensor`s\n as the state parts of `current_state`.\n\n Args:\n scale: a `Tensor` or Python `list` of `Tensor`s of any shapes and `dtypes`\n controlling the scale of the normal proposal distribution.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: 'random_walk_normal_fn'.\n\n Returns:\n random_walk_normal_fn: A callable accepting a Python `list` of `Tensor`s\n representing the state parts of the `current_state` and an `int`\n representing the random seed to be used to generate the proposal. The\n callable returns the same-type `list` of `Tensor`s as the input and\n represents the proposal for the RWM algorithm."} {"_id": "q_570", "text": "Returns a callable that adds a random uniform perturbation to the input.\n\n For more details on `random_walk_uniform_fn`, see\n `random_walk_normal_fn`. `scale` might\n be a `Tensor` or a list of `Tensor`s that should broadcast with state parts\n of the `current_state`. 
The generated uniform perturbation is sampled as a\n uniform point on the rectangle `[-scale, scale]`.\n\n Args:\n scale: a `Tensor` or Python `list` of `Tensor`s of any shapes and `dtypes`\n controlling the upper and lower bound of the uniform proposal\n distribution.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: 'random_walk_uniform_fn'.\n\n Returns:\n random_walk_uniform_fn: A callable accepting a Python `list` of `Tensor`s\n representing the state parts of the `current_state` and an `int`\n representing the random seed used to generate the proposal. The callable\n returns the same-type `list` of `Tensor`s as the input and represents the\n proposal for the RWM algorithm."} {"_id": "q_571", "text": "Get a list of num_components batchwise probabilities."} {"_id": "q_572", "text": "Validate `outcomes`, `logits` and `probs`'s shapes."} {"_id": "q_573", "text": "Bayesian logistic regression, which returns labels given features."} {"_id": "q_574", "text": "Builds the Covertype data set."} {"_id": "q_575", "text": "Cholesky factor of the covariance matrix of vector-variate random samples.\n\n This function can be used to fit a multivariate normal to data.\n\n ```python\n tf.enable_eager_execution()\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n # Assume data.shape = (1000, 2). 
1000 samples of a random variable in R^2.\n observed_data = read_data_samples(...)\n\n # The mean is easy\n mu = tf.reduce_mean(observed_data, axis=0)\n\n # Get the scale matrix\n L = tfp.stats.cholesky_covariance(observed_data)\n\n # Make the best fit multivariate normal (under maximum likelihood condition).\n mvn = tfd.MultivariateNormalTriL(loc=mu, scale_tril=L)\n\n # Plot contours of the pdf.\n xs, ys = tf.meshgrid(\n tf.linspace(-5., 5., 50), tf.linspace(-5., 5., 50), indexing='ij')\n xy = tf.stack((tf.reshape(xs, [-1]), tf.reshape(ys, [-1])), axis=-1)\n pdf = tf.reshape(mvn.prob(xy), (50, 50))\n CS = plt.contour(xs, ys, pdf, 10)\n plt.clabel(CS, inline=1, fontsize=10)\n ```\n\n Why does this work?\n Given vector-variate random variables `X = (X1, ..., Xd)`, one may obtain the\n sample covariance matrix in `R^{d x d}` (see `tfp.stats.covariance`).\n\n The [Cholesky factor](https://en.wikipedia.org/wiki/Cholesky_decomposition)\n of this matrix is analogous to standard deviation for scalar random variables:\n Suppose `X` has covariance matrix `C`, with Cholesky factorization `C = L L^T`\n Then multiplying a vector of iid random variables which have unit variance by\n `L` produces a vector with covariance `L L^T`, which is the same as `X`.\n\n ```python\n observed_data = read_data_samples(...)\n L = tfp.stats.cholesky_covariance(observed_data, sample_axis=0)\n\n # Make fake_data with the same covariance as observed_data.\n uncorrelated_normal = tf.random_normal(shape=(500, 10))\n fake_data = tf.linalg.matvec(L, uncorrelated_normal)\n ```\n\n Args:\n x: Numeric `Tensor`. The rightmost dimension of `x` indexes events. E.g.\n dimensions of a random vector.\n sample_axis: Scalar or vector `Tensor` designating axis holding samples.\n Default value: `0` (leftmost dimension). Cannot be the rightmost dimension\n (since this indexes events).\n keepdims: Boolean. 
Whether to keep the sample axis as singletons.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., `'covariance'`).\n\n Returns:\n chol: `Tensor` of same `dtype` as `x`. The last two dimensions hold\n lower triangular matrices (the Cholesky factors)."} {"_id": "q_576", "text": "Estimate standard deviation using samples.\n\n Given `N` samples of scalar valued random variable `X`, standard deviation may\n be estimated as\n\n ```none\n Stddev[X] := Sqrt[Var[X]],\n Var[X] := N^{-1} sum_{n=1}^N (X_n - Xbar) Conj{(X_n - Xbar)},\n Xbar := N^{-1} sum_{n=1}^N X_n\n ```\n\n ```python\n x = tf.random_normal(shape=(100, 2, 3))\n\n # stddev[i, j] is the sample standard deviation of the (i, j) batch member.\n stddev = tfp.stats.stddev(x, sample_axis=0)\n ```\n\n Scaling a unit normal by a standard deviation produces normal samples\n with that standard deviation.\n\n ```python\n observed_data = read_data_samples(...)\n stddev = tfp.stats.stddev(observed_data)\n\n # Make fake_data with the same standard deviation as observed_data.\n fake_data = stddev * tf.random_normal(shape=(100,))\n ```\n\n Notice we divide by `N` (the numpy default), which does not create `NaN`\n when `N = 1`, but is slightly biased.\n\n Args:\n x: A numeric `Tensor` holding samples.\n sample_axis: Scalar or vector `Tensor` designating axis holding samples, or\n `None` (meaning all axes hold samples).\n Default value: `0` (leftmost dimension).\n keepdims: Boolean. Whether to keep the sample axis as singletons.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., `'stddev'`).\n\n Returns:\n stddev: A `Tensor` of same `dtype` as the `x`, and rank equal to\n `rank(x) - len(sample_axis)`"} {"_id": "q_577", "text": "Rectify a possibly negative axis. 
Prefer returning a Python list."} {"_id": "q_578", "text": "A version of squeeze that works with dynamic axis."} {"_id": "q_579", "text": "Standardize input `x` to a unit normal."} {"_id": "q_580", "text": "Returns a sample from the `dim` dimensional Halton sequence.\n\n Warning: The sequence elements take values only between 0 and 1. Care must be\n taken to appropriately transform the domain of a function if it differs from\n the unit cube before evaluating integrals using Halton samples. It is also\n important to remember that quasi-random numbers without randomization are not\n a replacement for pseudo-random numbers in every context. Quasi random numbers\n are completely deterministic and typically have significant negative\n autocorrelation unless randomization is used.\n\n Computes the members of the low discrepancy Halton sequence in dimension\n `dim`. The `dim`-dimensional sequence takes values in the unit hypercube in\n `dim` dimensions. Currently, only dimensions up to 1000 are supported. The\n prime base for the k-th axis is the k-th prime starting from 2. For example,\n if `dim` = 3, then the bases will be [2, 3, 5] respectively and the first\n element of the non-randomized sequence will be: [0.5, 0.333, 0.2]. For a more\n complete description of the Halton sequences see\n [here](https://en.wikipedia.org/wiki/Halton_sequence). For low discrepancy\n sequences and their applications see\n [here](https://en.wikipedia.org/wiki/Low-discrepancy_sequence).\n\n If `randomized` is true, this function produces a scrambled version of the\n Halton sequence introduced by [Owen (2017)][1]. For the advantages of\n randomization of low discrepancy sequences see [here](\n https://en.wikipedia.org/wiki/Quasi-Monte_Carlo_method#Randomization_of_quasi-Monte_Carlo).\n\n The number of samples produced is controlled by the `num_results` and\n `sequence_indices` parameters. 
The user must supply either `num_results` or\n `sequence_indices` but not both.\n The former is the number of samples to produce starting from the first\n element. If `sequence_indices` is given instead, the specified elements of\n the sequence are generated. For example, sequence_indices=tf.range(10) is\n equivalent to specifying n=10.\n\n #### Examples\n\n ```python\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n # Produce the first 1000 members of the Halton sequence in 3 dimensions.\n num_results = 1000\n dim = 3\n sample = tfp.mcmc.sample_halton_sequence(\n dim,\n num_results=num_results,\n seed=127)\n\n # Evaluate the integral of x_1 * x_2^2 * x_3^3 over the three dimensional\n # hypercube.\n powers = tf.range(1.0, limit=dim + 1)\n integral = tf.reduce_mean(tf.reduce_prod(sample ** powers, axis=-1))\n true_value = 1.0 / tf.reduce_prod(powers + 1.0)\n with tf.Session() as session:\n values = session.run((integral, true_value))\n\n # Produces a relative absolute error of 1.7%.\n print (\"Estimated: %f, True Value: %f\" % values)\n\n # Now skip the first 1000 samples and recompute the integral with the next\n # thousand samples. The sequence_indices argument can be used to do this.\n\n\n sequence_indices = tf.range(start=1000, limit=1000 + num_results,\n dtype=tf.int32)\n sample_leaped = tfp.mcmc.sample_halton_sequence(\n dim,\n sequence_indices=sequence_indices,\n seed=111217)\n\n integral_leaped = tf.reduce_mean(tf.reduce_prod(sample_leaped ** powers,\n axis=-1))\n with tf.Session() as session:\n values = session.run((integral_leaped, true_value))\n # Now produces a relative absolute error of 0.05%.\n print (\"Leaped Estimated: %f, True Value: %f\" % values)\n ```\n\n Args:\n dim: Positive Python `int` representing each sample's `event_size.` Must\n not be greater than 1000.\n num_results: (Optional) Positive scalar `Tensor` of dtype int32. The number\n of samples to generate. 
Either this parameter or sequence_indices must\n be specified but not both. If this parameter is None, then the behaviour\n is determined by the `sequence_indices`.\n Default value: `None`.\n sequence_indices: (Optional) `Tensor` of dtype int32 and rank 1. The\n elements of the sequence to compute specified by their position in the\n sequence. The entries index into the Halton sequence starting with 0 and\n hence, must be whole numbers. For example, sequence_indices=[0, 5, 6] will\n produce the first, sixth and seventh elements of the sequence. If this\n parameter is None, then the `num_results` parameter must be specified\n which gives the number of desired samples starting from the first sample.\n Default value: `None`.\n dtype: (Optional) The dtype of the sample. One of: `float16`, `float32` or\n `float64`.\n Default value: `tf.float32`.\n randomized: (Optional) bool indicating whether to produce a randomized\n Halton sequence. If True, applies the randomization described in\n [Owen (2017)][1].\n Default value: `True`.\n seed: (Optional) Python integer to seed the random number generator. Only\n used if `randomized` is True. If not supplied and `randomized` is True,\n no seed is set.\n Default value: `None`.\n name: (Optional) Python `str` describing ops managed by this function. If\n not supplied the name of this function is used.\n Default value: \"sample_halton_sequence\".\n\n Returns:\n halton_elements: Elements of the Halton sequence. `Tensor` of supplied dtype\n and `shape` `[num_results, dim]` if `num_results` was specified or shape\n `[s, dim]` where s is the size of `sequence_indices` if `sequence_indices`\n were specified.\n\n Raises:\n ValueError: if both `sequence_indices` and `num_results` were specified or\n if dimension `dim` is less than 1 or greater than 1000.\n\n #### References\n\n [1]: Art B. Owen. A randomized Halton algorithm in R. _arXiv preprint\n arXiv:1706.02808_, 2017. 
https://arxiv.org/abs/1706.02808"} {"_id": "q_581", "text": "Uniform iid sample from the space of permutations.\n\n Draws a sample of size `num_results` from the group of permutations of degrees\n specified by the `dims` tensor. These are packed together into one tensor\n such that each row is one sample from each of the dimensions in `dims`. For\n example, if dims = [2,3] and num_results = 2, the result is a tensor of shape\n [2, 2 + 3] and the first row of the result might look like:\n [1, 0, 2, 0, 1]. The first two elements are a permutation over 2 elements\n while the next three are a permutation over 3 elements.\n\n Args:\n num_results: A positive scalar `Tensor` of integral type. The number of\n draws from the discrete uniform distribution over the permutation groups.\n dims: A 1D `Tensor` of the same dtype as `num_results`. The degree of the\n permutation groups from which to sample.\n seed: (Optional) Python integer to seed the random number generator.\n\n Returns:\n permutations: A `Tensor` of shape `[num_results, sum(dims)]` and the same\n dtype as `dims`."} {"_id": "q_582", "text": "Generates starting points for the Halton sequence procedure.\n\n The k'th element of the sequence is generated starting from a positive integer\n which must be distinct for each `k`. It is conventional to choose the starting\n point as `k` itself (or `k+1` if k is zero based). This function generates\n the starting integers for the required elements and reshapes the result for\n later use.\n\n Args:\n num_results: Positive scalar `Tensor` of dtype int32. The number of samples\n to generate. If this parameter is supplied, then `sequence_indices`\n should be None.\n sequence_indices: `Tensor` of dtype int32 and rank 1. The entries\n index into the Halton sequence starting with 0 and hence, must be whole\n numbers. For example, sequence_indices=[0, 5, 6] will produce the first,\n sixth and seventh elements of the sequence. 
If this parameter is not None\n then `num_results` must be None.\n dtype: The dtype of the sample. One of `float32` or `float64`.\n Default is `float32`.\n name: Python `str` name which describes ops created by this function.\n\n Returns:\n indices: `Tensor` of dtype `dtype` and shape = `[n, 1, 1]`."} {"_id": "q_583", "text": "Computes the number of terms in the place value expansion.\n\n Let num = a0 + a1 b + a2 b^2 + ... ak b^k be the place value expansion of\n `num` in base b (ak != 0). This function computes and returns `k+1` for each\n base `b` specified in `bases`.\n\n This can be inferred from the base `b` logarithm of `num` as follows:\n $$k + 1 = Floor(log_b (num)) + 1 = Floor( log(num) / log(b)) + 1$$\n\n Args:\n num: Scalar `Tensor` of dtype either `float32` or `float64`. The number to\n compute the base expansion size of.\n bases: `Tensor` of the same dtype as num. The bases to compute the size\n against.\n\n Returns:\n Tensor of same dtype and shape as `bases` containing the size of num when\n written in that base."} {"_id": "q_584", "text": "Returns sorted array of primes such that `2 <= prime < n`."} {"_id": "q_585", "text": "Returns the machine epsilon for the supplied dtype."} {"_id": "q_586", "text": "The Hager Zhang line search algorithm.\n\n Performs an inexact line search based on the algorithm of\n [Hager and Zhang (2006)][2].\n The univariate objective function `value_and_gradients_function` is typically\n generated by projecting a multivariate objective function along a search\n direction. Suppose the multivariate function to be minimized is\n `g(x1,x2, .. xn)`. Let (d1, d2, ..., dn) be the direction along which we wish\n to perform a line search. Then the projected univariate function to be used\n for line search is\n\n ```None\n f(a) = g(x1 + d1 * a, x2 + d2 * a, ..., xn + dn * a)\n ```\n\n The directional derivative along (d1, d2, ..., dn) is needed for this\n procedure. 
This also corresponds to the derivative of the projected function\n `f(a)` with respect to `a`. Note that this derivative must be negative for\n `a = 0` if the direction is a descent direction.\n\n The usual stopping criteria for the line search is the satisfaction of the\n (weak) Wolfe conditions. For details of the Wolfe conditions, see\n ref. [3]. On a finite precision machine, the exact Wolfe conditions can\n be difficult to satisfy when one is very close to the minimum and as argued\n by [Hager and Zhang (2005)][1], one can only expect the minimum to be\n determined within square root of machine precision. To improve the situation,\n they propose to replace the Wolfe conditions with an approximate version\n depending on the derivative of the function which is applied only when one\n is very close to the minimum. The following algorithm implements this\n enhanced scheme.\n\n ### Usage:\n\n Primary use of line search methods is as an internal component of a class of\n optimization algorithms (called line search based methods as opposed to\n trust region methods). Hence, the end user will typically not want to access\n line search directly. In particular, inexact line search should not be\n confused with a univariate minimization method. 
The stopping criterion of line\n search is the satisfaction of Wolfe conditions and not the discovery of the\n minimum of the function.\n\n With this caveat in mind, the following example illustrates the standalone\n usage of the line search.\n\n ```python\n # Define value and gradient namedtuple\n ValueAndGradient = namedtuple('ValueAndGradient', ['x', 'f', 'df'])\n # Define a quadratic target with minimum at 1.3.\n def value_and_gradients_function(x):\n return ValueAndGradient(x=x, f=(x - 1.3) ** 2, df=2 * (x-1.3))\n # Set initial step size.\n step_size = tf.constant(0.1)\n ls_result = tfp.optimizer.linesearch.hager_zhang(\n value_and_gradients_function, initial_step_size=step_size)\n # Evaluate the results.\n with tf.Session() as session:\n results = session.run(ls_result)\n # Ensure convergence.\n assert results.converged\n # If the line search converged, the left and the right ends of the\n # bracketing interval are identical.\n assert results.left.x == results.right.x\n # Print the number of evaluations and the final step size.\n print (\"Final Step Size: %f, Evaluations: %d\" % (results.left.x,\n results.func_evals))\n ```\n\n ### References:\n [1]: William Hager, Hongchao Zhang. A new conjugate gradient method with\n guaranteed descent and an efficient line search. SIAM J. Optim., Vol. 16, No. 1,\n pp. 170-192, 2005.\n https://www.math.lsu.edu/~hozhang/papers/cg_descent.pdf\n\n [2]: William Hager, Hongchao Zhang. Algorithm 851: CG_DESCENT, a conjugate\n gradient method with guaranteed descent. ACM Transactions on Mathematical\n Software, Vol. 32, No. 1, pp. 113-137, 2006.\n http://users.clas.ufl.edu/hager/papers/CG/cg_compare.pdf\n\n [3]: Jorge Nocedal, Stephen Wright. Numerical Optimization. Springer Series in\n Operations Research. pp 33-36. 
2006\n\n Args:\n value_and_gradients_function: A Python callable that accepts a real scalar\n tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that\n correspond to scalar tensors of real dtype containing the point at which\n the function was evaluated, the value of the function, and its\n derivative at that point. The other namedtuple fields, if present,\n should be tensors or sequences (possibly nested) of tensors.\n In usual optimization application, this function would be generated by\n projecting the multivariate objective function along some specific\n direction. The direction is determined by some other procedure but should\n be a descent direction (i.e. the derivative of the projected univariate\n function must be negative at 0.).\n Alternatively, the function may represent the batching of `n` such line\n functions (e.g. projecting a single multivariate objective function along\n `n` distinct directions at once) accepting n points as input, i.e. a\n tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned\n namedtuple should each be a tensor of shape [n], with the corresponding\n input points, function values, and derivatives at those input points.\n initial_step_size: (Optional) Scalar positive `Tensor` of real dtype, or\n a tensor of shape [n] in batching mode. The initial value (or values) to\n try to bracket the minimum. Default is `1.` as a float32.\n Note that this point need not necessarily bracket the minimum for the line\n search to work correctly but the supplied value must be greater than 0.\n A good initial value will make the search converge faster.\n value_at_initial_step: (Optional) The full return value of evaluating\n value_and_gradients_function at initial_step_size, i.e. a namedtuple with\n 'x', 'f', 'df', if already known by the caller. 
If supplied the value of\n `initial_step_size` will be ignored, otherwise the tuple will be computed\n by evaluating value_and_gradients_function.\n value_at_zero: (Optional) The full return value of\n value_and_gradients_function at `0.`, i.e. a namedtuple with\n 'x', 'f', 'df', if already known by the caller. If not supplied the tuple\n will be computed by evaluating value_and_gradients_function.\n converged: (Optional) In batching mode a tensor of shape [n], indicating\n batch members which have already converged and no further search should\n be performed. These batch members are also reported as converged in the\n output, and both their `left` and `right` are set to the\n `value_at_initial_step`.\n threshold_use_approximate_wolfe_condition: Scalar positive `Tensor`\n of real dtype. Corresponds to the parameter 'epsilon' in\n [Hager and Zhang (2006)][2]. Used to estimate the\n threshold at which the line search switches to approximate Wolfe\n conditions.\n shrinkage_param: Scalar positive Tensor of real dtype. Must be less than\n `1.`. Corresponds to the parameter `gamma` in\n [Hager and Zhang (2006)][2].\n If the secant**2 step does not shrink the bracketing interval by this\n proportion, a bisection step is performed to reduce the interval width.\n expansion_param: Scalar positive `Tensor` of real dtype. Must be greater\n than `1.`. Used to expand the initial interval in case it does not bracket\n a minimum. Corresponds to `rho` in [Hager and Zhang (2006)][2].\n sufficient_decrease_param: Positive scalar `Tensor` of real dtype.\n Bounded above by the curvature param. Corresponds to `delta` in the\n terminology of [Hager and Zhang (2006)][2].\n curvature_param: Positive scalar `Tensor` of real dtype. Bounded above\n by `1.`. Corresponds to 'sigma' in the terminology of\n [Hager and Zhang (2006)][2].\n step_size_shrink_param: Positive scalar `Tensor` of real dtype. Bounded\n above by `1`. If the supplied step size is too big (i.e. 
either the\n objective value or the gradient at that point is infinite), this factor\n is used to shrink the step size until it is finite.\n max_iterations: Positive scalar `Tensor` of integral dtype or None. The\n maximum number of iterations to perform in the line search. The number of\n iterations used to bracket the minimum are also counted against this\n parameter.\n name: (Optional) Python str. The name prefixed to the ops created by this\n function. If not supplied, the default name 'hager_zhang' is used.\n\n Returns:\n results: A namedtuple containing the following attributes.\n converged: Boolean `Tensor` of shape [n]. Whether a point satisfying\n Wolfe/Approx wolfe was found.\n failed: Boolean `Tensor` of shape [n]. Whether line search failed e.g.\n if either the objective function or the gradient are not finite at\n an evaluation point.\n iterations: Scalar int32 `Tensor`. Number of line search iterations made.\n func_evals: Scalar int32 `Tensor`. Number of function evaluations made.\n left: A namedtuple, as returned by value_and_gradients_function,\n of the left end point of the final bracketing interval. Values are\n equal to those of `right` on batch members where converged is True.\n Otherwise, it corresponds to the last interval computed.\n right: A namedtuple, as returned by value_and_gradients_function,\n of the right end point of the final bracketing interval. 
Values are\n equal to those of `left` on batch members where converged is True.\n Otherwise, it corresponds to the last interval computed."} {"_id": "q_587", "text": "Shrinks the input step size until the value and grad become finite."} {"_id": "q_588", "text": "The main loop of line search after the minimum has been bracketed.\n\n Args:\n value_and_gradients_function: A Python callable that accepts a real scalar\n tensor and returns a namedtuple with the fields 'x', 'f', and 'df' that\n correspond to scalar tensors of real dtype containing the point at which\n the function was evaluated, the value of the function, and its\n derivative at that point. The other namedtuple fields, if present,\n should be tensors or sequences (possibly nested) of tensors.\n In usual optimization application, this function would be generated by\n projecting the multivariate objective function along some specific\n direction. The direction is determined by some other procedure but should\n be a descent direction (i.e. the derivative of the projected univariate\n function must be negative at 0.).\n Alternatively, the function may represent the batching of `n` such line\n functions (e.g. projecting a single multivariate objective function along\n `n` distinct directions at once) accepting n points as input, i.e. a\n tensor of shape [n], and the fields 'x', 'f' and 'df' in the returned\n namedtuple should each be a tensor of shape [n], with the corresponding\n input points, function values, and derivatives at those input points.\n search_interval: Instance of `HagerZhangLineSearchResults` containing\n the current line search interval.\n val_0: A namedtuple as returned by value_and_gradients_function evaluated\n at `0.`. The gradient must be negative (i.e. must be a descent direction).\n f_lim: Scalar `Tensor` of float dtype.\n max_iterations: Positive scalar `Tensor` of integral dtype. The maximum\n number of iterations to perform in the line search. 
The number of\n iterations used to bracket the minimum are also counted against this\n parameter.\n sufficient_decrease_param: Positive scalar `Tensor` of real dtype.\n Bounded above by the curvature param. Corresponds to `delta` in the\n terminology of [Hager and Zhang (2006)][2].\n curvature_param: Positive scalar `Tensor` of real dtype. Bounded above\n by `1.`. Corresponds to 'sigma' in the terminology of\n [Hager and Zhang (2006)][2].\n shrinkage_param: Scalar positive Tensor of real dtype. Must be less than\n `1.`. Corresponds to the parameter `gamma` in [Hager and Zhang (2006)][2].\n\n Returns:\n A namedtuple containing the following fields.\n converged: Boolean `Tensor` of shape [n]. Whether a point satisfying\n Wolfe/Approx wolfe was found.\n failed: Boolean `Tensor` of shape [n]. Whether line search failed e.g.\n if either the objective function or the gradient are not finite at\n an evaluation point.\n iterations: Scalar int32 `Tensor`. Number of line search iterations made.\n func_evals: Scalar int32 `Tensor`. 
Number of function evaluations made.\n left: A namedtuple, as returned by value_and_gradients_function,\n of the left end point of the updated bracketing interval.\n right: A namedtuple, as returned by value_and_gradients_function,\n of the right end point of the updated bracketing interval."} {"_id": "q_589", "text": "Performs bisection and updates the interval."} {"_id": "q_590", "text": "Wrapper for tf.Print which supports lists and namedtuples for printing."} {"_id": "q_591", "text": "Use Gauss-Hermite quadrature to form quadrature on `K - 1` simplex.\n\n A `SoftmaxNormal` random variable `Y` may be generated via\n\n ```\n Y = SoftmaxCentered(X),\n X = Normal(normal_loc, normal_scale)\n ```\n\n Note: for a given `quadrature_size`, this method is generally less accurate\n than `quadrature_scheme_softmaxnormal_quantiles`.\n\n Args:\n normal_loc: `float`-like `Tensor` with shape `[b1, ..., bB, K-1]`, B>=0.\n The location parameter of the Normal used to construct the SoftmaxNormal.\n normal_scale: `float`-like `Tensor`. Broadcastable with `normal_loc`.\n The scale parameter of the Normal used to construct the SoftmaxNormal.\n quadrature_size: Python `int` scalar representing the number of quadrature\n points.\n validate_args: Python `bool`, default `False`. When `True` distribution\n parameters are checked for validity despite possibly degrading runtime\n performance. 
When `False` invalid inputs may silently render incorrect\n outputs.\n name: Python `str` name prefixed to Ops created by this class.\n\n Returns:\n grid: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the\n convex combination of affine parameters for `K` components.\n `grid[..., :, n]` is the `n`-th grid point, living in the `K - 1` simplex.\n probs: Shape `[b1, ..., bB, K, quadrature_size]` `Tensor` representing the\n probabilities associated with each grid point."} {"_id": "q_592", "text": "Helper which checks validity of `loc` and `scale` init args."} {"_id": "q_593", "text": "Helper which interpolates between two locs."} {"_id": "q_594", "text": "Helper which interpolates between two scales."} {"_id": "q_595", "text": "Multiply tensor of vectors by matrices assuming values stored are logs."} {"_id": "q_596", "text": "Tabulate log probabilities from a batch of distributions."} {"_id": "q_597", "text": "Compute marginal posterior distribution for each state.\n\n This function computes, for each time step, the marginal\n conditional probability that the hidden Markov model was in\n each possible state given the observations that were made\n at each time step.\n So if the hidden states are `z[0],...,z[num_steps - 1]` and\n the observations are `x[0], ..., x[num_steps - 1]`, then\n this function computes `P(z[i] | x[0], ..., x[num_steps - 1])`\n for all `i` from `0` to `num_steps - 1`.\n\n This operation is sometimes called smoothing. It uses a form\n of the forward-backward algorithm.\n\n Note: the behavior of this function is undefined if the\n `observations` argument represents impossible observations\n from the model.\n\n Args:\n observations: A tensor representing a batch of observations\n made on the hidden Markov model. The rightmost dimension of this tensor\n gives the steps in a sequence of observations from a single sample from\n the hidden Markov model. 
The size of this dimension should match the\n `num_steps` parameter of the hidden Markov model object. The other\n dimensions are the dimensions of the batch and these are broadcast with\n the hidden Markov model's parameters.\n name: Python `str` name prefixed to Ops created by this class.\n Default value: \"HiddenMarkovModel\".\n\n Returns:\n posterior_marginal: A `Categorical` distribution object representing the\n marginal probability of the hidden Markov model being in each state at\n each step. The rightmost dimension of the `Categorical` distributions\n batch will equal the `num_steps` parameter providing one marginal\n distribution for each step. The other dimensions are the dimensions\n corresponding to the batch of observations.\n\n Raises:\n ValueError: if rightmost dimension of `observations` does not\n have size `num_steps`."} {"_id": "q_598", "text": "Chooses a random direction in the event space."} {"_id": "q_599", "text": "Applies a single iteration of slice sampling update.\n\n Applies hit and run style slice sampling. Chooses a uniform random direction\n on the unit sphere in the event space. Applies the one dimensional slice\n sampling update along that direction.\n\n Args:\n target_log_prob_fn: Python callable which takes an argument like\n `*current_state_parts` and returns its (possibly unnormalized) log-density\n under the target distribution.\n current_state_parts: Python `list` of `Tensor`s representing the current\n state(s) of the Markov chain(s). The first `independent_chain_ndims` of\n the `Tensor`(s) index different chains.\n step_sizes: Python `list` of `Tensor`s. Provides a measure of the width\n of the density. Used to find the slice bounds. Must broadcast with the\n shape of `current_state_parts`.\n max_doublings: Integer number of doublings to allow while locating the slice\n boundaries.\n current_target_log_prob: `Tensor` representing the value of\n `target_log_prob_fn(*current_state_parts)`. 
The only reason to specify\n this argument is to reduce TF graph size.\n batch_rank: Integer. The number of axes in the state that correspond to\n independent batches.\n seed: Python integer to seed random number generators.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'find_slice_bounds').\n\n Returns:\n proposed_state_parts: Tensor or Python list of `Tensor`s representing the\n state(s) of the Markov chain(s) at each result step. Has same shape as\n input `current_state_parts`.\n proposed_target_log_prob: `Tensor` representing the value of\n `target_log_prob_fn` at `next_state`.\n bounds_satisfied: Boolean `Tensor` of the same shape as the log density.\n True indicates whether an interval containing the slice for that\n batch was found successfully.\n direction: `Tensor` or Python list of `Tensor`s representing the direction\n along which the slice was sampled. Has the same shape and dtype(s) as\n `current_state_parts`.\n upper_bounds: `Tensor` of batch shape and the dtype of the input state. The\n upper bounds of the slices along the sampling direction.\n lower_bounds: `Tensor` of batch shape and the dtype of the input state. The\n lower bounds of the slices along the sampling direction."} {"_id": "q_600", "text": "Helper which computes `fn_result` if needed."} {"_id": "q_601", "text": "Pads the shape of x to the right to be of rank final_rank.\n\n Expands the dims of `x` to the right such that its rank is equal to\n final_rank. For example, if `x` is of shape [1, 5, 7, 2] and `final_rank` is\n 7, we return padded_x, which is of shape [1, 5, 7, 2, 1, 1, 1].\n\n Args:\n x: The tensor whose shape is to be padded.\n final_rank: Scalar int32 `Tensor` or Python `int`. 
The desired rank of x.\n\n Returns:\n padded_x: A tensor of rank final_rank."} {"_id": "q_602", "text": "Runs one iteration of Slice Sampler.\n\n Args:\n current_state: `Tensor` or Python `list` of `Tensor`s representing the\n current state(s) of the Markov chain(s). The first `r` dimensions\n index independent chains,\n `r = tf.rank(target_log_prob_fn(*current_state))`.\n previous_kernel_results: `collections.namedtuple` containing `Tensor`s\n representing values from previous calls to this function (or from the\n `bootstrap_results` function.)\n\n Returns:\n next_state: Tensor or Python list of `Tensor`s representing the state(s)\n of the Markov chain(s) after taking exactly one step. Has same type and\n shape as `current_state`.\n kernel_results: `collections.namedtuple` of internal calculations used to\n advance the chain.\n\n Raises:\n ValueError: if there isn't one `step_size` or a list with same length as\n `current_state`.\n TypeError: if `not target_log_prob.dtype.is_floating`."} {"_id": "q_603", "text": "Build a loss function for variational inference in STS models.\n\n Variational inference searches for the distribution within some family of\n approximate posteriors that minimizes a divergence between the approximate\n posterior `q(z)` and true posterior `p(z|observed_time_series)`. By converting\n inference to optimization, it's generally much faster than sampling-based\n inference algorithms such as HMC. The tradeoff is that the approximating\n family rarely contains the true posterior, so it may miss important aspects of\n posterior structure (in particular, dependence between variables) and should\n not be blindly trusted. 
Results may vary; it's generally wise to compare to\n HMC to evaluate whether inference quality is sufficient for your task at hand.\n\n This method constructs a loss function for variational inference using the\n Kullback-Leibler divergence `KL[q(z) || p(z|observed_time_series)]`, with an\n approximating family given by independent Normal distributions transformed to\n the appropriate parameter space for each parameter. Minimizing this loss (the\n negative ELBO) maximizes a lower bound on the log model evidence `log\n p(observed_time_series)`. This is equivalent to the 'mean-field' method\n implemented in [1], and is a standard approach. The resulting posterior\n approximations are unimodal; they will tend to underestimate posterior\n uncertainty when the true posterior contains multiple modes (the `KL[q||p]`\n divergence encourages choosing a single mode) or dependence between variables.\n\n Args:\n model: An instance of `StructuralTimeSeries` representing a\n time-series model. This represents a joint distribution over\n time-series and their parameters with batch shape `[b1, ..., bN]`.\n observed_time_series: `float` `Tensor` of shape\n `concat([sample_shape, model.batch_shape, [num_timesteps, 1]])` where\n `sample_shape` corresponds to i.i.d. observations, and the trailing `[1]`\n dimension may (optionally) be omitted if `num_timesteps > 1`. May\n optionally be an instance of `tfp.sts.MaskedTimeSeries`, which includes\n a mask `Tensor` to specify timesteps with missing observations.\n init_batch_shape: Batch shape (Python `tuple`, `list`, or `int`) of initial\n states to optimize in parallel.\n Default value: `()`. 
(i.e., just run a single optimization).\n seed: Python integer to seed the random number generator.\n name: Python `str` name prefixed to ops created by this function.\n Default value: `None` (i.e., 'build_factored_variational_loss').\n\n Returns:\n variational_loss: `float` `Tensor` of shape\n `concat([init_batch_shape, model.batch_shape])`, encoding a stochastic\n estimate of an upper bound on the negative model evidence `-log p(y)`.\n Minimizing this loss performs variational inference; the gap between the\n variational bound and the true (generally unknown) model evidence\n corresponds to the divergence `KL[q||p]` between the approximate and true\n posterior.\n variational_distributions: `collections.OrderedDict` giving\n the approximate posterior for each model parameter. The keys are\n Python `str` parameter names in order, corresponding to\n `[param.name for param in model.parameters]`. The values are\n `tfd.Distribution` instances with batch shape\n `concat([init_batch_shape, model.batch_shape])`; these will typically be\n of the form `tfd.TransformedDistribution(tfd.Normal(...),\n bijector=param.bijector)`.\n\n #### Examples\n\n Assume we've built a structural time-series model:\n\n ```python\n day_of_week = tfp.sts.Seasonal(\n num_seasons=7,\n observed_time_series=observed_time_series,\n name='day_of_week')\n local_linear_trend = tfp.sts.LocalLinearTrend(\n observed_time_series=observed_time_series,\n name='local_linear_trend')\n model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],\n observed_time_series=observed_time_series)\n ```\n\n To run variational inference, we simply construct the loss and optimize\n it:\n\n ```python\n (variational_loss,\n variational_distributions) = tfp.sts.build_factored_variational_loss(\n model=model, observed_time_series=observed_time_series)\n\n train_op = tf.train.AdamOptimizer(0.1).minimize(variational_loss)\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n\n for step in 
range(200):\n _, loss_ = sess.run((train_op, variational_loss))\n\n if step % 20 == 0:\n print(\"step {} loss {}\".format(step, loss_))\n\n posterior_samples_ = sess.run({\n param_name: q.sample(50)\n for param_name, q in variational_distributions.items()})\n ```\n\n As a more complex example, we might try to avoid local optima by optimizing\n from multiple initializations in parallel, and selecting the result with the\n lowest loss:\n\n ```python\n (variational_loss,\n variational_distributions) = tfp.sts.build_factored_variational_loss(\n model=model, observed_time_series=observed_time_series,\n init_batch_shape=[10])\n\n train_op = tf.train.AdamOptimizer(0.1).minimize(variational_loss)\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n\n for step in range(200):\n _, loss_ = sess.run((train_op, variational_loss))\n\n if step % 20 == 0:\n print(\"step {} losses {}\".format(step, loss_))\n\n # Draw multiple samples to reduce Monte Carlo error in the optimized\n # variational bounds.\n avg_loss = np.mean(\n [sess.run(variational_loss) for _ in range(25)], axis=0)\n best_posterior_idx = np.argmin(avg_loss, axis=0).astype(np.int32)\n ```\n\n #### References\n\n [1]: Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, and\n David M. Blei. Automatic Differentiation Variational Inference. 
In\n _Journal of Machine Learning Research_, 2017.\n https://arxiv.org/abs/1603.00788"} {"_id": "q_604", "text": "Run an optimizer within the graph to minimize a loss function."} {"_id": "q_605", "text": "Compute mean and variance, accounting for a mask.\n\n Args:\n time_series_tensor: float `Tensor` time series of shape\n `concat([batch_shape, [num_timesteps]])`.\n broadcast_mask: bool `Tensor` of the same shape as `time_series`.\n Returns:\n mean: float `Tensor` of shape `batch_shape`.\n variance: float `Tensor` of shape `batch_shape`."} {"_id": "q_606", "text": "Get broadcast batch shape from distributions, statically if possible."} {"_id": "q_607", "text": "Combine MultivariateNormals into a factored joint distribution.\n\n Given a list of multivariate normal distributions\n `dist[i] = Normal(loc[i], scale[i])`, construct the joint\n distribution given by concatenating independent samples from these\n distributions. This is multivariate normal with mean vector given by the\n concatenation of the component mean vectors, and block-diagonal covariance\n matrix in which the blocks are the component covariances.\n\n Note that for computational efficiency, multivariate normals are represented\n by a 'scale' (factored covariance) linear operator rather than the full\n covariance matrix.\n\n Args:\n distributions: Python `iterable` of MultivariateNormal distribution\n instances (e.g., `tfd.MultivariateNormalDiag`,\n `tfd.MultivariateNormalTriL`, etc.). 
These must be broadcastable to a\n consistent batch shape, but may have different event shapes\n (i.e., defined over spaces of different dimension).\n\n Returns:\n joint_distribution: An instance of `tfd.MultivariateNormalLinearOperator`\n representing the joint distribution constructed by concatenating\n an independent sample from each input distributions."} {"_id": "q_608", "text": "Compute statistics of a provided time series, as heuristic initialization.\n\n Args:\n observed_time_series: `Tensor` representing a time series, or batch of time\n series, of shape either `batch_shape + [num_timesteps, 1]` or\n `batch_shape + [num_timesteps]` (allowed if `num_timesteps > 1`).\n\n Returns:\n observed_mean: `Tensor` of shape `batch_shape`, giving the empirical\n mean of each time series in the batch.\n observed_stddev: `Tensor` of shape `batch_shape`, giving the empirical\n standard deviation of each time series in the batch.\n observed_initial_centered: `Tensor of shape `batch_shape`, giving the\n initial value of each time series in the batch after centering\n (subtracting the mean)."} {"_id": "q_609", "text": "Ensures `observed_time_series_tensor` has a trailing dimension of size 1.\n\n The `tfd.LinearGaussianStateSpaceModel` Distribution has event shape of\n `[num_timesteps, observation_size]`, but canonical BSTS models\n are univariate, so their observation_size is always `1`. The extra trailing\n dimension gets annoying, so this method allows arguments with or without the\n extra dimension. 
There is no ambiguity except in the trivial special case\n where `num_timesteps = 1`; this can be avoided by specifying any unit-length\n series in the explicit `[num_timesteps, 1]` style.\n\n Most users should not call this method directly, and instead call\n `canonicalize_observed_time_series_with_mask`, which handles converting\n to `Tensor` and specifying an optional missingness mask.\n\n Args:\n observed_time_series_tensor: `Tensor` of shape\n `batch_shape + [num_timesteps, 1]` or `batch_shape + [num_timesteps]`,\n where `num_timesteps > 1`.\n\n Returns:\n expanded_time_series: `Tensor` of shape `batch_shape + [num_timesteps, 1]`."} {"_id": "q_610", "text": "Construct a predictive normal distribution that mixes over posterior draws.\n\n Args:\n means: float `Tensor` of shape\n `[num_posterior_draws, ..., num_timesteps]`.\n variances: float `Tensor` of shape\n `[num_posterior_draws, ..., num_timesteps]`.\n\n Returns:\n mixture_dist: `tfd.MixtureSameFamily(tfd.Independent(tfd.Normal))` instance\n representing a uniform mixture over the posterior samples, with\n `batch_shape = ...` and `event_shape = [num_timesteps]`."} {"_id": "q_611", "text": "Uses arg names to resolve distribution names."} {"_id": "q_612", "text": "Calculate the KL divergence between two `JointDistributionSequential`s.\n\n Args:\n d0: instance of a `JointDistributionSequential` object.\n d1: instance of a `JointDistributionSequential` object.\n name: (optional) Name to use for created operations.\n Default value: `\"kl_joint_joint\"`.\n\n Returns:\n kl_joint_joint: `Tensor` The sum of KL divergences between elemental\n distributions of two joint distributions.\n\n Raises:\n ValueError: when joint distributions have a different number of elemental\n distributions.\n ValueError: when either joint distribution has a distribution with dynamic\n dependency, i.e., when either joint distribution is not a collection of\n independent distributions."} {"_id": "q_613", "text": "Creates `dist_fn`, 
`dist_fn_wrapped`, `dist_fn_args`."} {"_id": "q_614", "text": "Creates a `tuple` of `tuple`s of dependencies.\n\n This function is **experimental**. That said, we encourage its use\n and ask that you report problems to `tfprobability@tensorflow.org`.\n\n Args:\n distribution_names: `list` of `str` or `None` names corresponding to each\n of `model` elements. (`None`s are expanded into the\n appropriate `str`.)\n leaf_name: `str` used when no maker depends on a particular\n `model` element.\n\n Returns:\n graph: `tuple` of `(str tuple)` pairs representing the name of each\n distribution (maker) and the names of its dependencies.\n\n #### Example\n\n ```python\n d = tfd.JointDistributionSequential([\n tfd.Independent(tfd.Exponential(rate=[100, 120]), 1),\n lambda e: tfd.Gamma(concentration=e[..., 0], rate=e[..., 1]),\n tfd.Normal(loc=0, scale=2.),\n lambda n, g: tfd.Normal(loc=n, scale=g),\n ])\n d._resolve_graph()\n # ==> (\n # ('e', ()),\n # ('g', ('e',)),\n # ('n', ()),\n # ('x', ('n', 'g')),\n # )\n ```"} {"_id": "q_615", "text": "Decorator function for argument bounds checking.\n\n This decorator is meant to be used with methods that require the first\n argument to be in the support of the distribution. If `validate_args` is\n `True`, the method is wrapped with an assertion that the first argument is\n greater than or equal to `loc`, since the support of the half-Cauchy\n distribution is given by `[loc, infinity)`.\n\n Args:\n f: method to be decorated.\n\n Returns:\n A decorated method that, when the `validate_args` attribute of the class\n is `True`, will assert that all elements in the first argument are within\n the support of the distribution before executing the original method."} {"_id": "q_616", "text": "Visualizes sequences as TensorBoard summaries.\n\n Args:\n seqs: A tensor of shape [n, t, h, w, c].\n name: String name of this summary.\n num: Integer for the number of examples to visualize. 
Defaults to\n all examples."} {"_id": "q_617", "text": "Visualizes the reconstruction of inputs in TensorBoard.\n\n Args:\n inputs: A tensor of the original inputs, of shape [batch, timesteps,\n h, w, c].\n reconstruct: A tensor of a reconstruction of inputs, of shape\n [batch, timesteps, h, w, c].\n num: Integer for the number of examples to visualize.\n name: String name of this summary."} {"_id": "q_618", "text": "Summarize the parameters of a distribution.\n\n Args:\n dist: A Distribution object with mean and standard deviation\n parameters.\n name: The name of the distribution.\n name_scope: The name scope of this summary."} {"_id": "q_619", "text": "Runs the model to generate a distribution for a single timestep.\n\n This generates a batched MultivariateNormalDiag distribution using\n the output of the recurrent model at the current timestep to\n parameterize the distribution.\n\n Args:\n inputs: The sampled value of `z` at the previous timestep, i.e.,\n `z_{t-1}`, of shape [..., dimensions].\n `z_0` should be set to the empty matrix.\n state: A tuple containing the (hidden, cell) state.\n\n Returns:\n A tuple of a MultivariateNormalDiag distribution, and the state of\n the recurrent function at the end of the current timestep. The\n distribution will have event shape [dimensions], batch shape\n [...], and sample shape [sample_shape, ..., dimensions]."} {"_id": "q_620", "text": "Static batch shape of models represented by this component.\n\n Returns:\n batch_shape: A `tf.TensorShape` giving the broadcast batch shape of\n all model parameters. This should match the batch shape of\n derived state space models, i.e.,\n `self.make_state_space_model(...).batch_shape`. 
It may be partially\n defined or unknown."} {"_id": "q_621", "text": "Instantiate this model as a Distribution over specified `num_timesteps`.\n\n Args:\n num_timesteps: Python `int` number of timesteps to model.\n param_vals: a list of `Tensor` parameter values in order corresponding to\n `self.parameters`, or a dict mapping from parameter names to values.\n initial_state_prior: an optional `Distribution` instance overriding the\n default prior on the model's initial state. This is used in forecasting\n (\"today's prior is yesterday's posterior\").\n initial_step: optional `int` specifying the initial timestep to model.\n This is relevant when the model contains time-varying components,\n e.g., holidays or seasonality.\n\n Returns:\n dist: a `LinearGaussianStateSpaceModel` Distribution object."} {"_id": "q_622", "text": "Sample from the joint prior over model parameters and trajectories.\n\n Args:\n num_timesteps: Scalar `int` `Tensor` number of timesteps to model.\n initial_step: Optional scalar `int` `Tensor` specifying the starting\n timestep.\n Default value: 0.\n params_sample_shape: Number of possible worlds to sample iid from the\n parameter prior, or more generally, `Tensor` `int` shape to fill with\n iid samples.\n Default value: [] (i.e., draw a single sample and don't expand the\n shape).\n trajectories_sample_shape: For each sampled set of parameters, number\n of trajectories to sample, or more generally, `Tensor` `int` shape to\n fill with iid samples.\n Default value: [] (i.e., draw a single sample and don't expand the\n shape).\n seed: Python `int` random seed.\n\n Returns:\n trajectories: `float` `Tensor` of shape\n `trajectories_sample_shape + params_sample_shape + [num_timesteps, 1]`\n containing all sampled trajectories.\n param_samples: list of sampled parameter value `Tensor`s, in order\n corresponding to `self.parameters`, each of shape\n `params_sample_shape + prior.batch_shape + prior.event_shape`."} {"_id": "q_623", "text": "Numpy 
implementation of `tf.argsort`."} {"_id": "q_624", "text": "Numpy implementation of `tf.sort`."} {"_id": "q_625", "text": "Normal distribution function.\n\n Returns the area under the Gaussian probability density function, integrated\n from minus infinity to x:\n\n ```\n 1 / x\n ndtr(x) = ---------- | exp(-0.5 t**2) dt\n sqrt(2 pi) /-inf\n\n = 0.5 (1 + erf(x / sqrt(2)))\n = 0.5 erfc(x / sqrt(2))\n ```\n\n Args:\n x: `Tensor` of type `float32`, `float64`.\n name: Python string. A name for the operation (default=\"ndtr\").\n\n Returns:\n ndtr: `Tensor` with `dtype=x.dtype`.\n\n Raises:\n TypeError: if `x` is not floating-type."} {"_id": "q_626", "text": "Implements ndtr core logic."} {"_id": "q_627", "text": "The inverse of the CDF of the Normal distribution function.\n\n Returns x such that the area under the pdf from minus infinity to x is equal\n to p.\n\n A piece-wise rational approximation is done for the function.\n This is a port of the implementation in netlib.\n\n Args:\n p: `Tensor` of type `float32`, `float64`.\n name: Python string. A name for the operation (default=\"ndtri\").\n\n Returns:\n x: `Tensor` with `dtype=p.dtype`.\n\n Raises:\n TypeError: if `p` is not floating-type."} {"_id": "q_628", "text": "Log Normal distribution function.\n\n For details of the Normal distribution function see `ndtr`.\n\n This function calculates `(log o ndtr)(x)` by either calling `log(ndtr(x))` or\n using an asymptotic series. 
Specifically:\n - For `x > upper_segment`, use the approximation `-ndtr(-x)` based on\n `log(1-x) ~= -x, x << 1`.\n - For `lower_segment < x <= upper_segment`, use the existing `ndtr` technique\n and take a log.\n - For `x <= lower_segment`, we use the series approximation of erf to compute\n the log CDF directly.\n\n The `lower_segment` is set based on the precision of the input:\n\n ```\n lower_segment = { -20, x.dtype=float64\n { -10, x.dtype=float32\n upper_segment = { 8, x.dtype=float64\n { 5, x.dtype=float32\n ```\n\n When `x < lower_segment`, the `ndtr` asymptotic series approximation is:\n\n ```\n ndtr(x) = scale * (1 + sum) + R_N\n scale = exp(-0.5 x**2) / (-x sqrt(2 pi))\n sum = Sum{(-1)^n (2n-1)!! / (x**2)^n, n=1:N}\n R_N = O(exp(-0.5 x**2) (2N+1)!! / |x|^{2N+3})\n ```\n\n where `(2n-1)!! = (2n-1) (2n-3) (2n-5) ... (3) (1)` is a\n [double-factorial](https://en.wikipedia.org/wiki/Double_factorial).\n\n\n Args:\n x: `Tensor` of type `float32`, `float64`.\n series_order: Positive Python `integer`. Maximum depth to\n evaluate the asymptotic expansion. This is the `N` above.\n name: Python string. 
A name for the operation (default=\"log_ndtr\").\n\n Returns:\n log_ndtr: `Tensor` with `dtype=x.dtype`.\n\n Raises:\n TypeError: if `x.dtype` is not handled.\n TypeError: if `series_order` is not a Python `integer`.\n ValueError: if `series_order` is not in `[0, 30]`."} {"_id": "q_629", "text": "Calculates the asymptotic series used in log_ndtr."} {"_id": "q_630", "text": "Log Laplace distribution function.\n\n This function calculates `Log[L(x)]`, where `L(x)` is the cumulative\n distribution function of the Laplace distribution, i.e.\n\n ```L(x) := 0.5 * int_{-infty}^x e^{-|t|} dt```\n\n For numerical accuracy, `L(x)` is computed in different ways depending on `x`,\n\n ```\n x <= 0:\n Log[L(x)] = Log[0.5] + x, which is exact\n\n 0 < x:\n Log[L(x)] = Log[1 - 0.5 * e^{-x}], which is exact\n ```\n\n Args:\n x: `Tensor` of type `float32`, `float64`.\n name: Python string. A name for the operation (default=\"log_ndtr\").\n\n Returns:\n `Tensor` with `dtype=x.dtype`.\n\n Raises:\n TypeError: if `x.dtype` is not handled."} {"_id": "q_631", "text": "Joint log probability function."} {"_id": "q_632", "text": "Runs HMC on the text-messages unnormalized posterior."} {"_id": "q_633", "text": "Compute the marginal of this GP over function values at `index_points`.\n\n Args:\n index_points: `float` `Tensor` representing finite (batch of) vector(s) of\n points in the index set over which the GP is defined. Shape has the form\n `[b1, ..., bB, e, f1, ..., fF]` where `F` is the number of feature\n dimensions and must equal `kernel.feature_ndims` and `e` is the number\n (size) of index points in each batch. Ultimately this distribution\n corresponds to an `e`-dimensional multivariate normal. 
The batch shape\n must be broadcastable with `kernel.batch_shape` and any batch dims\n yielded by `mean_fn`.\n\n Returns:\n marginal: a `Normal` or `MultivariateNormalLinearOperator` distribution,\n according to whether `index_points` consists of one or many index\n points, respectively."} {"_id": "q_634", "text": "Return `index_points` if not None, else `self._index_points`.\n\n Args:\n index_points: if given, this is what is returned; else,\n `self._index_points`\n\n Returns:\n index_points: the given arg, if not None, else the class member\n `self._index_points`.\n\n Raises:\n ValueError: if `index_points` and `self._index_points` are both `None`."} {"_id": "q_635", "text": "Creates a stacked IAF bijector.\n\n This bijector operates on vector-valued events.\n\n Args:\n total_event_size: Number of dimensions to operate over.\n num_hidden_layers: How many hidden layers to use in each IAF.\n seed: Random seed for the initializers.\n dtype: DType for the variables.\n\n Returns:\n bijector: The created bijector."} {"_id": "q_636", "text": "Runs one iteration of NeuTra.\n\n Args:\n current_state: `Tensor` or Python `list` of `Tensor`s representing the\n current state(s) of the Markov chain(s). The first `r` dimensions index\n independent chains, `r = tf.rank(target_log_prob_fn(*current_state))`.\n previous_kernel_results: `collections.namedtuple` containing `Tensor`s\n representing values from previous calls to this function (or from the\n `bootstrap_results` function.)\n\n Returns:\n next_state: Tensor or Python list of `Tensor`s representing the state(s)\n of the Markov chain(s) after taking exactly one step. 
Has same type and\n shape as `current_state`.\n kernel_results: `collections.namedtuple` of internal calculations used to\n advance the chain."} {"_id": "q_637", "text": "Trains the bijector and creates initial `previous_kernel_results`.\n\n The supplied `state` is only used to determine the number of chains to run\n in parallel_iterations\n\n Args:\n state: `Tensor` or Python `list` of `Tensor`s representing the initial\n state(s) of the Markov chain(s). The first `r` dimensions index\n independent chains, `r = tf.rank(target_log_prob_fn(*state))`.\n\n Returns:\n kernel_results: Instance of\n `UncalibratedHamiltonianMonteCarloKernelResults` inside\n `MetropolisHastingsResults` inside `TransformedTransitionKernelResults`\n inside `SimpleStepSizeAdaptationResults`."} {"_id": "q_638", "text": "Convenience function analogous to tf.squared_difference."} {"_id": "q_639", "text": "Performs distributional transform of the mixture samples.\n\n Distributional transform removes the parameters from samples of a\n multivariate distribution by applying conditional CDFs:\n (F(x_1), F(x_2 | x1_), ..., F(x_d | x_1, ..., x_d-1))\n (the indexing is over the \"flattened\" event dimensions).\n The result is a sample of product of Uniform[0, 1] distributions.\n\n We assume that the components are factorized, so the conditional CDFs become\n F(x_i | x_1, ..., x_i-1) = sum_k w_i^k F_k (x_i),\n where w_i^k is the posterior mixture weight: for i > 0\n w_i^k = w_k prob_k(x_1, ..., x_i-1) / sum_k' w_k' prob_k'(x_1, ..., x_i-1)\n and w_0^k = w_k is the mixture probability of the k-th component.\n\n Arguments:\n x: Sample of mixture distribution\n\n Returns:\n Result of the distributional transform"} {"_id": "q_640", "text": "Utility method to decompose a joint posterior into components.\n\n Args:\n model: `tfp.sts.Sum` instance defining an additive STS model.\n posterior_means: float `Tensor` of shape `concat(\n [[num_posterior_draws], batch_shape, num_timesteps, latent_size])`\n representing 
the posterior mean over latents in an\n `AdditiveStateSpaceModel`.\n posterior_covs: float `Tensor` of shape `concat(\n [[num_posterior_draws], batch_shape, num_timesteps,\n latent_size, latent_size])`\n representing the posterior marginal covariances over latents in an\n `AdditiveStateSpaceModel`.\n parameter_samples: Python `list` of `Tensors` representing posterior\n samples of model parameters, with shapes `[concat([\n [num_posterior_draws], param.prior.batch_shape,\n param.prior.event_shape]) for param in model.parameters]`. This may\n optionally also be a map (Python `dict`) of parameter names to\n `Tensor` values.\n\n Returns:\n component_dists: A `collections.OrderedDict` instance mapping\n component StructuralTimeSeries instances (elements of `model.components`)\n to `tfd.Distribution` instances representing the posterior marginal\n distributions on the process modeled by each component. Each distribution\n has batch shape matching that of `posterior_means`/`posterior_covs`, and\n event shape of `[num_timesteps]`."} {"_id": "q_641", "text": "Decompose a forecast distribution into contributions from each component.\n\n Args:\n model: An instance of `tfp.sts.Sum` representing a structural time series\n model.\n forecast_dist: A `Distribution` instance returned by `tfp.sts.forecast()`.\n (specifically, must be a `tfd.MixtureSameFamily` over a\n `tfd.LinearGaussianStateSpaceModel` parameterized by posterior samples).\n parameter_samples: Python `list` of `Tensors` representing posterior samples\n of model parameters, with shapes `[concat([[num_posterior_draws],\n param.prior.batch_shape, param.prior.event_shape]) for param in\n model.parameters]`. 
This may optionally also be a map (Python `dict`) of\n parameter names to `Tensor` values.\n Returns:\n component_forecasts: A `collections.OrderedDict` instance mapping\n component StructuralTimeSeries instances (elements of `model.components`)\n to `tfd.Distribution` instances representing the marginal forecast for\n each component. Each distribution has batch and event shape matching\n `forecast_dist` (specifically, the event shape is\n `[num_steps_forecast]`).\n\n #### Examples\n\n Suppose we've built a model, fit it to data, and constructed a forecast\n distribution:\n\n ```python\n day_of_week = tfp.sts.Seasonal(\n num_seasons=7,\n observed_time_series=observed_time_series,\n name='day_of_week')\n local_linear_trend = tfp.sts.LocalLinearTrend(\n observed_time_series=observed_time_series,\n name='local_linear_trend')\n model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],\n observed_time_series=observed_time_series)\n\n num_steps_forecast = 50\n samples, kernel_results = tfp.sts.fit_with_hmc(model, observed_time_series)\n forecast_dist = tfp.sts.forecast(model, observed_time_series,\n parameter_samples=samples,\n num_steps_forecast=num_steps_forecast)\n ```\n\n To extract the forecast for individual components, pass the forecast\n distribution into `decompose_forecast_by_component`:\n\n ```python\n component_forecasts = decompose_forecast_by_component(\n model, forecast_dist, samples)\n\n # Component mean and stddev have shape `[num_steps_forecast]`.\n day_of_week_effect_mean = component_forecasts[day_of_week].mean()\n day_of_week_effect_stddev = component_forecasts[day_of_week].stddev()\n ```\n\n Using the component forecasts, we can visualize the uncertainty for each\n component:\n\n ```\n from matplotlib import pylab as plt\n num_components = len(component_forecasts)\n xs = np.arange(num_steps_forecast)\n fig = plt.figure(figsize=(12, 3 * num_components))\n for i, (component, component_dist) in enumerate(component_forecasts.items()):\n\n # If 
in graph mode, replace `.numpy()` with `.eval()` or `sess.run()`.\n component_mean = component_dist.mean().numpy()\n component_stddev = component_dist.stddev().numpy()\n\n ax = fig.add_subplot(num_components, 1, 1 + i)\n ax.plot(xs, component_mean, lw=2)\n ax.fill_between(xs,\n component_mean - 2 * component_stddev,\n component_mean + 2 * component_stddev,\n alpha=0.5)\n ax.set_title(component.name)\n ```"} {"_id": "q_642", "text": "Get tensor that the random variable corresponds to."} {"_id": "q_643", "text": "In a session, computes and returns the value of this random variable.\n\n This is not a graph construction method, it does not add ops to the graph.\n\n This convenience method requires a session where the graph\n containing this variable has been launched. If no session is\n passed, the default session is used.\n\n Args:\n session: tf.BaseSession.\n The `tf.Session` to use to evaluate this random variable. If\n none, the default session is used.\n feed_dict: dict.\n A dictionary that maps `tf.Tensor` objects to feed values. See\n `tf.Session.run()` for a description of the valid feed values.\n\n Returns:\n Value of the random variable.\n\n #### Examples\n\n ```python\n x = Normal(0.0, 1.0)\n with tf.Session() as sess:\n # Usage passing the session explicitly.\n print(x.eval(sess))\n # Usage with the default session. The 'with' block\n # above makes 'sess' the default session.\n print(x.eval())\n ```"} {"_id": "q_644", "text": "Value as NumPy array, only available for TF Eager."} {"_id": "q_645", "text": "Posterior Normal distribution with conjugate prior on the mean.\n\n This model assumes that `n` observations (with sum `s`) come from a\n Normal with unknown mean `loc` (described by the Normal `prior`)\n and known variance `scale**2`. 
The \"known scale posterior\" is\n the distribution of the unknown `loc`.\n\n Accepts a prior Normal distribution object, having parameters\n `loc0` and `scale0`, as well as known `scale` values of the predictive\n distribution(s) (also assumed Normal),\n and statistical estimates `s` (the sum(s) of the observations) and\n `n` (the number(s) of observations).\n\n Returns a posterior (also Normal) distribution object, with parameters\n `(loc', scale'**2)`, where:\n\n ```\n mu ~ N(mu', sigma'**2)\n sigma'**2 = 1/(1/sigma0**2 + n/sigma**2),\n mu' = (mu0/sigma0**2 + s/sigma**2) * sigma'**2.\n ```\n\n Distribution parameters from `prior`, as well as `scale`, `s`, and `n`.\n will broadcast in the case of multidimensional sets of parameters.\n\n Args:\n prior: `Normal` object of type `dtype`:\n the prior distribution having parameters `(loc0, scale0)`.\n scale: tensor of type `dtype`, taking values `scale > 0`.\n The known stddev parameter(s).\n s: Tensor of type `dtype`. The sum(s) of observations.\n n: Tensor of type `int`. The number(s) of observations.\n\n Returns:\n A new Normal posterior distribution object for the unknown observation\n mean `loc`.\n\n Raises:\n TypeError: if dtype of `s` does not match `dtype`, or `prior` is not a\n Normal object."} {"_id": "q_646", "text": "Build a scale-and-shift function using a multi-layer neural network.\n\n This will be wrapped in a make_template to ensure the variables are only\n created once. It takes the `d`-dimensional input x[0:d] and returns the `D-d`\n dimensional outputs `loc` (\"mu\") and `log_scale` (\"alpha\").\n\n The default template does not support conditioning and will raise an\n exception if `condition_kwargs` are passed to it. To use conditioning in\n real nvp bijector, implement a conditioned shift/scale template that\n handles the `condition_kwargs`.\n\n Arguments:\n hidden_layers: Python `list`-like of non-negative integer, scalars\n indicating the number of units in each hidden layer. 
Default: `[512, 512]`.\n shift_only: Python `bool` indicating if only the `shift` term shall be\n computed (i.e. NICE bijector). Default: `False`.\n activation: Activation function (callable). Explicitly setting to `None`\n implies a linear activation.\n name: A name for ops managed by this function. Default:\n \"real_nvp_default_template\".\n *args: `tf.layers.dense` arguments.\n **kwargs: `tf.layers.dense` keyword arguments.\n\n Returns:\n shift: `Float`-like `Tensor` of shift terms (\"mu\" in\n [Papamakarios et al. (2016)][1]).\n log_scale: `Float`-like `Tensor` of log(scale) terms (\"alpha\" in\n [Papamakarios et al. (2016)][1]).\n\n Raises:\n NotImplementedError: if rightmost dimension of `inputs` is unknown prior to\n graph execution, or if `condition_kwargs` is not empty.\n\n #### References\n\n [1]: George Papamakarios, Theo Pavlakou, and Iain Murray. Masked\n Autoregressive Flow for Density Estimation. In _Neural Information\n Processing Systems_, 2017. https://arxiv.org/abs/1705.07057"} {"_id": "q_647", "text": "Returns a batch of points chosen uniformly from the unit hypersphere."} {"_id": "q_648", "text": "Returns the log normalization of an LKJ distribution.\n\n Args:\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n log_z: A Tensor of the same shape and dtype as `concentration`, containing\n the corresponding log normalizers."} {"_id": "q_649", "text": "Returns explicit dtype from `args_list` if exists, else preferred_dtype."} {"_id": "q_650", "text": "Factory for implementing summary statistics, e.g., mean, stddev, mode."} {"_id": "q_651", "text": "Estimate a lower bound on effective sample size for each independent chain.\n\n Roughly speaking, \"effective sample size\" (ESS) is the size of an iid sample\n with the same variance as `state`.\n\n More precisely, given a stationary sequence of possibly correlated random\n variables `X_1, X_2,...,X_N`, each identically distributed, ESS is the number\n such that\n\n 
```Variance{ N**-1 * Sum{X_i} } = ESS**-1 * Variance{ X_1 }.```\n\n If the sequence is uncorrelated, `ESS = N`. In general, one should expect\n `ESS <= N`, with more highly correlated sequences having smaller `ESS`.\n\n Args:\n states: `Tensor` or list of `Tensor` objects. Dimension zero should index\n identically distributed states.\n filter_threshold: `Tensor` or list of `Tensor` objects.\n Must broadcast with `state`. The auto-correlation sequence is truncated\n after the first appearance of a term less than `filter_threshold`.\n Setting to `None` means we use no threshold filter. Since `|R_k| <= 1`,\n setting to any number less than `-1` has the same effect.\n filter_beyond_lag: `Tensor` or list of `Tensor` objects. Must be\n `int`-like and scalar valued. The auto-correlation sequence is truncated\n to this length. Setting to `None` means we do not filter based on number\n of lags.\n name: `String` name to prepend to created ops.\n\n Returns:\n ess: `Tensor` or list of `Tensor` objects. The effective sample size of\n each component of `states`. 
Shape will be `states.shape[1:]`.\n\n Raises:\n ValueError: If `states` and `filter_threshold` or `states` and\n `filter_beyond_lag` are both lists with different lengths.\n\n #### Examples\n\n We use ESS to estimate standard error.\n\n ```\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n target = tfd.MultivariateNormalDiag(scale_diag=[1., 2.])\n\n # Get 1000 states from one chain.\n states = tfp.mcmc.sample_chain(\n num_burnin_steps=200,\n num_results=1000,\n current_state=tf.constant([0., 0.]),\n kernel=tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=target.log_prob,\n step_size=0.05,\n num_leapfrog_steps=20))\n states.shape\n ==> (1000, 2)\n\n ess = effective_sample_size(states)\n ==> Shape (2,) Tensor\n\n mean, variance = tf.nn.moments(states, axes=[0])\n standard_error = tf.sqrt(variance / ess)\n ```\n\n Some math shows that, with `R_k` the auto-correlation sequence,\n `R_k := Covariance{X_1, X_{1+k}} / Variance{X_1}`, we have\n\n ```ESS(N) = N / [ 1 + 2 * ( (N - 1) / N * R_1 + ... + 1 / N * R_{N-1} ) ]```\n\n This function estimates the above by first estimating the auto-correlation.\n Since `R_k` must be estimated using only `N - k` samples, it becomes\n progressively noisier for larger `k`. For this reason, the summation over\n `R_k` should be truncated at some number `filter_beyond_lag < N`. Since many\n MCMC methods generate chains where `R_k > 0`, a reasonable criterion is to\n truncate at the first index where the estimated auto-correlation becomes\n negative.\n\n The arguments `filter_beyond_lag`, `filter_threshold` are filters intended to\n remove noisy tail terms from `R_k`. 
They combine in an \"OR\" manner, meaning\n terms are removed if they were to be filtered under the `filter_beyond_lag` OR\n `filter_threshold` criteria."} {"_id": "q_652", "text": "ESS computation for one single Tensor argument."} {"_id": "q_653", "text": "potential_scale_reduction for one single state `Tensor`."} {"_id": "q_654", "text": "Broadcast a listable secondary_arg to that of states."} {"_id": "q_655", "text": "Use LogNormal quantiles to form quadrature on positive-reals.\n\n Args:\n loc: `float`-like (batch of) scalar `Tensor`; the location parameter of\n the LogNormal prior.\n scale: `float`-like (batch of) scalar `Tensor`; the scale parameter of\n the LogNormal prior.\n quadrature_size: Python `int` scalar representing the number of quadrature\n points.\n validate_args: Python `bool`, default `False`. When `True` distribution\n parameters are checked for validity despite possibly degrading runtime\n performance. When `False` invalid inputs may silently render incorrect\n outputs.\n name: Python `str` name prefixed to Ops created by this class.\n\n Returns:\n grid: (Batch of) length-`quadrature_size` vectors representing the\n `log_rate` parameters of a `Poisson`.\n probs: (Batch of) length-`quadrature_size` vectors representing the\n weight associated with each `grid` value."} {"_id": "q_656", "text": "Helper to merge which handles merging one value."} {"_id": "q_657", "text": "Converts nested `tuple`, `list`, or `dict` to nested `tuple`."} {"_id": "q_658", "text": "Computes the doubling increments for the left end point.\n\n The doubling procedure expands an initial interval to find a superset of the\n true slice. 
At each doubling iteration, the interval width is doubled to\n either the left or the right hand side with equal probability.\n If, initially, the left end point is at `L(0)` and the width of the\n interval is `w(0)`, then the left end point and the width at the\n k-th iteration (denoted L(k) and w(k) respectively) are given by the following\n recursions:\n\n ```none\n w(k) = 2 * w(k-1)\n L(k) = L(k-1) - w(k-1) * X_k, X_k ~ Bernoulli(0.5)\n or, L(0) - L(k) = w(0) Sum(2^i * X(i+1), 0 <= i < k)\n ```\n\n This function computes the sequence of `L(0)-L(k)` and `w(k)` for k between 0\n and `max_doublings` independently for each chain.\n\n Args:\n batch_shape: Positive int32 `tf.Tensor`. The batch shape.\n max_doublings: Scalar positive int32 `tf.Tensor`. The maximum number of\n doublings to consider.\n step_size: A real `tf.Tensor` with shape compatible with [num_chains].\n The size of the initial interval.\n seed: (Optional) positive int. The random seed. If None, no seed is set.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'find_slice_bounds').\n\n Returns:\n left_increments: A tensor of shape (max_doublings+1, batch_shape). The\n relative position of the left end point after the doublings.\n widths: A tensor of shape (max_doublings+1, ones_like(batch_shape)). The\n widths of the intervals at each stage of the doubling."} {"_id": "q_659", "text": "Finds the index of the optimal set of bounds for each chain.\n\n For each chain, finds the smallest set of bounds for which both edges lie\n outside the slice. 
This is equivalent to the point at which a for loop\n implementation (P715 of Neal (2003)) of the algorithm would terminate.\n\n Performs the following calculation, where i is the number of doublings that\n have been performed and k is the max number of doublings:\n\n (2 * k - i) * flag + i\n\n The argmax of the above returns the earliest index where the bounds were\n outside the slice and if there is no such point, the widest bounds.\n\n Args:\n x: A tensor of shape (max_doublings+1, batch_shape). Type int32, with value\n 0 or 1. Indicates if this set of bounds is outside the slice.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'find_slice_bounds').\n\n Returns:\n indices: A tensor of shape batch_shape. Type int32, with the index of the\n first set of bounds outside the slice and if there are none, the index of\n the widest set."} {"_id": "q_660", "text": "Returns the bounds of the slice at each stage of doubling procedure.\n\n Precomputes the x coordinates of the left (L) and right (R) endpoints of the\n interval `I` produced in the \"doubling\" algorithm [Neal 2003][1] P713. Note\n that we simultaneously compute all possible doubling values for each chain,\n for the reason that at small-medium densities, the gains from parallel\n evaluation might cause a speed-up, but this will be benchmarked against the\n while loop implementation.\n\n Args:\n x_initial: `tf.Tensor` of any shape and any real dtype consumable by\n `target_log_prob`. The initial points.\n target_log_prob: A callable taking a `tf.Tensor` of shape and dtype as\n `x_initial` and returning a tensor of the same shape. The log density of\n the target distribution.\n log_slice_heights: `tf.Tensor` with the same shape as `x_initial` and the\n same dtype as returned by `target_log_prob`. The log of the height of the\n slice for each chain. 
The values must be bounded above by\n `target_log_prob(x_initial)`.\n max_doublings: Scalar positive int32 `tf.Tensor`. The maximum number of\n doublings to consider.\n step_size: `tf.Tensor` with same dtype as and shape compatible with\n `x_initial`. The size of the initial interval.\n seed: (Optional) positive int. The random seed. If None, no seed is set.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'find_slice_bounds').\n\n Returns:\n upper_bounds: A tensor of same shape and dtype as `x_initial`. Slice upper\n bounds for each chain.\n lower_bounds: A tensor of same shape and dtype as `x_initial`. Slice lower\n bounds for each chain.\n both_ok: A tensor of shape `x_initial` and boolean dtype. Indicates if both\n the chosen upper and lower bound lie outside of the slice.\n\n #### References\n\n [1]: Radford M. Neal. Slice Sampling. The Annals of Statistics. 2003, Vol 31,\n No. 3, 705-767.\n https://projecteuclid.org/download/pdf_1/euclid.aos/1056562461"} {"_id": "q_661", "text": "Samples from the slice by applying shrinkage for rejected points.\n\n Implements the one dimensional slice sampling algorithm of Neal (2003), with a\n doubling algorithm (Neal 2003 P715 Fig. 4), which doubles the size of the\n interval at each iteration and shrinkage (Neal 2003 P716 Fig. 5), which\n reduces the width of the slice when a selected point is rejected, by setting\n the relevant bound to that value. Randomly sampled points are checked for\n two criteria: that they lie within the slice and that they pass the\n acceptability check (Neal 2003 P717 Fig. 6), which tests that the new state\n could have generated the previous one.\n\n Args:\n x_initial: A tensor of any shape. The initial positions of the chains. This\n function assumes that all the dimensions of `x_initial` are batch\n dimensions (i.e. 
the event shape is `[]`).\n target_log_prob: Callable accepting a tensor like `x_initial` and returning\n a tensor containing the log density at that point of the same shape.\n log_slice_heights: Tensor of the same shape and dtype as the return value\n of `target_log_prob` when applied to `x_initial`. The log of the height of\n the chosen slice.\n step_size: A tensor of shape and dtype compatible with `x_initial`. The min\n interval size in the doubling algorithm.\n lower_bounds: Tensor of same shape and dtype as `x_initial`. Slice lower\n bounds for each chain.\n upper_bounds: Tensor of same shape and dtype as `x_initial`. Slice upper\n bounds for each chain.\n seed: (Optional) positive int. The random seed. If None, no seed is set.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'find_slice_bounds').\n\n Returns:\n x_proposed: A tensor of the same shape and dtype as `x_initial`. The next\n proposed state of the chain."} {"_id": "q_662", "text": "Creates a value-setting interceptor.\n\n This function creates an interceptor that sets values of Edward2 random\n variable objects. This is useful for a range of tasks, including conditioning\n on observed data, sampling from posterior predictive distributions, and as a\n building block of inference primitives such as computing log joint\n probabilities (see examples below).\n\n Args:\n **model_kwargs: dict of str to Tensor. Keys are the names of random\n variables in the model to which this interceptor is being applied. Values\n are Tensors to set their value to. 
Variables not included in this dict\n will not be set and will maintain their existing value semantics (by\n default, a sample from the parent-conditional distribution).\n\n Returns:\n set_values: function that sets the value of intercepted ops.\n\n #### Examples\n\n Consider for illustration a model with latent `z` and\n observed `x`, and a corresponding trainable posterior model:\n\n ```python\n num_observations = 10\n def model():\n z = ed.Normal(loc=0, scale=1., name='z') # log rate\n x = ed.Poisson(rate=tf.exp(z) * tf.ones(num_observations), name='x')\n return x\n\n def variational_model():\n return ed.Normal(loc=tf.Variable(0.),\n scale=tf.nn.softplus(tf.Variable(-4.)),\n name='z') # for simplicity, match name of the model RV.\n ```\n\n We can use a value-setting interceptor to condition the model on observed\n data. This approach is slightly more cumbersome than that of partially\n evaluating the complete log-joint function, but has the potential advantage\n that it returns a new model callable, which may be used to sample downstream\n variables, passed into additional transformations, etc.\n\n ```python\n x_observed = np.array([6, 3, 1, 8, 7, 0, 6, 4, 7, 5])\n def observed_model():\n with ed.interception(make_value_setter(x=x_observed)):\n model()\n observed_log_joint_fn = ed.make_log_joint_fn(observed_model)\n\n # After fixing 'x', the observed log joint is now only a function of 'z'.\n # This enables us to define a variational lower bound,\n # `E_q[ log p(x, z) - log q(z)]`, simply by evaluating the observed and\n # variational log joints at variational samples.\n variational_log_joint_fn = ed.make_log_joint_fn(variational_model)\n with ed.tape() as variational_sample: # Sample trace from variational model.\n variational_model()\n elbo_loss = -(observed_log_joint_fn(**variational_sample) -\n variational_log_joint_fn(**variational_sample))\n ```\n\n After performing inference by minimizing the variational loss, a value-setting\n interceptor enables 
simulation from the posterior predictive distribution:\n\n ```python\n with ed.tape() as posterior_samples: # tape is a map {rv.name : rv}\n variational_model()\n with ed.interception(ed.make_value_setter(**posterior_samples)):\n x = model()\n # x is a sample from p(X | Z = z') where z' ~ q(z) (the variational model)\n ```\n\n As another example, using a value setter inside of `ed.tape` enables\n computing the log joint probability, by setting all variables to\n posterior values and then accumulating the log probs of those values under\n the induced parent-conditional distributions. This is one way that we could\n have implemented `ed.make_log_joint_fn`:\n\n ```python\n def make_log_joint_fn_demo(model):\n def log_joint_fn(**model_kwargs):\n with ed.tape() as model_tape:\n with ed.make_value_setter(**model_kwargs):\n model()\n\n # accumulate sum_i log p(X_i = x_i | X_{:i-1} = x_{:i-1})\n log_prob = 0.\n for rv in model_tape.values():\n log_prob += tf.reduce_sum(rv.log_prob(rv.value))\n\n return log_prob\n return log_joint_fn\n ```"} {"_id": "q_663", "text": "Filters inputs to be compatible with function `f`'s signature.\n\n Args:\n f: Function according to whose input signature we filter arguments.\n src_kwargs: Keyword arguments to filter according to `f`.\n\n Returns:\n kwargs: Dict of key-value pairs in `src_kwargs` which exist in `f`'s\n signature."} {"_id": "q_664", "text": "Network block for VGG."} {"_id": "q_665", "text": "Builds a tree at a given tree depth and at a given state.\n\n The `current` state is immediately adjacent to, but outside of,\n the subtrajectory spanned by the returned `forward` and `reverse` states.\n\n Args:\n value_and_gradients_fn: Python callable which takes an argument like\n `*current_state` and returns a tuple of its (possibly unnormalized)\n log-density under the target distribution and its gradient with respect to\n each state.\n current_state: List of `Tensor`s representing the current states of the\n NUTS trajectory.\n 
current_target_log_prob: Scalar `Tensor` representing the value of\n `target_log_prob_fn` at the `current_state`.\n current_grads_target_log_prob: List of `Tensor`s representing gradient of\n `current_target_log_prob` with respect to `current_state`. Must have same\n shape as `current_state`.\n current_momentum: List of `Tensor`s representing the momentums of\n `current_state`. Must have same shape as `current_state`.\n direction: int that is either -1 or 1. It determines whether to perform\n leapfrog integration backwards (reverse) or forward in time respectively.\n depth: non-negative int that indicates how deep of a tree to build.\n Each call to `_build_tree` takes `2**depth` leapfrog steps.\n step_size: List of `Tensor`s representing the step sizes for the leapfrog\n integrator. Must have same shape as `current_state`.\n log_slice_sample: The log of an auxiliary slice variable. It is used\n together with `max_simulation_error` to avoid simulating trajectories with\n too much numerical error.\n max_simulation_error: Maximum simulation error to tolerate before\n terminating the trajectory. Simulation error is the\n `log_slice_sample` minus the log-joint probability at the simulated state.\n seed: Integer to seed the random number generator.\n\n Returns:\n reverse_state: List of `Tensor`s representing the \"reverse\" states of the\n NUTS trajectory. Has same shape as `current_state`.\n reverse_target_log_prob: Scalar `Tensor` representing the value of\n `target_log_prob_fn` at the `reverse_state`.\n reverse_grads_target_log_prob: List of `Tensor`s representing gradient of\n `reverse_target_log_prob` with respect to `reverse_state`. Has same shape\n as `reverse_state`.\n reverse_momentum: List of `Tensor`s representing the momentums of\n `reverse_state`. Has same shape as `reverse_state`.\n forward_state: List of `Tensor`s representing the \"forward\" states of the\n NUTS trajectory. 
Has same shape as `current_state`.\n forward_target_log_prob: Scalar `Tensor` representing the value of\n `target_log_prob_fn` at the `forward_state`.\n forward_grads_target_log_prob: List of `Tensor`s representing gradient of\n `forward_target_log_prob` with respect to `forward_state`. Has same shape\n as `forward_state`.\n forward_momentum: List of `Tensor`s representing the momentums of\n `forward_state`. Has same shape as `forward_state`.\n next_state: List of `Tensor`s representing the next states of the NUTS\n trajectory. Has same shape as `current_state`.\n next_target_log_prob: Scalar `Tensor` representing the value of\n `target_log_prob_fn` at `next_state`.\n next_grads_target_log_prob: List of `Tensor`s representing the gradient of\n `next_target_log_prob` with respect to `next_state`.\n num_states: Number of acceptable candidate states in the subtree. A state is\n acceptable if it is \"in the slice\", that is, if its log-joint probability\n with its momentum is greater than `log_slice_sample`.\n continue_trajectory: bool determining whether to continue the simulation\n trajectory. 
The trajectory is continued if no U-turns are encountered\n within the built subtree, and if the log-probability accumulation due to\n integration error does not exceed `max_simulation_error`."} {"_id": "q_666", "text": "Wraps value and gradients function to assist with None gradients."} {"_id": "q_667", "text": "Whether two given states and their momenta do not exhibit a U-turn pattern."} {"_id": "q_668", "text": "Runs one step of leapfrog integration."} {"_id": "q_669", "text": "Log-joint probability given a state's log-probability and momentum."} {"_id": "q_670", "text": "Returns samples from a Bernoulli distribution."} {"_id": "q_671", "text": "Creates multivariate standard `Normal` distribution.\n\n Args:\n dtype: Type of parameter's event.\n shape: Python `list`-like representing the parameter's event shape.\n name: Python `str` name prepended to any created (or existing)\n `tf.Variable`s.\n trainable: Python `bool` indicating all created `tf.Variable`s should be\n added to the graph collection `GraphKeys.TRAINABLE_VARIABLES`.\n add_variable_fn: `tf.get_variable`-like `callable` used to create (or\n access existing) `tf.Variable`s.\n\n Returns:\n Multivariate standard `Normal` distribution."} {"_id": "q_672", "text": "Deserializes the Keras-serialized function.\n\n (De)serializing Python functions from/to bytecode is unsafe. Therefore we\n also use the function's type as an anonymous function ('lambda') or named\n function in the Python environment ('function'). In the latter case, this lets\n us use the Python scope to obtain the function rather than reload it from\n bytecode. (Note that both cases are brittle!)\n\n Keras-deserialized functions do not perform lexical scoping.
Any modules that\n the function requires must be imported within the function itself.\n\n This serialization mimics the implementation in `tf.keras.layers.Lambda`.\n\n Args:\n serial: Serialized Keras object: typically a dict, string, or bytecode.\n function_type: Python string denoting 'function' or 'lambda'.\n\n Returns:\n function: Function the serialized Keras object represents.\n\n #### Examples\n\n ```python\n serial, function_type = serialize_function(lambda x: x)\n function = deserialize_function(serial, function_type)\n assert function(2.3) == 2.3 # function is identity\n ```"} {"_id": "q_673", "text": "Serializes function for Keras.\n\n (De)serializing Python functions from/to bytecode is unsafe. Therefore we\n return the function's type as an anonymous function ('lambda') or named\n function in the Python environment ('function'). In the latter case, this lets\n us use the Python scope to obtain the function rather than reload it from\n bytecode. (Note that both cases are brittle!)\n\n This serialization mimics the implementation in `tf.keras.layers.Lambda`.\n\n Args:\n func: Python function to serialize.\n\n Returns:\n (serial, function_type): Serialized object, which is a tuple of its\n bytecode (if function is anonymous) or name (if function is named), and its\n function type."} {"_id": "q_674", "text": "Broadcasts `from_structure` to `to_structure`.\n\n This is useful for downstream usage of `zip` or `tf.nest.map_structure`.\n\n If `from_structure` is a singleton, it is tiled to match the structure of\n `to_structure`.
Note that the elements in `from_structure` are not copied if\n this tiling occurs.\n\n Args:\n to_structure: A structure.\n from_structure: A structure.\n\n Returns:\n new_from_structure: Same structure as `to_structure`.\n\n #### Example:\n\n ```python\n a_structure = ['a', 'b', 'c']\n b_structure = broadcast_structure(a_structure, 'd')\n # -> ['d', 'd', 'd']\n c_structure = tf.nest.map_structure(\n lambda a, b: a + b, a_structure, b_structure)\n # -> ['ad', 'bd', 'cd']\n ```"} {"_id": "q_675", "text": "Eagerly converts struct to Tensor, recursing upon failure."} {"_id": "q_676", "text": "Returns `Tensor` attributes related to shape and Python builtins."} {"_id": "q_677", "text": "Creates the mixture of Gaussians prior distribution.\n\n Args:\n latent_size: The dimensionality of the latent representation.\n mixture_components: Number of elements of the mixture.\n\n Returns:\n random_prior: A `tfd.Distribution` instance representing the distribution\n over encodings in the absence of any evidence."} {"_id": "q_678", "text": "Helper utility to make a field of images."} {"_id": "q_679", "text": "Downloads a file."} {"_id": "q_680", "text": "Helper to validate block sizes."} {"_id": "q_681", "text": "Constructs a trainable `tfd.Bernoulli` distribution.\n\n This function creates a Bernoulli distribution parameterized by logits.\n Using default args, this function is mathematically equivalent to:\n\n ```none\n Y = Bernoulli(logits=matmul(W, x) + b)\n\n where,\n W in R^[d, n]\n b in R^d\n ```\n\n #### Examples\n\n This function can be used as a [logistic regression](\n https://en.wikipedia.org/wiki/Logistic_regression) loss.\n\n ```python\n # This example fits a logistic regression loss.\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n # Create fictitious training data.\n dtype = np.float32\n n = 3000 # number of samples\n x_size = 4 # size of single x\n def make_training_data():\n np.random.seed(142)\n x = np.random.randn(n, x_size).astype(dtype)\n w
= np.random.randn(x_size).astype(dtype)\n b = np.random.randn(1).astype(dtype)\n true_logits = np.tensordot(x, w, axes=[[-1], [-1]]) + b\n noise = np.random.logistic(size=n).astype(dtype)\n y = dtype(true_logits + noise > 0.)\n return y, x\n y, x = make_training_data()\n\n # Build TF graph for fitting Bernoulli maximum likelihood estimator.\n bernoulli = tfp.trainable_distributions.bernoulli(x)\n loss = -tf.reduce_mean(bernoulli.log_prob(y))\n train_op = tf.train.AdamOptimizer(learning_rate=2.**-5).minimize(loss)\n mse = tf.reduce_mean(tf.squared_difference(y, bernoulli.mean()))\n init_op = tf.global_variables_initializer()\n\n # Run graph 1000 times.\n num_steps = 1000\n loss_ = np.zeros(num_steps) # Style: `_` to indicate sess.run result.\n mse_ = np.zeros(num_steps)\n with tf.Session() as sess:\n sess.run(init_op)\n for it in xrange(loss_.size):\n _, loss_[it], mse_[it] = sess.run([train_op, loss, mse])\n if it % 200 == 0 or it == loss_.size - 1:\n print(\"iteration:{} loss:{} mse:{}\".format(it, loss_[it], mse_[it]))\n\n # ==> iteration:0 loss:0.635675370693 mse:0.222526371479\n # iteration:200 loss:0.440077394247 mse:0.143687799573\n # iteration:400 loss:0.440077394247 mse:0.143687844276\n # iteration:600 loss:0.440077394247 mse:0.143687844276\n # iteration:800 loss:0.440077424049 mse:0.143687844276\n # iteration:999 loss:0.440077424049 mse:0.143687844276\n ```\n\n Args:\n x: `Tensor` with floating type. 
Must have statically defined rank and\n statically known right-most dimension.\n layer_fn: Python `callable` which takes input `x` and `int` scalar `d` and\n returns a transformation of `x` with shape\n `tf.concat([tf.shape(x)[:-1], [1]], axis=0)`.\n Default value: `tf.layers.dense`.\n name: A `name_scope` name for operations created by this function.\n Default value: `None` (i.e., \"bernoulli\").\n\n Returns:\n bernoulli: An instance of `tfd.Bernoulli`."} {"_id": "q_682", "text": "Constructs a trainable `tfd.Normal` distribution.\n\n This function creates a Normal distribution parameterized by loc and scale.\n Using default args, this function is mathematically equivalent to:\n\n ```none\n Y = Normal(loc=matmul(W, x) + b, scale=1)\n\n where,\n W in R^[d, n]\n b in R^d\n ```\n\n #### Examples\n\n This function can be used as a [linear regression](\n https://en.wikipedia.org/wiki/Linear_regression) loss.\n\n ```python\n # This example fits a linear regression loss.\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n # Create fictitious training data.\n dtype = np.float32\n n = 3000 # number of samples\n x_size = 4 # size of single x\n def make_training_data():\n np.random.seed(142)\n x = np.random.randn(n, x_size).astype(dtype)\n w = np.random.randn(x_size).astype(dtype)\n b = np.random.randn(1).astype(dtype)\n true_mean = np.tensordot(x, w, axes=[[-1], [-1]]) + b\n noise = np.random.randn(n).astype(dtype)\n y = true_mean + noise\n return y, x\n y, x = make_training_data()\n\n # Build TF graph for fitting Normal maximum likelihood estimator.\n normal = tfp.trainable_distributions.normal(x)\n loss = -tf.reduce_mean(normal.log_prob(y))\n train_op = tf.train.AdamOptimizer(learning_rate=2.**-5).minimize(loss)\n mse = tf.reduce_mean(tf.squared_difference(y, normal.mean()))\n init_op = tf.global_variables_initializer()\n\n # Run graph 1000 times.\n num_steps = 1000\n loss_ = np.zeros(num_steps) # Style: `_` to indicate sess.run result.\n mse_ =
np.zeros(num_steps)\n with tf.Session() as sess:\n sess.run(init_op)\n for it in xrange(loss_.size):\n _, loss_[it], mse_[it] = sess.run([train_op, loss, mse])\n if it % 200 == 0 or it == loss_.size - 1:\n print(\"iteration:{} loss:{} mse:{}\".format(it, loss_[it], mse_[it]))\n\n # ==> iteration:0 loss:6.34114170074 mse:10.8444051743\n # iteration:200 loss:1.40146839619 mse:0.965059816837\n # iteration:400 loss:1.40052902699 mse:0.963181257248\n # iteration:600 loss:1.40052902699 mse:0.963181257248\n # iteration:800 loss:1.40052902699 mse:0.963181257248\n # iteration:999 loss:1.40052902699 mse:0.963181257248\n ```\n\n Args:\n x: `Tensor` with floating type. Must have statically defined rank and\n statically known right-most dimension.\n layer_fn: Python `callable` which takes input `x` and `int` scalar `d` and\n returns a transformation of `x` with shape\n `tf.concat([tf.shape(x)[:-1], [1]], axis=0)`.\n Default value: `tf.layers.dense`.\n loc_fn: Python `callable` which transforms the `loc` parameter. Takes a\n (batch of) length-`dims` vectors and returns a `Tensor` of same shape and\n `dtype`.\n Default value: `lambda x: x`.\n scale_fn: Python `callable` or `Tensor`. If a `callable` transforms the\n `scale` parameters; if `Tensor` is the `tfd.Normal` `scale` argument.\n Takes a (batch of) length-`dims` vectors and returns a `Tensor` of same\n size. 
(Taking a `callable` or `Tensor` is how `tf.Variable` initializers\n behave.)\n Default value: `1`.\n name: A `name_scope` name for operations created by this function.\n Default value: `None` (i.e., \"normal\").\n\n Returns:\n normal: An instance of `tfd.Normal`."} {"_id": "q_683", "text": "Constructs a trainable `tfd.Poisson` distribution.\n\n This function creates a Poisson distribution parameterized by log rate.\n Using default args, this function is mathematically equivalent to:\n\n ```none\n Y = Poisson(log_rate=matmul(W, x) + b)\n\n where,\n W in R^[d, n]\n b in R^d\n ```\n\n #### Examples\n\n This can be used as a [Poisson regression](\n https://en.wikipedia.org/wiki/Poisson_regression) loss.\n\n ```python\n # This example fits a poisson regression loss.\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n # Create fictitious training data.\n dtype = np.float32\n n = 3000 # number of samples\n x_size = 4 # size of single x\n def make_training_data():\n np.random.seed(142)\n x = np.random.randn(n, x_size).astype(dtype)\n w = np.random.randn(x_size).astype(dtype)\n b = np.random.randn(1).astype(dtype)\n true_log_rate = np.tensordot(x, w, axes=[[-1], [-1]]) + b\n y = np.random.poisson(lam=np.exp(true_log_rate)).astype(dtype)\n return y, x\n y, x = make_training_data()\n\n # Build TF graph for fitting Poisson maximum likelihood estimator.\n poisson = tfp.trainable_distributions.poisson(x)\n loss = -tf.reduce_mean(poisson.log_prob(y))\n train_op = tf.train.AdamOptimizer(learning_rate=2.**-5).minimize(loss)\n mse = tf.reduce_mean(tf.squared_difference(y, poisson.mean()))\n init_op = tf.global_variables_initializer()\n\n # Run graph 1000 times.\n num_steps = 1000\n loss_ = np.zeros(num_steps) # Style: `_` to indicate sess.run result.\n mse_ = np.zeros(num_steps)\n with tf.Session() as sess:\n sess.run(init_op)\n for it in xrange(loss_.size):\n _, loss_[it], mse_[it] = sess.run([train_op, loss, mse])\n if it % 200 == 0 or it ==
loss_.size - 1:\n print(\"iteration:{} loss:{} mse:{}\".format(it, loss_[it], mse_[it]))\n\n # ==> iteration:0 loss:37.0814208984 mse:6359.41259766\n # iteration:200 loss:1.42010736465 mse:40.7654914856\n # iteration:400 loss:1.39027583599 mse:8.77660560608\n # iteration:600 loss:1.3902695179 mse:8.78443241119\n # iteration:800 loss:1.39026939869 mse:8.78443622589\n # iteration:999 loss:1.39026939869 mse:8.78444766998\n ```\n\n Args:\n x: `Tensor` with floating type. Must have statically defined rank and\n statically known right-most dimension.\n layer_fn: Python `callable` which takes input `x` and `int` scalar `d` and\n returns a transformation of `x` with shape\n `tf.concat([tf.shape(x)[:-1], [1]], axis=0)`.\n Default value: `tf.layers.dense`.\n log_rate_fn: Python `callable` which transforms the `log_rate` parameter.\n Takes a (batch of) length-`dims` vectors and returns a `Tensor` of same\n shape and `dtype`.\n Default value: `lambda x: x`.\n name: A `name_scope` name for operations created by this function.\n Default value: `None` (i.e., \"poisson\").\n\n Returns:\n poisson: An instance of `tfd.Poisson`."} {"_id": "q_684", "text": "Compute diffusion drift at the current location `current_state`.\n\n The drift of the diffusion is computed as\n\n ```none\n 0.5 * `step_size` * volatility_parts * `grads_target_log_prob`\n + `step_size` * `grads_volatility`\n ```\n\n where `volatility_parts` = `volatility_fn(current_state)**2` and\n `grads_volatility` is a gradient of `volatility_parts` at the `current_state`.\n\n Args:\n step_size_parts: Python `list` of `Tensor`s representing the step size for\n Euler-Maruyama method. Must broadcast with the shape of\n `volatility_parts`. Larger step sizes lead to faster progress, but\n too-large step sizes make rejection exponentially more likely.
When\n possible, it's often helpful to match per-variable step sizes to the\n standard deviations of the target distribution in each variable.\n volatility_parts: Python `list` of `Tensor`s representing the value of\n `volatility_fn(*state_parts)`.\n grads_volatility: Python list of `Tensor`s representing the value of the\n gradient of `volatility_parts**2` wrt the state of the chain.\n grads_target_log_prob: Python list of `Tensor`s representing\n gradient of `target_log_prob_fn(*state_parts)` wrt `state_parts`. Must\n have same shape as `volatility_parts`.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'mala_get_drift').\n\n Returns:\n drift_parts: Tensor or Python list of `Tensor`s representing the\n state(s) of the Markov chain(s) at each result step. Has same shape as\n input `current_state_parts`."} {"_id": "q_685", "text": "Helper to `kernel` which computes the log acceptance-correction.\n\n Computes `log_acceptance_correction` as described in `MetropolisHastings`\n class. The proposal density is normal. More specifically,\n\n ```none\n q(proposed_state | current_state) \\sim N(current_state + current_drift,\n step_size * current_volatility**2)\n\n q(current_state | proposed_state) \\sim N(proposed_state + proposed_drift,\n step_size * proposed_volatility**2)\n ```\n\n The `log_acceptance_correction` is then\n\n ```none\n log_acceptance_correction = log q(current_state | proposed_state)\n - log q(proposed_state | current_state)\n ```\n\n Args:\n current_state_parts: Python `list` of `Tensor`s representing the value(s) of\n the current state of the chain.\n proposed_state_parts: Python `list` of `Tensor`s representing the value(s)\n of the proposed state of the chain. Must broadcast with the shape of\n `current_state_parts`.\n current_volatility_parts: Python `list` of `Tensor`s representing the value\n of `volatility_fn(*current_state_parts)`.
Must broadcast with the\n shape of `current_state_parts`.\n proposed_volatility_parts: Python `list` of `Tensor`s representing the value\n of `volatility_fn(*proposed_state_parts)`. Must broadcast with the\n shape of `current_state_parts`.\n current_drift_parts: Python `list` of `Tensor`s representing value of the\n drift `_get_drift(*current_state_parts, ..)`. Must broadcast with the\n shape of `current_state_parts`.\n proposed_drift_parts: Python `list` of `Tensor`s representing value of the\n drift `_get_drift(*proposed_state_parts, ..)`. Must broadcast with the\n shape of `current_state_parts`.\n step_size_parts: Python `list` of `Tensor`s representing the step size for\n Euler-Maruyama method. Must broadcast with the shape of\n `current_state_parts`.\n independent_chain_ndims: Scalar `int` `Tensor` representing the number of\n leftmost `Tensor` dimensions which index independent chains.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'compute_log_acceptance_correction').\n\n Returns:\n log_acceptance_correction: `Tensor` representing the `log`\n acceptance-correction.
(See docstring for mathematical definition.)"} {"_id": "q_686", "text": "Helper which computes `volatility_fn` results and grads, if needed."} {"_id": "q_687", "text": "Helper to broadcast `volatility_parts` to the shape of `state_parts`."} {"_id": "q_688", "text": "Calls `fn`, appropriately reshaping its input `x` and output."} {"_id": "q_689", "text": "Calls `fn` and appropriately reshapes its output."} {"_id": "q_690", "text": "The binomial cumulative distribution function.\n\n Args:\n k: floating point `Tensor`.\n n: floating point `Tensor`.\n p: floating point `Tensor`.\n\n Returns:\n `sum_{j=0}^k p^j (1 - p)^(n - j)`."} {"_id": "q_691", "text": "Executes `model`, creating both samples and distributions."} {"_id": "q_692", "text": "Latent Dirichlet Allocation in terms of its generative process.\n\n The model posits a distribution over bags of words and is parameterized by\n a concentration and the topic-word probabilities. It collapses per-word\n topic assignments.\n\n Args:\n concentration: A Tensor of shape [1, num_topics], which parameterizes the\n Dirichlet prior over topics.\n topics_words: A Tensor of shape [num_topics, num_words], where each row\n (topic) denotes the probability of each word being in that topic.\n\n Returns:\n bag_of_words: A random variable capturing a sample from the model, of shape\n [1, num_words]. 
It represents one generated document as a bag of words."} {"_id": "q_693", "text": "20 newsgroups as a tf.data.Dataset."} {"_id": "q_694", "text": "Builds fake data for unit testing."} {"_id": "q_695", "text": "Builds iterators for train and evaluation data.\n\n Each object is represented as a bag-of-words vector.\n\n Arguments:\n data_dir: Folder in which to store the data.\n batch_size: Batch size for both train and evaluation.\n\n Returns:\n train_input_fn: A function that returns an iterator over the training data.\n eval_input_fn: A function that returns an iterator over the evaluation data.\n vocabulary: A mapping of word's integer index to the corresponding string."} {"_id": "q_696", "text": "Add control dependencies to the commitment loss to update the codebook.\n\n Args:\n vector_quantizer: An instance of the VectorQuantizer class.\n one_hot_assignments: The one-hot vectors corresponding to the matched\n codebook entry for each code in the batch.\n codes: A `float`-like `Tensor` containing the latent vectors to be compared\n to the codebook.\n commitment_loss: The commitment loss from comparing the encoder outputs to\n their neighboring codebook entries.\n decay: Decay factor for exponential moving average.\n\n Returns:\n commitment_loss: Commitment loss with control dependencies."} {"_id": "q_697", "text": "Helper method to save a grid of images to a PNG file.\n\n Args:\n x: A numpy array of shape [n_images, height, width].\n fname: The filename to write to (including extension)."} {"_id": "q_698", "text": "Returns a `np.dtype` based on this `dtype`."} {"_id": "q_699", "text": "Returns whether this is a boolean data type."} {"_id": "q_700", "text": "Returns whether this is a complex floating point type."} {"_id": "q_701", "text": "Returns the string name for this `dtype`."} {"_id": "q_702", "text": "Validate and return float type based on `tensors` and `dtype`.\n\n For ops such as matrix multiplication, inputs and weights must be of the\n same float
type. This function validates that all `tensors` are the same type,\n validates that type is `dtype` (if supplied), and returns the type. Type must\n be a floating point type. If neither `tensors` nor `dtype` is supplied,\n the function will return `dtypes.float32`.\n\n Args:\n tensors: Tensors of input values. Can include `None` elements, which will\n be ignored.\n dtype: Expected type.\n\n Returns:\n Validated type.\n\n Raises:\n ValueError: if neither `tensors` nor `dtype` is supplied, or result is not\n float, or the common type of the inputs is not a floating point type."} {"_id": "q_703", "text": "Creates the condition function pair for a reflection to be accepted."} {"_id": "q_704", "text": "Creates the condition function pair for an expansion."} {"_id": "q_705", "text": "Creates the condition function pair for an outside contraction."} {"_id": "q_706", "text": "Returns True if the simplex has converged.\n\n If the simplex size is smaller than the `position_tolerance`, or the variation\n of the function value over the vertices of the simplex is smaller than the\n `func_tolerance`, return True; else return False.\n\n Args:\n simplex: `Tensor` of real dtype. The simplex to test for convergence. For\n more details, see the docstring for `initial_simplex` argument\n of `minimize`.\n best_vertex: `Tensor` of real dtype and rank one less than `simplex`. The\n vertex with the best (i.e. smallest) objective value.\n best_objective: Scalar `Tensor` of real dtype. The best (i.e. smallest)\n value of the objective function at a vertex.\n worst_objective: Scalar `Tensor` of same dtype as `best_objective`. The\n worst (i.e. largest) value of the objective function at a vertex.\n func_tolerance: Scalar positive `Tensor`. The tolerance for the variation\n of the objective function value over the simplex. If the variation over\n the simplex vertices is below this threshold, convergence is True.\n position_tolerance: Scalar positive `Tensor`.
The algorithm stops if the\n lengths (under the supremum norm) of edges connecting to the best vertex\n are below this threshold.\n\n Returns:\n has_converged: A scalar boolean `Tensor` indicating whether the algorithm\n is deemed to have converged."} {"_id": "q_707", "text": "Computes the initial simplex and the objective values at the simplex.\n\n Args:\n objective_function: A Python callable that accepts a point as a\n real `Tensor` and returns a `Tensor` of real dtype containing\n the value of the function at that point. The function\n to be evaluated at the simplex. If `batch_evaluate_objective` is `True`,\n the callable may be evaluated on a `Tensor` of shape `[n+1] + s`\n where `n` is the dimension of the problem and `s` is the shape of a\n single point in the domain (so `n` is the size of a `Tensor`\n representing a single point).\n In this case, the expected return value is a `Tensor` of shape `[n+1]`.\n initial_simplex: None or `Tensor` of real dtype. The initial simplex to\n start the search. If supplied, should be a `Tensor` of shape `[n+1] + s`\n where `n` is the dimension of the problem and `s` is the shape of a\n single point in the domain. Each row (i.e. the `Tensor` with a given\n value of the first index) is interpreted as a vertex of a simplex and\n hence the rows must be affinely independent. If not supplied, an axes\n aligned simplex is constructed using the `initial_vertex` and\n `step_sizes`. Exactly one of `initial_simplex` and\n `initial_vertex` must be supplied.\n initial_vertex: None or `Tensor` of real dtype and any shape that can\n be consumed by the `objective_function`. A single point in the domain that\n will be used to construct an axes aligned initial simplex.\n step_sizes: None or `Tensor` of real dtype and shape broadcasting\n compatible with `initial_vertex`. Supplies the simplex scale along each\n axis. Only used if `initial_simplex` is not supplied.
See the docstring\n of `minimize` for more details.\n objective_at_initial_simplex: None or rank `1` `Tensor` of real dtype.\n The value of the objective function at the initial simplex.\n May be supplied only if `initial_simplex` is\n supplied. If not supplied, it will be computed.\n objective_at_initial_vertex: None or scalar `Tensor` of real dtype. The\n value of the objective function at the initial vertex. May be supplied\n only if the `initial_vertex` is also supplied.\n batch_evaluate_objective: Python `bool`. If True, the objective function\n will be evaluated on all the vertices of the simplex packed into a\n single tensor. If False, the objective will be mapped across each\n vertex separately.\n\n Returns:\n prepared_args: A tuple containing the following elements:\n dimension: Scalar `Tensor` of `int32` dtype. The dimension of the problem\n as inferred from the supplied arguments.\n num_vertices: Scalar `Tensor` of `int32` dtype. The number of vertices\n in the simplex.\n simplex: A `Tensor` of same dtype as `initial_simplex`\n (or `initial_vertex`). The first component of the shape of the\n `Tensor` is `num_vertices` and each element represents a vertex of\n the simplex.\n objective_at_simplex: A `Tensor` of same dtype as the dtype of the\n return value of objective_function. The shape is a vector of size\n `num_vertices`. The objective function evaluated at the simplex.\n num_evaluations: An `int32` scalar `Tensor`. The number of points on\n which the objective function was evaluated.\n\n Raises:\n ValueError: If any of the following conditions hold\n 1. If none or more than one of `initial_simplex` and `initial_vertex` are\n supplied.\n 2. 
If `initial_simplex` and `step_sizes` are both specified."} {"_id": "q_708", "text": "Evaluates the objective function at the specified initial simplex."} {"_id": "q_709", "text": "Constructs a standard axes aligned simplex."} {"_id": "q_710", "text": "Evaluates the objective function on a batch of points.\n\n If `batch_evaluate_objective` is True, returns\n `objective function(arg_batch)` else it maps the `objective_function`\n across the `arg_batch`.\n\n Args:\n objective_function: A Python callable that accepts a single `Tensor` of\n rank 'R > 1' and any shape 's' and returns a scalar `Tensor` of real dtype\n containing the value of the function at that point. If\n `batch_evaluate_objective` is `True`, the callable may be evaluated on a\n `Tensor` of shape `[batch_size] + s ` where `batch_size` is the\n size of the batch of args. In this case, the expected return value is a\n `Tensor` of shape `[batch_size]`.\n arg_batch: A `Tensor` of real dtype. The batch of arguments at which to\n evaluate the `objective_function`. If `batch_evaluate_objective` is False,\n `arg_batch` will be unpacked along the zeroth axis and the\n `objective_function` will be applied to each element.\n batch_evaluate_objective: `bool`. 
Whether the `objective_function` can\n evaluate a batch of arguments at once.\n\n Returns:\n A tuple containing:\n objective_values: A `Tensor` of real dtype and shape `[batch_size]`.\n The value of the objective function evaluated at the supplied\n `arg_batch`.\n num_evaluations: An `int32` scalar `Tensor` containing the number of\n points on which the objective function was evaluated (i.e. `batch_size`)."} {"_id": "q_711", "text": "Save a PNG plot visualizing posterior uncertainty on heldout data.\n\n Args:\n input_vals: A `float`-like Numpy `array` of shape\n `[num_heldout] + IMAGE_SHAPE`, containing heldout input images.\n probs: A `float`-like Numpy array of shape `[num_monte_carlo,\n num_heldout, num_classes]` containing Monte Carlo samples of\n class probabilities for each heldout sample.\n fname: Python `str` filename to save the plot to.\n n: Python `int` number of datapoints to visualize.\n title: Python `str` title for the plot."} {"_id": "q_712", "text": "Instantiates an initializer from a configuration dictionary."} {"_id": "q_713", "text": "Compute the log of the exponentially weighted moving mean of the exp.\n\n If `log_value` is a draw from a stationary random variable, this function\n approximates `log(E[exp(log_value)])`, i.e., a weighted log-sum-exp. More\n precisely, a `tf.Variable`, `log_mean_exp_var`, is updated by `log_value`\n using the following identity:\n\n ```none\n log_mean_exp_var\n = log(decay exp(log_mean_exp_var) + (1 - decay) exp(log_value))\n = log(exp(log_mean_exp_var + log(decay)) + exp(log_value + log1p(-decay)))\n = log_mean_exp_var\n + log( exp(log_mean_exp_var - log_mean_exp_var + log(decay))\n + exp(log_value - log_mean_exp_var + log1p(-decay)))\n = log_mean_exp_var\n + log_sum_exp([log(decay), log_value - log_mean_exp_var + log1p(-decay)]).\n ```\n\n In addition to numerical stability, this formulation is advantageous because\n `log_mean_exp_var` can be updated in a lock-free manner, i.e., using\n `assign_add`. 
(Note: the updates are not thread-safe; it's just that the\n update to the tf.Variable is presumed efficient due to being lock-free.)\n\n Args:\n log_mean_exp_var: `float`-like `Variable` representing the log of the\n exponentially weighted moving mean of the exp. Same shape as `log_value`.\n log_value: `float`-like `Tensor` representing a new (streaming) observation.\n Same shape as `log_mean_exp_var`.\n decay: A `float`-like `Tensor`. The moving mean decay. Typically close to\n `1.`, e.g., `0.999`.\n name: Optional name of the returned operation.\n\n Returns:\n log_mean_exp_var: A reference to the input 'Variable' tensor with the\n `log_value`-updated log of the exponentially weighted moving mean of exp.\n\n Raises:\n TypeError: if `log_mean_exp_var` does not have float type `dtype`.\n TypeError: if `log_mean_exp_var`, `log_value`, `decay` have different\n `base_dtype`."} {"_id": "q_714", "text": "Ensures non-scalar input has at least one column.\n\n Example:\n If `x = [1, 2, 3]` then the output is `[[1], [2], [3]]`.\n\n If `x = [[1, 2, 3], [4, 5, 6]]` then the output is unchanged.\n\n If `x = 1` then the output is unchanged.\n\n Args:\n x: `Tensor`.\n\n Returns:\n columnar_x: `Tensor` with at least two dimensions."} {"_id": "q_715", "text": "Generates `Tensor` consisting of `-1` or `+1`, chosen uniformly at random.\n\n For more details, see [Rademacher distribution](\n https://en.wikipedia.org/wiki/Rademacher_distribution).\n\n Args:\n shape: Vector-shaped, `int` `Tensor` representing shape of output.\n dtype: (Optional) TF `dtype` representing `dtype` of output.\n seed: (Optional) Python integer to seed the random number generator.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'random_rademacher').\n\n Returns:\n rademacher: `Tensor` with specified `shape` and `dtype` consisting of `-1`\n or `+1` chosen uniformly-at-random."} {"_id": "q_716", "text": "Generates `Tensor` of positive reals drawn from a Rayleigh 
distribution.\n\n The probability density function of a Rayleigh distribution with `scale`\n parameter is given by:\n\n ```none\n f(x) = x scale**-2 exp(-x**2 0.5 scale**-2)\n ```\n\n For more details, see [Rayleigh distribution](\n https://en.wikipedia.org/wiki/Rayleigh_distribution)\n\n Args:\n shape: Vector-shaped, `int` `Tensor` representing shape of output.\n scale: (Optional) Positive `float` `Tensor` representing `Rayleigh` scale.\n Default value: `None` (i.e., `scale = 1.`).\n dtype: (Optional) TF `dtype` representing `dtype` of output.\n Default value: `tf.float32`.\n seed: (Optional) Python integer to seed the random number generator.\n Default value: `None` (i.e., no seed).\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., 'random_rayleigh').\n\n Returns:\n rayleigh: `Tensor` with specified `shape` and `dtype` consisting of positive\n real values drawn from a Rayleigh distribution with specified `scale`."} {"_id": "q_717", "text": "Convenience function which chooses the condition based on the predicate."} {"_id": "q_718", "text": "Finish computation of log_prob on one element of the inverse image."} {"_id": "q_719", "text": "Helper which rolls left event_dims left or right event_dims right."} {"_id": "q_720", "text": "Inverse of tf.nn.batch_normalization.\n\n Args:\n x: Input `Tensor` of arbitrary dimensionality.\n mean: A mean `Tensor`.\n variance: A variance `Tensor`.\n offset: An offset `Tensor`, often denoted `beta` in equations, or\n None. If present, will be added to the normalized tensor.\n scale: A scale `Tensor`, often denoted `gamma` in equations, or\n `None`. 
If present, the scale is applied to the normalized tensor.\n variance_epsilon: A small `float` added to the minibatch `variance` to\n prevent dividing by zero.\n name: A name for this operation (optional).\n\n Returns:\n batch_unnormalized: The de-normalized, de-scaled, de-offset `Tensor`."} {"_id": "q_721", "text": "Check for valid BatchNormalization layer.\n\n Args:\n layer: Instance of `tf.layers.BatchNormalization`.\n Raises:\n ValueError: If batchnorm_layer argument is not an instance of\n `tf.layers.BatchNormalization`, or if `batchnorm_layer.renorm=True` or\n if `batchnorm_layer.virtual_batch_size` is specified."} {"_id": "q_722", "text": "Applies a single slicing step to `dist`, returning a new instance."} {"_id": "q_723", "text": "Runs multiple Fisher scoring steps.\n\n Args:\n model_matrix: (Batch of) `float`-like, matrix-shaped `Tensor` where each row\n represents a sample's features.\n response: (Batch of) vector-shaped `Tensor` where each element represents a\n sample's observed response (to the corresponding row of features). Must\n have same `dtype` as `model_matrix`.\n model: `tfp.glm.ExponentialFamily`-like instance which implicitly\n characterizes a negative log-likelihood loss by specifying the\n distribution's `mean`, `gradient_mean`, and `variance`.\n model_coefficients_start: Optional (batch of) vector-shaped `Tensor`\n representing the initial model coefficients, one for each column in\n `model_matrix`. 
Must have same `dtype` as `model_matrix`.\n Default value: Zeros.\n predicted_linear_response_start: Optional `Tensor` with `shape`, `dtype`\n matching `response`; represents `offset` shifted initial linear\n predictions based on `model_coefficients_start`.\n Default value: `offset` if `model_coefficients is None`, and\n `tf.linalg.matvec(model_matrix, model_coefficients_start) + offset`\n otherwise.\n l2_regularizer: Optional scalar `Tensor` representing L2 regularization\n penalty, i.e.,\n `loss(w) = sum{-log p(y[i]|x[i],w) : i=1..n} + l2_regularizer ||w||_2^2`.\n Default value: `None` (i.e., no L2 regularization).\n dispersion: Optional (batch of) `Tensor` representing `response` dispersion,\n i.e., as in, `p(y|theta) := exp((y theta - A(theta)) / dispersion)`.\n Must broadcast with rows of `model_matrix`.\n Default value: `None` (i.e., \"no dispersion\").\n offset: Optional `Tensor` representing constant shift applied to\n `predicted_linear_response`. Must broadcast to `response`.\n Default value: `None` (i.e., `tf.zeros_like(response)`).\n convergence_criteria_fn: Python `callable` taking:\n `is_converged_previous`, `iter_`, `model_coefficients_previous`,\n `predicted_linear_response_previous`, `model_coefficients_next`,\n `predicted_linear_response_next`, `response`, `model`, `dispersion` and\n returning a `bool` `Tensor` indicating that Fisher scoring has converged.\n See `convergence_criteria_small_relative_norm_weights_change` as an\n example function.\n Default value: `None` (i.e.,\n `convergence_criteria_small_relative_norm_weights_change`).\n learning_rate: Optional (batch of) scalar `Tensor` used to dampen iterative\n progress. 
Typically only needed if optimization diverges, should be no\n larger than `1` and typically very close to `1`.\n Default value: `None` (i.e., `1`).\n fast_unsafe_numerics: Optional Python `bool` indicating if faster, less\n numerically accurate methods can be employed for computing the weighted\n least-squares solution.\n Default value: `True` (i.e., \"fast but possibly diminished accuracy\").\n maximum_iterations: Optional maximum number of iterations of Fisher scoring\n to run; \"and-ed\" with result of `convergence_criteria_fn`.\n Default value: `None` (i.e., `infinity`).\n name: Python `str` used as name prefix to ops created by this function.\n Default value: `\"fit\"`.\n\n Returns:\n model_coefficients: (Batch of) vector-shaped `Tensor`; represents the\n fitted model coefficients, one for each column in `model_matrix`.\n predicted_linear_response: `response`-shaped `Tensor` representing linear\n predictions based on new `model_coefficients`, i.e.,\n `tf.linalg.matvec(model_matrix, model_coefficients) + offset`.\n is_converged: `bool` `Tensor` indicating that the returned\n `model_coefficients` met the `convergence_criteria_fn` criteria within the\n `maximum_iterations` limit.\n iter_: `int32` `Tensor` indicating the number of iterations taken.\n\n #### Example\n\n ```python\n from __future__ import print_function\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n def make_dataset(n, d, link, scale=1., dtype=np.float32):\n model_coefficients = tfd.Uniform(\n low=np.array(-1, dtype),\n high=np.array(1, dtype)).sample(d, seed=42)\n radius = np.sqrt(2.)\n model_coefficients *= radius / tf.linalg.norm(model_coefficients)\n model_matrix = tfd.Normal(\n loc=np.array(0, dtype),\n scale=np.array(1, dtype)).sample([n, d], seed=43)\n scale = tf.convert_to_tensor(scale, dtype)\n linear_response = tf.tensordot(\n model_matrix, model_coefficients, axes=[[1], [0]])\n if link == 'linear':\n response = 
tfd.Normal(loc=linear_response, scale=scale).sample(seed=44)\n elif link == 'probit':\n response = tf.cast(\n tfd.Normal(loc=linear_response, scale=scale).sample(seed=44) > 0,\n dtype)\n elif link == 'logit':\n response = tfd.Bernoulli(logits=linear_response).sample(seed=44)\n else:\n raise ValueError('unrecognized true link: {}'.format(link))\n return model_matrix, response, model_coefficients\n\n X, Y, w_true = make_dataset(n=int(1e6), d=100, link='probit')\n\n w, linear_response, is_converged, num_iter = tfp.glm.fit(\n model_matrix=X,\n response=Y,\n model=tfp.glm.BernoulliNormalCDF())\n log_likelihood = tfp.glm.BernoulliNormalCDF().log_prob(Y, linear_response)\n\n with tf.Session() as sess:\n [w_, linear_response_, is_converged_, num_iter_, Y_, w_true_,\n log_likelihood_] = sess.run([\n w, linear_response, is_converged, num_iter, Y, w_true,\n log_likelihood])\n\n print('is_converged: ', is_converged_)\n print(' num_iter: ', num_iter_)\n print(' accuracy: ', np.mean((linear_response_ > 0.) == Y_))\n print(' deviance: ', 2. * np.mean(log_likelihood_))\n print('||w0-w1||_2 / (1+||w0||_2): ', (np.linalg.norm(w_true_ - w_, ord=2) /\n (1. + np.linalg.norm(w_true_, ord=2))))\n\n # ==>\n # is_converged: True\n # num_iter: 6\n # accuracy: 0.804382\n # deviance: -0.820746600628\n # ||w0-w1||_2 / (1+||w0||_2): 0.00619245105309\n ```"} {"_id": "q_724", "text": "Returns Python `callable` which indicates fitting procedure has converged.\n\n Writing old, new `model_coefficients` as `w0`, `w1`, this function\n defines convergence as,\n\n ```python\n relative_euclidean_norm = (tf.norm(w0 - w1, ord=2, axis=-1) /\n (1. 
+ tf.norm(w0, ord=2, axis=-1)))\n reduce_all(relative_euclidean_norm < tolerance)\n ```\n\n where `tf.norm(x, ord=2)` denotes the [Euclidean norm](\n https://en.wikipedia.org/wiki/Norm_(mathematics)#Euclidean_norm) of `x`.\n\n Args:\n tolerance: `float`-like `Tensor` indicating convergence, i.e., when the\n max relative Euclidean norm of the weights difference < `tolerance`.\n Default value: `1e-5`.\n norm_order: Order of the norm. Default value: `2` (i.e., \"Euclidean norm\".)\n\n Returns:\n convergence_criteria_fn: Python `callable` which returns `bool` `Tensor`\n indicating the fitting procedure has converged. (See inner function\n specification for argument signature.)"} {"_id": "q_725", "text": "Helper to `fit` which sanitizes input args.\n\n Args:\n model_matrix: (Batch of) `float`-like, matrix-shaped `Tensor` where each row\n represents a sample's features.\n response: (Batch of) vector-shaped `Tensor` where each element represents a\n sample's observed response (to the corresponding row of features). Must\n have same `dtype` as `model_matrix`.\n model_coefficients: Optional (batch of) vector-shaped `Tensor` representing\n the model coefficients, one for each column in `model_matrix`. 
Must have\n same `dtype` as `model_matrix`.\n Default value: `tf.zeros(tf.shape(model_matrix)[-1], model_matrix.dtype)`.\n predicted_linear_response: Optional `Tensor` with `shape`, `dtype` matching\n `response`; represents `offset` shifted initial linear predictions based\n on current `model_coefficients`.\n Default value: `offset` if `model_coefficients is None`, and\n `tf.linalg.matvec(model_matrix, model_coefficients_start) + offset`\n otherwise.\n offset: Optional `Tensor` with `shape`, `dtype` matching `response`;\n represents constant shift applied to `predicted_linear_response`.\n Default value: `None` (i.e., `tf.zeros_like(response)`).\n name: Python `str` used as name prefix to ops created by this function.\n Default value: `\"prepare_args\"`.\n\n Returns:\n model_matrix: A `Tensor` with `shape`, `dtype` and values of the\n `model_matrix` argument.\n response: A `Tensor` with `shape`, `dtype` and values of the\n `response` argument.\n model_coefficients_start: A `Tensor` with `shape`, `dtype` and\n values of the `model_coefficients_start` argument if specified.\n A (batch of) vector-shaped `Tensors` with `dtype` matching `model_matrix`\n containing the default starting point otherwise.\n predicted_linear_response: A `Tensor` with `shape`, `dtype` and\n values of the `predicted_linear_response` argument if specified.\n A `Tensor` with `shape`, `dtype` matching `response` containing the\n default value otherwise.\n offset: A `Tensor` with `shape`, `dtype` and values of the `offset` argument\n if specified or `None` otherwise."} {"_id": "q_726", "text": "Helper function for statically evaluating predicates in `cond`."} {"_id": "q_727", "text": "Computes `rank` given a `Tensor`'s `shape`."} {"_id": "q_728", "text": "Like tf.case, except attempts to statically evaluate predicates.\n\n If any predicate in `pred_fn_pairs` is a bool or has a constant value, the\n associated callable will be called or omitted depending on its value.\n Otherwise this functions like 
tf.case.\n\n Args:\n pred_fn_pairs: Dict or list of pairs of a boolean scalar tensor and a\n callable which returns a list of tensors.\n default: Optional callable that returns a list of tensors.\n exclusive: True iff at most one predicate is allowed to evaluate to `True`.\n name: A name for this operation (optional).\n\n Returns:\n The tensors returned by the first pair whose predicate evaluated to True, or\n those returned by `default` if none does.\n\n Raises:\n TypeError: If `pred_fn_pairs` is not a list/dictionary.\n TypeError: If `pred_fn_pairs` is a list but does not contain 2-tuples.\n TypeError: If `fns[i]` is not callable for any i, or `default` is not\n callable."} {"_id": "q_729", "text": "Helper function to standardize op scope."} {"_id": "q_730", "text": "Creates a LinearOperator representing a diagonal matrix.\n\n Args:\n loc: Floating-point `Tensor`. This is used for inferring shape in the case\n where only `scale_identity_multiplier` is set.\n scale_diag: Floating-point `Tensor` representing the diagonal matrix.\n `scale_diag` has shape [N1, N2, ... k], which represents a k x k diagonal\n matrix. When `None` no diagonal term is added to the LinearOperator.\n scale_identity_multiplier: floating point rank 0 `Tensor` representing a\n scaling done to the identity matrix. When `scale_identity_multiplier =\n scale_diag = scale_tril = None` then `scale += IdentityMatrix`. Otherwise\n no scaled-identity-matrix is added to `scale`.\n shape_hint: scalar integer `Tensor` representing a hint at the dimension of\n the identity matrix when only `scale_identity_multiplier` is set.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness.\n assert_positive: Python `bool` indicating whether LinearOperator should be\n checked for being positive definite.\n name: Python `str` name given to ops managed by this object.\n dtype: TF `DType` to prefer when converting args to `Tensor`s. 
Else, we fall\n back to a compatible dtype across all of `loc`, `scale_diag`, and\n `scale_identity_multiplier`.\n\n Returns:\n `LinearOperator` representing a diagonal matrix.\n\n Raises:\n ValueError: If only `scale_identity_multiplier` is set and `loc` and\n `shape_hint` are both None."} {"_id": "q_731", "text": "Infer distribution batch and event shapes from a location and scale.\n\n Location and scale family distributions determine their batch/event shape by\n broadcasting the `loc` and `scale` args. This helper does that broadcast,\n statically if possible.\n\n Batch shape broadcasts as per the normal rules.\n We allow the `loc` event shape to broadcast up to that of `scale`. We do not\n allow `scale`'s event shape to change. Therefore, the last dimension of `loc`\n must either be size `1`, or the same as `scale.range_dimension`.\n\n See `MultivariateNormalLinearOperator` for a usage example.\n\n Args:\n loc: `Tensor` (already converted to tensor) or `None`. If `None`, or\n `rank(loc)==0`, both batch and event shape are determined by `scale`.\n scale: A `LinearOperator` instance.\n name: A string name to prepend to created ops.\n\n Returns:\n batch_shape: `TensorShape` (if broadcast is done statically), or `Tensor`.\n event_shape: `TensorShape` (if broadcast is done statically), or `Tensor`.\n\n Raises:\n ValueError: If the last dimension of `loc` is determined statically to be\n different than the range of `scale`."} {"_id": "q_732", "text": "Returns `True` if `scale` is a `LinearOperator` that is known to be diag.\n\n Args:\n scale: `LinearOperator` instance.\n\n Returns:\n Python `bool`.\n\n Raises:\n TypeError: If `scale` is not a `LinearOperator`."} {"_id": "q_733", "text": "Convenience function that chooses one of two values based on the predicate.\n\n This utility is equivalent to a version of `tf.where` that accepts only a\n scalar predicate and computes its result statically when possible. 
It may also\n be used in place of `tf.cond` when both branches yield a `Tensor` of the same\n shape; the operational difference is that `tf.cond` uses control flow to\n evaluate only the branch that's needed, while `tf.where` (and thus\n this method) may evaluate both branches before the predicate's truth is known.\n This means that `tf.cond` is preferred when one of the branches is expensive\n to evaluate (like performing a large matmul), while this method is preferred\n when both branches are cheap, e.g., constants. In the latter case, we expect\n this method to be substantially faster than `tf.cond` on GPU and to give\n similar performance on CPU.\n\n Args:\n pred: Scalar `bool` `Tensor` predicate.\n true_value: `Tensor` to return if `pred` is `True`.\n false_value: `Tensor` to return if `pred` is `False`. Must have the same\n shape as `true_value`.\n name: Python `str` name given to ops managed by this object.\n\n Returns:\n result: a `Tensor` (or `Tensor`-convertible Python value) equal to\n `true_value` if `pred` evaluates to `True` and `false_value` otherwise.\n If the condition can be evaluated statically, the result returned is one\n of the input Python values, with no graph side effects."} {"_id": "q_734", "text": "Move a single tensor dimension within its shape.\n\n This is a special case of `tf.transpose()`, which applies\n arbitrary permutations to tensor dimensions.\n\n Args:\n x: Tensor of rank `ndims`.\n source_idx: Integer index into `x.shape` (negative indexing is supported).\n dest_idx: Integer index into `x.shape` (negative indexing is supported).\n\n Returns:\n x_perm: Tensor of rank `ndims`, in which the dimension at original\n index `source_idx` has been moved to new index `dest_idx`, with\n all other dimensions retained in their original order.\n\n Example:\n\n ```python\n x = tf.placeholder(shape=[200, 30, 4, 1, 6])\n x_perm = _move_dimension(x, 1, 1) # no-op\n x_perm = _move_dimension(x, 0, 3) # result shape [30, 4, 1, 200, 6]\n x_perm = 
_move_dimension(x, 0, -2) # equivalent to previous\n x_perm = _move_dimension(x, 4, 2) # result shape [200, 30, 6, 4, 1]\n ```"} {"_id": "q_735", "text": "Assert x is a non-negative tensor, and optionally of integers."} {"_id": "q_736", "text": "Helper returning True if dtype is known to be unsigned."} {"_id": "q_737", "text": "Helper returning True if dtype is known to be signed."} {"_id": "q_738", "text": "Helper returning the largest integer exactly representable by dtype."} {"_id": "q_739", "text": "Helper returning the smallest integer exactly representable by dtype."} {"_id": "q_740", "text": "Circularly moves dims left or right.\n\n Effectively identical to:\n\n ```python\n numpy.transpose(x, numpy.roll(numpy.arange(len(x.shape)), shift))\n ```\n\n When `validate_args=True` additional graph-runtime checks are\n performed. These checks entail moving data from GPU to CPU.\n\n Example:\n\n ```python\n x = tf.random_normal([1, 2, 3, 4]) # Tensor of shape [1, 2, 3, 4].\n rotate_transpose(x, -1).shape == [2, 3, 4, 1]\n rotate_transpose(x, -2).shape == [3, 4, 1, 2]\n rotate_transpose(x, 1).shape == [4, 1, 2, 3]\n rotate_transpose(x, 2).shape == [3, 4, 1, 2]\n rotate_transpose(x, 7).shape == rotate_transpose(x, 3).shape # [2, 3, 4, 1]\n rotate_transpose(x, -7).shape == rotate_transpose(x, -3).shape # [4, 1, 2, 3]\n ```\n\n Args:\n x: `Tensor`.\n shift: `Tensor`. Number of dimensions to transpose left (shift<0) or\n transpose right (shift>0).\n name: Python `str`. The name to give this op.\n\n Returns:\n rotated_x: Input `Tensor` with dimensions circularly rotated by shift.\n\n Raises:\n TypeError: if shift is not integer type."} {"_id": "q_741", "text": "Picks possibly different length row `Tensor`s based on condition.\n\n Value `Tensor`s should have exactly one dimension.\n\n If `cond` is a python Boolean or `tf.constant` then either `true_vector` or\n `false_vector` is immediately returned. 
I.e., no graph nodes are created and\n no validation happens.\n\n Args:\n cond: `Tensor`. Must have `dtype=tf.bool` and be scalar.\n true_vector: `Tensor` of one dimension. Returned when cond is `True`.\n false_vector: `Tensor` of one dimension. Returned when cond is `False`.\n name: Python `str`. The name to give this op.\n Example: ```python pick_vector(tf.less(0, 5), tf.range(10, 12), tf.range(15,\n 18)) # [10, 11] pick_vector(tf.less(5, 0), tf.range(10, 12), tf.range(15,\n 18)) # [15, 16, 17] ```\n\n Returns:\n true_or_false_vector: `Tensor`.\n\n Raises:\n TypeError: if `cond.dtype != tf.bool`\n TypeError: if `cond` is not a constant and\n `true_vector.dtype != false_vector.dtype`"} {"_id": "q_742", "text": "Generate a new seed, from the given seed and salt."} {"_id": "q_743", "text": "Creates a matrix with values set above, below, and on the diagonal.\n\n Example:\n\n ```python\n tridiag(below=[1., 2., 3.],\n diag=[4., 5., 6., 7.],\n above=[8., 9., 10.])\n # ==> array([[ 4., 8., 0., 0.],\n # [ 1., 5., 9., 0.],\n # [ 0., 2., 6., 10.],\n # [ 0., 0., 3., 7.]], dtype=float32)\n ```\n\n Warning: This Op is intended for convenience, not efficiency.\n\n Args:\n below: `Tensor` of shape `[B1, ..., Bb, d-1]` corresponding to the below\n diagonal part. `None` is logically equivalent to `below = 0`.\n diag: `Tensor` of shape `[B1, ..., Bb, d]` corresponding to the diagonal\n part. `None` is logically equivalent to `diag = 0`.\n above: `Tensor` of shape `[B1, ..., Bb, d-1]` corresponding to the above\n diagonal part. `None` is logically equivalent to `above = 0`.\n name: Python `str`. 
The name to give this op.\n\n Returns:\n tridiag: `Tensor` with values set above, below and on the diagonal.\n\n Raises:\n ValueError: if all inputs are `None`."} {"_id": "q_744", "text": "Validates quadrature grid, probs or computes them as necessary.\n\n Args:\n quadrature_grid_and_probs: Python pair of `float`-like `Tensor`s\n representing the sample points and the corresponding (possibly\n normalized) weight. When `None`, defaults to:\n `np.polynomial.hermite.hermgauss(deg=8)`.\n dtype: The expected `dtype` of `grid` and `probs`.\n validate_args: Python `bool`, default `False`. When `True` distribution\n parameters are checked for validity despite possibly degrading runtime\n performance. When `False` invalid inputs may silently render incorrect\n outputs.\n name: Python `str` name prefixed to Ops created by this class.\n\n Returns:\n quadrature_grid_and_probs: Python pair of `float`-like `Tensor`s\n representing the sample points and the corresponding (possibly\n normalized) weight.\n\n Raises:\n ValueError: if `quadrature_grid_and_probs is not None` and\n `len(quadrature_grid_and_probs[0]) != len(quadrature_grid_and_probs[1])`"} {"_id": "q_745", "text": "Returns parent frame arguments.\n\n When called inside a function, returns a dictionary with the caller's function\n arguments. These are positional arguments and keyword arguments (**kwargs),\n while variable arguments (*varargs) are excluded.\n\n When called at global scope, this will return an empty dictionary, since there\n are no arguments.\n\n WARNING: If caller function argument names are overloaded before invoking\n this method, then values will reflect the overloaded value. 
For this reason,\n we recommend calling `parent_frame_arguments` at the beginning of the\n function."} {"_id": "q_746", "text": "Transform a 0-D or 1-D `Tensor` to be 1-D.\n\n For user convenience, many parts of the TensorFlow Probability API accept\n inputs of rank 0 or 1 -- i.e., allowing an `event_shape` of `[5]` to be passed\n to the API as either `5` or `[5]`. This function can be used to transform\n such an argument to always be 1-D.\n\n NOTE: Python or NumPy values will be converted to `Tensor`s with standard type\n inference/conversion. In particular, an empty list or tuple will become an\n empty `Tensor` with dtype `float32`. Callers should convert values to\n `Tensor`s before calling this function if different behavior is desired\n (e.g. converting empty lists / other values to `Tensor`s with dtype `int32`).\n\n Args:\n x: A 0-D or 1-D `Tensor`.\n tensor_name: Python `str` name for `Tensor`s created by this function.\n op_name: Python `str` name for `Op`s created by this function.\n validate_args: Python `bool`, default `False`. When `True`, arguments may be\n checked for validity at execution time, possibly degrading runtime\n performance. When `False`, invalid inputs may silently render incorrect\n outputs.\n Returns:\n vector: a 1-D `Tensor`."} {"_id": "q_747", "text": "Checks that `rightmost_transposed_ndims` is valid."} {"_id": "q_748", "text": "Checks that `perm` is valid."} {"_id": "q_749", "text": "Returns the concatenation of the dimension in `x` and `other`.\n\n *Note:* If either `x` or `other` is completely unknown, concatenation will\n discard information about the other shape. 
In future, we might support\n concatenation that preserves this information for use with slicing.\n\n For more details, see `help(tf.TensorShape.concatenate)`.\n\n Args:\n x: object representing a shape; convertible to `tf.TensorShape`.\n other: object representing a shape; convertible to `tf.TensorShape`.\n\n Returns:\n new_shape: an object like `x` whose elements are the concatenation of the\n dimensions in `x` and `other`."} {"_id": "q_750", "text": "Returns a list of dimension sizes, or `None` if `rank` is unknown.\n\n For more details, see `help(tf.TensorShape.dims)`.\n\n Args:\n x: object representing a shape; convertible to `tf.TensorShape`.\n\n Returns:\n shape_as_list: list of sizes or `None` values representing each\n dimensions size if known. A size is `tf.Dimension` if input is a\n `tf.TensorShape` and an `int` otherwise."} {"_id": "q_751", "text": "Returns a shape combining the information in `x` and `other`.\n\n The dimensions in `x` and `other` are merged elementwise, according to the\n rules defined for `tf.Dimension.merge_with()`.\n\n For more details, see `help(tf.TensorShape.merge_with)`.\n\n Args:\n x: object representing a shape; convertible to `tf.TensorShape`.\n other: object representing a shape; convertible to `tf.TensorShape`.\n\n Returns:\n merged_shape: shape having `type(x)` containing the combined information of\n `x` and `other`.\n\n Raises:\n ValueError: If `x` and `other` are not compatible."} {"_id": "q_752", "text": "Returns a shape based on `x` with at least the given `rank`.\n\n For more details, see `help(tf.TensorShape.with_rank_at_least)`.\n\n Args:\n x: object representing a shape; convertible to `tf.TensorShape`.\n rank: An `int` representing the minimum rank of `x` or else an assertion is\n raised.\n\n Returns:\n shape: a shape having `type(x)` but guaranteed to have at least the given\n rank (or else an assertion was raised).\n\n Raises:\n ValueError: If `x` does not represent a shape with at least the given\n `rank`."} 
{"_id": "q_753", "text": "Check that source and target shape match, statically if possible."} {"_id": "q_754", "text": "Build a callable that performs one step for backward smoothing.\n\n Args:\n get_transition_matrix_for_timestep: callable taking a timestep\n as an integer `Tensor` argument, and returning a `LinearOperator`\n of shape `[latent_size, latent_size]`.\n\n Returns:\n backward_pass_step: a callable that updates a BackwardPassState\n from timestep `t` to `t-1`."} {"_id": "q_755", "text": "Build a callable that performs one step of Kalman filtering.\n\n Args:\n get_transition_matrix_for_timestep: callable taking a timestep\n as an integer `Tensor` argument, and returning a `LinearOperator`\n of shape `[latent_size, latent_size]`.\n get_transition_noise_for_timestep: callable taking a timestep as\n an integer `Tensor` argument, and returning a\n `MultivariateNormalLinearOperator` of event shape\n `[latent_size]`.\n get_observation_matrix_for_timestep: callable taking a timestep\n as an integer `Tensor` argument, and returning a `LinearOperator`\n of shape `[observation_size, latent_size]`.\n get_observation_noise_for_timestep: callable taking a timestep as\n an integer `Tensor` argument, and returning a\n `MultivariateNormalLinearOperator` of event shape\n `[observation_size]`.\n\n Returns:\n kalman_filter_step: a callable that updates a KalmanFilterState\n from timestep `t-1` to `t`."} {"_id": "q_756", "text": "Conjugate update for a linear Gaussian model.\n\n Given a normal prior on a latent variable `z`,\n `p(z) = N(prior_mean, prior_cov) = N(u, P)`,\n for which we observe a linear Gaussian transformation `x`,\n `p(x|z) = N(H * z + c, R)`,\n the posterior is also normal:\n `p(z|x) = N(u*, P*)`.\n\n We can write this update as\n x_expected = H * u + c # pushforward prior mean\n S = R + H * P * H' # pushforward prior cov\n K = P * H' * S^{-1} # optimal Kalman gain\n u* = u + K * (x_observed - x_expected) # posterior mean\n P* = (I - K * H) * P * (I - K 
* H)' + K * R * K' # posterior cov\n (see, e.g., https://en.wikipedia.org/wiki/Kalman_filter#Update)\n\n Args:\n prior_mean: `Tensor` with event shape `[latent_size, 1]` and\n potential batch shape `B = [b1, ..., b_n]`.\n prior_cov: `Tensor` with event shape `[latent_size, latent_size]`\n and batch shape `B` (matching `prior_mean`).\n observation_matrix: `LinearOperator` with shape\n `[observation_size, latent_size]` and batch shape broadcastable\n to `B`.\n observation_noise: potentially-batched\n `MultivariateNormalLinearOperator` instance with event shape\n `[observation_size]` and batch shape broadcastable to `B`.\n x_observed: potentially batched `Tensor` with event shape\n `[observation_size, 1]` and batch shape `B`.\n\n Returns:\n posterior_mean: `Tensor` with event shape `[latent_size, 1]` and\n batch shape `B`.\n posterior_cov: `Tensor` with event shape `[latent_size,\n latent_size]` and batch shape `B`.\n predictive_dist: the prior predictive distribution `p(x|z)`,\n as a `Distribution` instance with event\n shape `[observation_size]` and batch shape `B`. 
This will\n typically be `tfd.MultivariateNormalTriL`, but when\n `observation_size=1` we return a `tfd.Independent(tfd.Normal)`\n instance as an optimization."} {"_id": "q_757", "text": "Propagate a filtered distribution through a transition model."} {"_id": "q_758", "text": "Build a callable that performs one step of Kalman mean recursion.\n\n Args:\n get_transition_matrix_for_timestep: callable taking a timestep\n as an integer `Tensor` argument, and returning a `LinearOperator`\n of shape `[latent_size, latent_size]`.\n get_transition_noise_for_timestep: callable taking a timestep as\n an integer `Tensor` argument, and returning a\n `MultivariateNormalLinearOperator` of event shape\n `[latent_size]`.\n get_observation_matrix_for_timestep: callable taking a timestep\n as an integer `Tensor` argument, and returning a `LinearOperator`\n of shape `[observation_size, latent_size]`.\n get_observation_noise_for_timestep: callable taking a timestep as\n an integer `Tensor` argument, and returning a\n `MultivariateNormalLinearOperator` of event shape\n `[observation_size]`.\n\n Returns:\n kalman_mean_step: a callable that computes latent state and\n observation means at time `t`, given latent mean at time `t-1`."} {"_id": "q_759", "text": "Propagate a mean through linear Gaussian transformation."} {"_id": "q_760", "text": "Propagate covariance through linear Gaussian transformation."} {"_id": "q_761", "text": "Run the backward pass in Kalman smoother.\n\n The backward smoothing uses the Rauch, Tung and Striebel smoother,\n as discussed in section 18.3.2 of Kevin P. Murphy, 2012, Machine Learning:\n A Probabilistic Perspective, The MIT Press. 
The inputs are returned by\n the `forward_filter` function.\n\n Args:\n filtered_means: Means of the per-timestep filtered marginal\n distributions p(z_t | x_{:t}), as a Tensor of shape\n `sample_shape(x) + batch_shape + [num_timesteps, latent_size]`.\n filtered_covs: Covariances of the per-timestep filtered marginal\n distributions p(z_t | x_{:t}), as a Tensor of shape\n `batch_shape + [num_timesteps, latent_size, latent_size]`.\n predicted_means: Means of the per-timestep predictive\n distributions over latent states, p(z_{t+1} | x_{:t}), as a\n Tensor of shape `sample_shape(x) + batch_shape +\n [num_timesteps, latent_size]`.\n predicted_covs: Covariances of the per-timestep predictive\n distributions over latent states, p(z_{t+1} | x_{:t}), as a\n Tensor of shape `batch_shape + [num_timesteps, latent_size,\n latent_size]`.\n\n Returns:\n posterior_means: Means of the smoothed marginal distributions\n p(z_t | x_{1:T}), as a Tensor of shape\n `sample_shape(x) + batch_shape + [num_timesteps, latent_size]`,\n which is of the same shape as filtered_means.\n posterior_covs: Covariances of the smoothed marginal distributions\n p(z_t | x_{1:T}), as a Tensor of shape\n `batch_shape + [num_timesteps, latent_size, latent_size]`,\n which is of the same shape as filtered_covs."} {"_id": "q_762", "text": "Draw a joint sample from the prior over latents and observations."} {"_id": "q_763", "text": "Run a Kalman smoother to return posterior mean and cov.\n\n Note that the returned values `smoothed_means` depend on the observed\n time series `x`, while the `smoothed_covs` are independent\n of the observed series; i.e., they depend only on the model itself.\n This means that the mean values have shape `concat([sample_shape(x),\n batch_shape, [num_timesteps, {latent/observation}_size]])`,\n while the covariances have shape `concat([batch_shape, [num_timesteps,\n {latent/observation}_size, {latent/observation}_size]])`, which\n does not depend on the sample shape.\n\n This function 
only performs smoothing. If the user wants the\n intermediate values, which are returned by the filtering pass `forward_filter`,\n one can get them by:\n ```\n (log_likelihoods,\n filtered_means, filtered_covs,\n predicted_means, predicted_covs,\n observation_means, observation_covs) = model.forward_filter(x)\n smoothed_means, smoothed_covs = model.backward_smoothing_pass(x)\n ```\n where `x` is an observation sequence.\n\n Args:\n x: a float-type `Tensor` with rightmost dimensions\n `[num_timesteps, observation_size]` matching\n `self.event_shape`. Additional dimensions must match or be\n broadcastable to `self.batch_shape`; any further dimensions\n are interpreted as a sample shape.\n mask: optional bool-type `Tensor` with rightmost dimension\n `[num_timesteps]`; `True` values specify that the value of `x`\n at that timestep is masked, i.e., not conditioned on. Additional\n dimensions must match or be broadcastable to `self.batch_shape`; any\n further dimensions must match or be broadcastable to the sample\n shape of `x`.\n Default value: `None`.\n\n Returns:\n smoothed_means: Means of the per-timestep smoothed\n distributions over latent states, p(x_{t} | x_{:T}), as a\n Tensor of shape `sample_shape(x) + batch_shape +\n [num_timesteps, observation_size]`.\n smoothed_covs: Covariances of the per-timestep smoothed\n distributions over latent states, p(x_{t} | x_{:T}), as a\n Tensor of shape `sample_shape(mask) + batch_shape + [num_timesteps,\n observation_size, observation_size]`. 
Note that the covariances depend\n only on the model and the mask, not on the data, so this may have fewer\n dimensions than `filtered_means`."} {"_id": "q_764", "text": "Compute prior means for all variables via dynamic programming.\n\n Returns:\n latent_means: Prior means of latent states `z_t`, as a `Tensor`\n of shape `batch_shape + [num_timesteps, latent_size]`\n observation_means: Prior means of observations\n `x_t`, as a `Tensor` of shape `batch_shape + [num_timesteps,\n observation_size]`"} {"_id": "q_765", "text": "Compute prior covariances for all variables via dynamic programming.\n\n Returns:\n latent_covs: Prior covariance matrices of latent states `z_t`, as\n a `Tensor` of shape `batch_shape + [num_timesteps,\n latent_size, latent_size]`\n observation_covs: Prior covariance matrices of observations\n `x_t`, as a `Tensor` of shape `batch_shape + [num_timesteps,\n observation_size, observation_size]`"} {"_id": "q_766", "text": "Create a deep copy of fn.\n\n Args:\n fn: a callable\n\n Returns:\n A `FunctionType`: a deep copy of fn.\n\n Raises:\n TypeError: if `fn` is not a callable."} {"_id": "q_767", "text": "Removes `dict` keys which have `self` as value."} {"_id": "q_768", "text": "Recursively replace `dict`s with `_PrettyDict`."} {"_id": "q_769", "text": "Helper to `maybe_call_fn_and_grads`."} {"_id": "q_770", "text": "Calls `fn` and computes the gradient of the result wrt `args_list`."} {"_id": "q_771", "text": "Construct a for loop, preferring a Python loop if `n` is statically known.\n\n Given `loop_num_iter` and `body_fn`, return an op corresponding to executing\n `body_fn` `loop_num_iter` times, feeding previous outputs of `body_fn` into\n the next iteration.\n\n If `loop_num_iter` is statically known, the op is constructed via a Python for\n loop, and otherwise a `tf.while_loop` is used.\n\n Args:\n loop_num_iter: `Integer` `Tensor` representing the number of loop\n iterations.\n body_fn: Callable to be executed `loop_num_iter` 
times.\n initial_loop_vars: Listlike object of `Tensors` to be passed in to\n `body_fn`'s first execution.\n parallel_iterations: The number of iterations allowed to run in parallel.\n It must be a positive integer. See `tf.while_loop` for more details.\n Default value: `10`.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., \"smart_for_loop\").\n Returns:\n result: `Tensor` representing applying `body_fn` iteratively `n` times."} {"_id": "q_772", "text": "A simplified version of `tf.scan` that has configurable tracing.\n\n This function repeatedly calls `loop_fn(state, elem)`, where `state` is the\n `initial_state` during the first iteration, and the return value of `loop_fn`\n for every iteration thereafter. `elem` is a slice of `elements` along the\n first dimension, accessed in order. Additionally, it calls `trace_fn` on the\n return value of `loop_fn`. The `Tensor`s in return values of `trace_fn` are\n stacked and returned from this function, such that the first dimension of\n those `Tensor`s matches the size of `elems`.\n\n Args:\n loop_fn: A callable that takes in a `Tensor` or a nested collection of\n `Tensor`s with the same structure as `initial_state`, a slice of `elems`\n and returns the same structure as `initial_state`.\n initial_state: A `Tensor` or a nested collection of `Tensor`s passed to\n `loop_fn` in the first iteration.\n elems: A `Tensor` that is split along the first dimension and each element\n of which is passed to `loop_fn`.\n trace_fn: A callable that takes in the return value of `loop_fn` and returns\n a `Tensor` or a nested collection of `Tensor`s.\n parallel_iterations: Passed to the internal `tf.while_loop`.\n name: Name scope used in this function. 
Default: 'trace_scan'.\n\n Returns:\n final_state: The final return value of `loop_fn`.\n trace: The same structure as the return value of `trace_fn`, but with each\n `Tensor` being a stack of the corresponding `Tensors` in the return value\n of `trace_fn` for each slice of `elems`."} {"_id": "q_773", "text": "Wraps a setter so it applies to the inner-most results in `kernel_results`.\n\n The wrapped setter unwraps `kernel_results` and applies `setter` to the first\n results without an `inner_results` attribute.\n\n Args:\n setter: A callable that takes the kernel results as well as some `*args` and\n `**kwargs` and returns a modified copy of those kernel results.\n\n Returns:\n new_setter: A wrapped `setter`."} {"_id": "q_774", "text": "Enables the `store_parameters_in_results` parameter in a chain of kernels.\n\n This is a temporary utility for use during the transition period of the\n parameter storage methods.\n\n Args:\n kernel: A TransitionKernel.\n\n Returns:\n kernel: The same kernel, but recreated with `store_parameters_in_results`\n recursively set to `True` in its parameters and its inner kernels (as\n appropriate)."} {"_id": "q_775", "text": "Check that a shape Tensor is int-type and otherwise sane."} {"_id": "q_776", "text": "Performs the line search step of the BFGS search procedure.\n\n Uses hager_zhang line search procedure to compute a suitable step size\n to advance the current `state.position` along the given `search_direction`.\n Also, if the line search is successful, updates the `state.position` by\n taking the corresponding step.\n\n Args:\n state: A namedtuple instance holding values for the current state of the\n search procedure. 
The state must include the fields: `position`,\n `objective_value`, `objective_gradient`, `num_iterations`,\n `num_objective_evaluations`, `converged` and `failed`.\n value_and_gradients_function: A Python callable that accepts a point as a\n real `Tensor` of shape `[..., n]` and returns a tuple of two tensors of\n the same dtype: the objective function value, a real `Tensor` of shape\n `[...]`, and its derivative, another real `Tensor` of shape `[..., n]`.\n search_direction: A real `Tensor` of shape `[..., n]`. The direction along\n which to perform line search.\n grad_tolerance: Scalar `Tensor` of real dtype. Specifies the gradient\n tolerance for the procedure.\n f_relative_tolerance: Scalar `Tensor` of real dtype. Specifies the\n tolerance for the relative change in the objective value.\n x_tolerance: Scalar `Tensor` of real dtype. Specifies the tolerance for the\n change in the position.\n stopping_condition: A Python function that takes as input two Boolean\n tensors of shape `[...]`, and returns a Boolean scalar tensor. 
The input\n tensors are `converged` and `failed`, indicating the current status of\n each respective batch member; the return value states whether the\n algorithm should stop.\n Returns:\n A copy of the input state with the following fields updated:\n converged: a Boolean `Tensor` of shape `[...]` indicating whether the\n convergence criteria have been met.\n failed: a Boolean `Tensor` of shape `[...]` indicating whether the line\n search procedure failed to converge, or if either the updated gradient\n or objective function is no longer finite.\n num_iterations: Increased by 1.\n num_objective_evaluations: Increased by the number of times that the\n objective function got evaluated.\n position, objective_value, objective_gradient: If line search succeeded,\n updated by computing the new position and evaluating the objective\n function at that position."} {"_id": "q_777", "text": "Updates the state advancing its position by a given position_delta."} {"_id": "q_778", "text": "Checks if the algorithm satisfies the convergence criteria."} {"_id": "q_779", "text": "Broadcast a value to match the batching dimensions of a target.\n\n If necessary the value is converted into a tensor. Both value and target\n should be of the same dtype.\n\n Args:\n value: A value to broadcast.\n target: A `Tensor` of shape [b1, ..., bn, d].\n\n Returns:\n A `Tensor` of shape [b1, ..., bn] and same dtype as the target."} {"_id": "q_780", "text": "field_name from kernel_results or kernel_results.accepted_results."} {"_id": "q_781", "text": "Makes a function which applies a list of Bijectors' `inverse`s."} {"_id": "q_782", "text": "Like tf.where but works on namedtuples."} {"_id": "q_783", "text": "Performs the secant square procedure of Hager Zhang.\n\n Given an interval that brackets a root, this procedure performs an update of\n both end points using two intermediate points generated using the secant\n interpolation. 
For details see the steps S1-S4 in [Hager and Zhang (2006)][2].\n\n The interval [a, b] must satisfy the opposite slope conditions described in\n the documentation for `update`.\n\n Args:\n value_and_gradients_function: A Python callable that accepts a real scalar\n tensor and returns an object that can be converted to a namedtuple.\n The namedtuple should have fields 'f' and 'df' that correspond to scalar\n tensors of real dtype containing the value of the function and its\n derivative at that point. The other namedtuple fields, if present,\n should be tensors or sequences (possibly nested) of tensors.\n In the usual optimization application, this function would be generated by\n projecting the multivariate objective function along some specific\n direction. The direction is determined by some other procedure but should\n be a descent direction (i.e. the derivative of the projected univariate\n function must be negative at 0.).\n Alternatively, the function may represent the batching of `n` such line\n functions (e.g. projecting a single multivariate objective function along\n `n` distinct directions at once) accepting n points as input, i.e. a\n tensor of shape [n], and the fields 'f' and 'df' in the returned\n namedtuple should each be a tensor of shape [n], with the corresponding\n function values and derivatives at the input points.\n val_0: A namedtuple, as returned by value_and_gradients_function evaluated\n at `0.`.\n search_interval: A namedtuple describing the current search interval,\n must include the fields:\n - converged: Boolean `Tensor` of shape [n], indicating batch members\n where search has already converged. Interval for these batch members\n won't be modified.\n - failed: Boolean `Tensor` of shape [n], indicating batch members\n where search has already failed. Interval for these batch members\n won't be modified.\n - iterations: Scalar int32 `Tensor`. Number of line search iterations\n so far.\n - func_evals: Scalar int32 `Tensor`. 
Number of function evaluations\n so far.\n - left: A namedtuple, as returned by value_and_gradients_function,\n of the left end point of the current search interval.\n - right: A namedtuple, as returned by value_and_gradients_function,\n of the right end point of the current search interval.\n f_lim: Scalar `Tensor` of real dtype. The function value threshold for\n the approximate Wolfe conditions to be checked.\n sufficient_decrease_param: Positive scalar `Tensor` of real dtype.\n Bounded above by the curvature param. Corresponds to 'delta' in the\n terminology of [Hager and Zhang (2006)][2].\n curvature_param: Positive scalar `Tensor` of real dtype. Bounded above\n by `1.`. Corresponds to 'sigma' in the terminology of\n [Hager and Zhang (2006)][2].\n name: (Optional) Python str. The name prefixed to the ops created by this\n function. If not supplied, the default name 'secant2' is used.\n\n Returns:\n A namedtuple containing the following fields.\n active: A boolean `Tensor` of shape [n]. Used internally by the procedure\n to indicate batch members on which there is work left to do.\n converged: A boolean `Tensor` of shape [n]. Indicates whether a point\n satisfying the Wolfe conditions has been found. If this is True, the\n interval will be degenerate (i.e. `left` and `right` below will be\n identical).\n failed: A boolean `Tensor` of shape [n]. Indicates if invalid function or\n gradient values were encountered (i.e. infinity or NaNs).\n num_evals: A scalar int32 `Tensor`. 
The total number of function\n evaluations made.\n left: Return value of value_and_gradients_function at the updated left\n end point of the interval.\n right: Return value of value_and_gradients_function at the updated right\n end point of the interval."} {"_id": "q_784", "text": "Helper function for secant-square step."} {"_id": "q_785", "text": "Brackets the minimum given an initial starting point.\n\n Applies the Hager Zhang bracketing algorithm to find an interval containing\n a region with points satisfying Wolfe conditions. Uses the supplied initial\n step size 'c', the right end point of the provided search interval, to find\n such an interval. The only condition on 'c' is that it should be positive.\n For more details see steps B0-B3 in [Hager and Zhang (2006)][2].\n\n Args:\n value_and_gradients_function: A Python callable that accepts a real scalar\n tensor and returns a namedtuple containing the value field `f` of the\n function and its derivative value field `df` at that point.\n Alternatively, the function may represent the batching of `n` such line\n functions (e.g. projecting a single multivariate objective function along\n `n` distinct directions at once) accepting n points as input, i.e. a\n tensor of shape [n], and return a tuple of two tensors of shape [n], the\n function values and the corresponding derivatives at the input points.\n search_interval: A namedtuple describing the current search interval,\n must include the fields:\n - converged: Boolean `Tensor` of shape [n], indicating batch members\n where search has already converged. Interval for these batch members\n won't be modified.\n - failed: Boolean `Tensor` of shape [n], indicating batch members\n where search has already failed. Interval for these batch members\n won't be modified.\n - iterations: Scalar int32 `Tensor`. Number of line search iterations\n so far.\n - func_evals: Scalar int32 `Tensor`. 
Number of function evaluations\n so far.\n - left: A namedtuple, as returned by value_and_gradients_function\n evaluated at 0, the left end point of the current interval.\n - right: A namedtuple, as returned by value_and_gradients_function,\n of the right end point of the current interval (labelled 'c' above).\n f_lim: real `Tensor` of shape [n]. The function value threshold for\n the approximate Wolfe conditions to be checked for each batch member.\n max_iterations: Int32 scalar `Tensor`. The maximum number of iterations\n permitted. The limit applies equally to all batch members.\n expansion_param: Scalar positive `Tensor` of real dtype. Must be greater\n than `1.`. Used to expand the initial interval in case it does not bracket\n a minimum.\n\n Returns:\n A namedtuple with the following fields.\n iteration: An int32 scalar `Tensor`. The number of iterations performed.\n Bounded above by `max_iterations` parameter.\n stopped: A boolean `Tensor` of shape [n]. True for those batch members\n where the algorithm terminated before reaching `max_iterations`.\n failed: A boolean `Tensor` of shape [n]. True for those batch members\n where an error was encountered during bracketing.\n num_evals: An int32 scalar `Tensor`. The number of times the objective\n function was evaluated.\n left: Return value of value_and_gradients_function at the updated left\n end point of the interval found.\n right: Return value of value_and_gradients_function at the updated right\n end point of the interval found."} {"_id": "q_786", "text": "Bisects an interval and updates to satisfy opposite slope conditions.\n\n Corresponds to step U3 in [Hager and Zhang (2006)][2].\n\n Args:\n value_and_gradients_function: A Python callable that accepts a real scalar\n tensor and returns a namedtuple containing the value field `f` of the\n function and its derivative value field `df` at that point.\n Alternatively, the function may represent the batching of `n` such line\n functions (e.g. 
projecting a single multivariate objective function along\n `n` distinct directions at once) accepting n points as input, i.e. a\n tensor of shape [n], and return a tuple of two tensors of shape [n], the\n function values and the corresponding derivatives at the input points.\n initial_left: Return value of value_and_gradients_function at the left end\n point of the current bracketing interval.\n initial_right: Return value of value_and_gradients_function at the right end\n point of the current bracketing interval.\n f_lim: real `Tensor` of shape [n]. The function value threshold for\n the approximate Wolfe conditions to be checked for each batch member.\n\n Returns:\n A namedtuple containing the following fields:\n iteration: An int32 scalar `Tensor`. The number of iterations performed.\n Bounded above by `max_iterations` parameter.\n stopped: A boolean scalar `Tensor`. True if the bisect algorithm\n terminated.\n failed: A scalar boolean tensor. Indicates whether the objective function\n failed to produce a finite value.\n num_evals: A scalar int32 tensor. The number of value and gradients\n function evaluations.\n left: Return value of value_and_gradients_function at the left end\n point of the bracketing interval found.\n right: Return value of value_and_gradients_function at the right end\n point of the bracketing interval found."} {"_id": "q_787", "text": "Actual implementation of bisect given initial_args in a _BracketResult."} {"_id": "q_788", "text": "Checks if the supplied values are finite.\n\n Args:\n val_1: A namedtuple instance with the function value and derivative,\n as returned e.g. by value_and_gradients_function evaluations.\n val_2: (Optional) A namedtuple instance with the function value and\n derivative, as returned e.g. 
by value_and_gradients_function evaluations.\n\n Returns:\n is_finite: Scalar boolean `Tensor` indicating whether the function value\n and the derivative in `val_1` (and optionally in `val_2`) are all finite."} {"_id": "q_789", "text": "Checks whether the Wolfe or approx Wolfe conditions are satisfied.\n\n The Wolfe conditions are a set of stopping criteria for an inexact line search\n algorithm. Let f(a) be the function value along the search direction and\n df(a) the derivative along the search direction evaluated a distance 'a'.\n Here 'a' is the distance along the search direction. The Wolfe conditions are:\n\n ```None\n f(a) <= f(0) + delta * a * df(0) (Armijo/Sufficient decrease condition)\n df(a) >= sigma * df(0) (Weak curvature condition)\n ```\n `delta` and `sigma` are two user supplied parameters satisfying:\n `0 < delta < sigma <= 1.`. In the following, delta is called\n `sufficient_decrease_param` and sigma is called `curvature_param`.\n\n On a finite precision machine, the Wolfe conditions are difficult to satisfy\n when one is close to the minimum. Hence, Hager-Zhang propose replacing\n the sufficient decrease condition with the following condition on the\n derivative in the vicinity of a minimum.\n\n ```None\n df(a) <= (2 * delta - 1) * df(0) (Approx Wolfe sufficient decrease)\n ```\n This condition is only used if one is near the minimum. This is tested using\n\n ```None\n f(a) <= f(0) + epsilon * |f(0)|\n ```\n The following function checks both the Wolfe and approx Wolfe conditions.\n Here, `epsilon` is a small positive constant. In the following, the argument\n `f_lim` corresponds to the product: epsilon * |f(0)|.\n\n Args:\n val_0: A namedtuple, as returned by value_and_gradients_function\n evaluated at 0.\n val_c: A namedtuple, as returned by value_and_gradients_function\n evaluated at the point to be tested.\n f_lim: Scalar `Tensor` of real dtype. 
The function value threshold for\n the approximate Wolfe conditions to be checked.\n sufficient_decrease_param: Positive scalar `Tensor` of real dtype.\n Bounded above by the curvature param. Corresponds to 'delta' in the\n terminology of [Hager and Zhang (2006)][2].\n curvature_param: Positive scalar `Tensor` of real dtype. Bounded above\n by `1.`. Corresponds to 'sigma' in the terminology of\n [Hager and Zhang (2005)][1].\n\n Returns:\n is_satisfied: A scalar boolean `Tensor` which is True if either the\n Wolfe or approximate Wolfe conditions are satisfied."} {"_id": "q_790", "text": "Returns the secant interpolation for the minimum.\n\n The secant method is a technique for finding roots of nonlinear functions.\n When finding the minimum, one applies the secant method to the derivative\n of the function.\n For an arbitrary function and a bounding interval, the secant approximation\n can produce the next point which is outside the bounding interval. However,\n with the assumption of the opposite slope condition on the interval [a,b], the new\n point c is always bracketed by [a,b]. Note that by assumption,\n f'(a) < 0 and f'(b) > 0.\n Hence c is a weighted average of a and b and thus always in [a, b].\n\n Args:\n val_a: A namedtuple with the left end point, function value and derivative,\n of the current interval (i.e. a).\n val_b: A namedtuple with the right end point, function value and derivative,\n of the current interval (i.e. b).\n\n Returns:\n approx_minimum: A scalar real `Tensor`. An approximation to the point\n at which the derivative vanishes."} {"_id": "q_791", "text": "Create a function implementing a step-size update policy.\n\n The simple policy increases or decreases the `step_size_var` based on the\n average of `exp(minimum(0., log_accept_ratio))`. 
It is based on\n [Section 4.2 of Andrieu and Thoms (2008)](\n https://people.eecs.berkeley.edu/~jordan/sail/readings/andrieu-thoms.pdf).\n\n The `num_adaptation_steps` argument is set independently of any burnin\n for the overall chain. In general, adaptation prevents the chain from\n reaching a stationary distribution, so obtaining consistent samples requires\n `num_adaptation_steps` be set to a value [somewhat smaller](\n http://andrewgelman.com/2017/12/15/burn-vs-warm-iterative-simulation-algorithms/#comment-627745)\n than the number of burnin steps. However, it may sometimes be helpful to set\n `num_adaptation_steps` to a larger value during development in order to\n inspect the behavior of the chain during adaptation.\n\n Args:\n num_adaptation_steps: Scalar `int` `Tensor` number of initial steps\n during which to adjust the step size. This may be greater, less than, or\n equal to the number of burnin steps. If `None`, the step size is adapted\n on every step (note this breaks stationarity of the chain!).\n target_rate: Scalar `Tensor` representing desired `accept_ratio`.\n Default value: `0.75` (i.e., [center of asymptotically optimal\n rate](https://arxiv.org/abs/1411.6669)).\n decrement_multiplier: `Tensor` representing amount to downscale current\n `step_size`.\n Default value: `0.01`.\n increment_multiplier: `Tensor` representing amount to upscale current\n `step_size`.\n Default value: `0.01`.\n step_counter: Scalar `int` `Variable` specifying the current step. 
The step\n size is adapted iff `step_counter < num_adaptation_steps`.\n Default value: if `None`, an internal variable\n `step_size_adaptation_step_counter` is created and initialized to `-1`.\n\n Returns:\n step_size_simple_update_fn: Callable that takes args\n `step_size_var, kernel_results` and returns updated step size(s)."} {"_id": "q_792", "text": "Creates initial `previous_kernel_results` using a supplied `state`."} {"_id": "q_793", "text": "Constructs a ResNet18 model.\n\n Args:\n input_shape: A `tuple` indicating the Tensor shape.\n num_classes: `int` representing the number of class labels.\n kernel_posterior_scale_mean: Python `int` number for the kernel\n posterior's scale (log variance) mean. The smaller the mean, the closer\n the initialization is to a deterministic network.\n kernel_posterior_scale_stddev: Python `float` number for the initial kernel\n posterior's scale stddev.\n ```\n q(W|x) ~ N(mu, var),\n log_var ~ N(kernel_posterior_scale_mean, kernel_posterior_scale_stddev)\n ```\n kernel_posterior_scale_constraint: Python `float` number for the log value\n to constrain the log variance throughout training.\n i.e. 
log_var <= log(kernel_posterior_scale_constraint).\n\n Returns:\n tf.keras.Model."} {"_id": "q_794", "text": "Create the encoder function.\n\n Args:\n activation: Activation function to use.\n num_topics: The number of topics.\n layer_sizes: The number of hidden units per layer in the encoder.\n\n Returns:\n encoder: A `callable` mapping a bag-of-words `Tensor` to a\n `tfd.Distribution` instance over topics."} {"_id": "q_795", "text": "Create the decoder function.\n\n Args:\n num_topics: The number of topics.\n num_words: The number of words.\n\n Returns:\n decoder: A `callable` mapping a `Tensor` of encodings to a\n `tfd.Distribution` instance over words."} {"_id": "q_796", "text": "Implements Markov chain Monte Carlo via repeated `TransitionKernel` steps.\n\n This function samples from a Markov chain starting at `current_state`, whose\n stationary distribution is governed by the supplied `TransitionKernel`\n instance (`kernel`).\n\n This function can sample from multiple chains, in parallel. (Whether or not\n there are multiple chains is dictated by the `kernel`.)\n\n The `current_state` can be represented as a single `Tensor` or a `list` of\n `Tensors` which collectively represent the current state.\n\n Since MCMC states are correlated, it is sometimes desirable to produce\n additional intermediate states, and then discard them, ending up with a set of\n states with decreased autocorrelation. See [Owen (2017)][1]. Such \"thinning\"\n is made possible by setting `num_steps_between_results > 0`. The chain then\n takes `num_steps_between_results` extra steps between the steps that make it\n into the results. 
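The thinning bookkeeping described above (keep one state out of every `num_steps_between_results + 1` chain steps) can be sketched in plain Python. Note this is a hypothetical illustration, not the TFP API: `correlated_chain` is a toy AR(1) stand-in for a kernel-driven chain, and it thins after the fact, whereas the real implementation never materializes the dropped states.

```python
import random

def correlated_chain(num_steps, rho=0.9, seed=0):
    """Toy AR(1) chain: successive states are strongly correlated."""
    rng = random.Random(seed)
    state, states = 0.0, []
    for _ in range(num_steps):
        state = rho * state + rng.gauss(0.0, 1.0)
        states.append(state)
    return states

def thin(states, num_steps_between_results):
    """Keep one state out of every `num_steps_between_results + 1`."""
    stride = num_steps_between_results + 1
    return states[stride - 1::stride]

raw = correlated_chain(1000)
thinned = thin(raw, num_steps_between_results=4)
# 1000 raw steps yield 200 retained states; the lag-1 correlation between
# retained states drops from roughly rho to roughly rho**5.
```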
The extra steps are never materialized (in calls to\n `sess.run`), and thus do not increase memory requirements.\n\n Warning: when setting a `seed` in the `kernel`, ensure that `sample_chain`'s\n `parallel_iterations=1`, otherwise results will not be reproducible.\n\n In addition to returning the chain state, this function supports tracing of\n auxiliary variables used by the kernel. The traced values are selected by\n specifying `trace_fn`. By default, all kernel results are traced but in the\n future the default will be changed to no results being traced, so plan\n accordingly. See below for some examples of this feature.\n\n Args:\n num_results: Integer number of Markov chain draws.\n current_state: `Tensor` or Python `list` of `Tensor`s representing the\n current state(s) of the Markov chain(s).\n previous_kernel_results: A `Tensor` or a nested collection of `Tensor`s\n representing internal calculations made within the previous call to this\n function (or as returned by `bootstrap_results`).\n kernel: An instance of `tfp.mcmc.TransitionKernel` which implements one step\n of the Markov chain.\n num_burnin_steps: Integer number of chain steps to take before starting to\n collect results.\n Default value: 0 (i.e., no burn-in).\n num_steps_between_results: Integer number of chain steps between collecting\n a result. Only one out of every `num_steps_between_results + 1` steps is\n included in the returned results. The number of returned chain states is\n still equal to `num_results`. Default value: 0 (i.e., no thinning).\n trace_fn: A callable that takes in the current chain state and the previous\n kernel results and returns a `Tensor` or a nested collection of `Tensor`s\n that is then traced along with the chain state.\n return_final_kernel_results: If `True`, then the final kernel results are\n returned alongside the chain state and the trace specified by the\n `trace_fn`.\n parallel_iterations: The number of iterations allowed to run in parallel. 
It\n must be a positive integer. See `tf.while_loop` for more details.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: `None` (i.e., \"mcmc_sample_chain\").\n\n Returns:\n checkpointable_states_and_trace: if `return_final_kernel_results` is\n `True`. The return value is an instance of\n `CheckpointableStatesAndTrace`.\n all_states: if `return_final_kernel_results` is `False` and `trace_fn` is\n `None`. The return value is a `Tensor` or Python list of `Tensor`s\n representing the state(s) of the Markov chain(s) at each result step. Has\n same shape as input `current_state` but with a prepended\n `num_results`-size dimension.\n states_and_trace: if `return_final_kernel_results` is `False` and\n `trace_fn` is not `None`. The return value is an instance of\n `StatesAndTrace`.\n\n #### Examples\n\n ##### Sample from a diagonal-variance Gaussian.\n\n I.e.,\n\n ```none\n for i=1..n:\n x[i] ~ MultivariateNormal(loc=0, scale=diag(true_stddev)) # likelihood\n ```\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n dims = 10\n true_stddev = np.sqrt(np.linspace(1., 3., dims))\n likelihood = tfd.MultivariateNormalDiag(loc=0., scale_diag=true_stddev)\n\n states = tfp.mcmc.sample_chain(\n num_results=1000,\n num_burnin_steps=500,\n current_state=tf.zeros(dims),\n kernel=tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=likelihood.log_prob,\n step_size=0.5,\n num_leapfrog_steps=2),\n trace_fn=None)\n\n sample_mean = tf.reduce_mean(states, axis=0)\n # ==> approx all zeros\n\n sample_stddev = tf.sqrt(tf.reduce_mean(\n tf.squared_difference(states, sample_mean),\n axis=0))\n # ==> approx equal true_stddev\n ```\n\n ##### Sampling from factor-analysis posteriors with known factors.\n\n I.e.,\n\n ```none\n # prior\n w ~ MultivariateNormal(loc=0, scale=eye(d))\n for i=1..n:\n # likelihood\n x[i] ~ Normal(loc=w^T F[i], scale=1)\n ```\n\n where `F` denotes factors.\n\n ```python\n import 
tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n # Specify model.\n def make_prior(dims):\n return tfd.MultivariateNormalDiag(\n loc=tf.zeros(dims))\n\n def make_likelihood(weights, factors):\n return tfd.MultivariateNormalDiag(\n loc=tf.matmul(weights, factors, adjoint_b=True))\n\n def joint_log_prob(num_weights, factors, x, w):\n return (make_prior(num_weights).log_prob(w) +\n make_likelihood(w, factors).log_prob(x))\n\n def unnormalized_log_posterior(w):\n # Posterior is proportional to: `p(W, X=x | factors)`.\n return joint_log_prob(num_weights, factors, x, w)\n\n # Setup data.\n num_weights = 10 # == d\n num_factors = 40 # == n\n num_chains = 100\n\n weights = make_prior(num_weights).sample(1)\n factors = tf.random_normal([num_factors, num_weights])\n x = make_likelihood(weights, factors).sample()\n\n # Sample from Hamiltonian Monte Carlo Markov Chain.\n\n # Get `num_results` samples from `num_chains` independent chains.\n chains_states, kernels_results = tfp.mcmc.sample_chain(\n num_results=1000,\n num_burnin_steps=500,\n current_state=tf.zeros([num_chains, num_weights], name='init_weights'),\n kernel=tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=unnormalized_log_posterior,\n step_size=0.1,\n num_leapfrog_steps=2))\n\n # Compute sample stats.\n sample_mean = tf.reduce_mean(chains_states, axis=[0, 1])\n # ==> approx equal to weights\n\n sample_var = tf.reduce_mean(\n tf.squared_difference(chains_states, sample_mean),\n axis=[0, 1])\n # ==> less than 1\n ```\n\n ##### Custom tracing functions.\n\n ```python\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n likelihood = tfd.Normal(loc=0., scale=1.)\n\n def sample_chain(trace_fn):\n return tfp.mcmc.sample_chain(\n num_results=1000,\n num_burnin_steps=500,\n current_state=0.,\n kernel=tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=likelihood.log_prob,\n step_size=0.5,\n num_leapfrog_steps=2),\n trace_fn=trace_fn)\n\n def 
trace_log_accept_ratio(states, previous_kernel_results):\n return previous_kernel_results.log_accept_ratio\n\n def trace_everything(states, previous_kernel_results):\n return previous_kernel_results\n\n _, log_accept_ratio = sample_chain(trace_fn=trace_log_accept_ratio)\n _, kernel_results = sample_chain(trace_fn=trace_everything)\n\n acceptance_prob = tf.exp(tf.minimum(log_accept_ratio, 0.))\n # Equivalent to, but more efficient than:\n acceptance_prob = tf.exp(tf.minimum(kernel_results.log_accept_ratio, 0.))\n ```\n\n #### References\n\n [1]: Art B. Owen. Statistically efficient thinning of a Markov chain sampler.\n _Technical Report_, 2017.\n http://statweb.stanford.edu/~owen/reports/bestthinning.pdf"} {"_id": "q_797", "text": "A multi-layered topic model over a documents-by-terms matrix."} {"_id": "q_798", "text": "Learnable Gamma via concentration and scale parameterization."} {"_id": "q_799", "text": "Get the KL function registered for classes a and b."} {"_id": "q_800", "text": "Returns an image tensor."} {"_id": "q_801", "text": "Creates a character sprite from a set of attribute sprites."} {"_id": "q_802", "text": "Creates a random sequence."} {"_id": "q_803", "text": "Creates a tf.data pipeline for the sprites dataset.\n\n Args:\n characters: A list of (skin, hair, top, pants) tuples containing\n relative paths to the sprite png image for each attribute.\n actions: A list of Actions.\n directions: A list of Directions.\n channels: Number of image channels to yield.\n length: Desired length of the sequences.\n shuffle: Whether or not to shuffle the characters and sequences\n start frame.\n fake_data: Boolean for whether or not to yield synthetic data.\n\n Returns:\n A tf.data.Dataset yielding (seq, skin label index, hair label index,\n top label index, pants label index, action label index, skin label\n name, hair label name, top label name, pants label name, action\n label name) tuples."} {"_id": "q_804", "text": "Checks that `distributions` satisfies 
all assumptions."} {"_id": "q_805", "text": "Counts the number of occurrences of each value in an integer array `arr`.\n\n Works like `tf.math.bincount`, but provides an `axis` kwarg that specifies\n dimensions to reduce over. With\n `~axis = [i for i in range(arr.ndim) if i not in axis]`,\n this function returns a `Tensor` of shape `[K] + arr.shape[~axis]`.\n\n If `minlength` and `maxlength` are not given, `K = tf.reduce_max(arr) + 1`\n if `arr` is non-empty, and 0 otherwise.\n If `weights` are non-None, then index `i` of the output stores the sum of the\n value in `weights` at each index where the corresponding value in `arr` is\n `i`.\n\n Args:\n arr: An `int32` `Tensor` of non-negative values.\n weights: If non-None, must be the same shape as arr. For each value in\n `arr`, the bin will be incremented by the corresponding weight instead of\n 1.\n minlength: If given, ensures the output has length at least `minlength`,\n padding with zeros at the end if necessary.\n maxlength: If given, skips values in `arr` that are equal or greater than\n `maxlength`, ensuring that the output has length at most `maxlength`.\n axis: A `0-D` or `1-D` `int32` `Tensor` (with static values) designating\n dimensions in `arr` to reduce over.\n `Default value:` `None`, meaning reduce over all dimensions.\n dtype: If `weights` is None, determines the type of the output bins.\n name: A name scope for the associated operations (optional).\n\n Returns:\n A vector with the same dtype as `weights` or the given `dtype`. The bin\n values."} {"_id": "q_806", "text": "Bin values into discrete intervals.\n\n Given `edges = [c0, ..., cK]`, defining intervals\n `I0 = [c0, c1)`, `I1 = [c1, c2)`, ..., `I_{K-1} = [c_{K-1}, cK]`,\n This function returns `bins`, such that:\n `edges[bins[i]] <= x[i] < edges[bins[i] + 1]`.\n\n Args:\n x: Numeric `N-D` `Tensor` with `N > 0`.\n edges: `Tensor` of same `dtype` as `x`. The first dimension indexes edges\n of intervals. 
Must either be `1-D` or have\n `x.shape[1:] == edges.shape[1:]`. If `rank(edges) > 1`, `edges[k]`\n designates a shape `edges.shape[1:]` `Tensor` of bin edges for the\n corresponding dimensions of `x`.\n extend_lower_interval: Python `bool`. If `True`, extend the lowest\n interval `I0` to `(-inf, c1]`.\n extend_upper_interval: Python `bool`. If `True`, extend the upper\n interval `I_{K-1}` to `[c_{K-1}, +inf)`.\n dtype: The output type (`int32` or `int64`). `Default value:` `x.dtype`.\n This affects the output values when `x` is below/above the intervals,\n which will be `-1/K+1` for `int` types and `NaN` for `float`s.\n At indices where `x` is `NaN`, the output values will be `0` for `int`\n types and `NaN` for floats.\n name: A Python string name to prepend to created ops. Default: 'find_bins'\n\n Returns:\n bins: `Tensor` with same `shape` as `x` and `dtype`.\n Has whole number values. `bins[i] = k` means the `x[i]` falls into the\n `kth` bin, i.e., `edges[bins[i]] <= x[i] < edges[bins[i] + 1]`.\n\n Raises:\n ValueError: If `edges.shape[0]` is determined to be less than 2.\n\n #### Examples\n\n Cut a `1-D` array\n\n ```python\n x = [0., 5., 6., 10., 20.]\n edges = [0., 5., 10.]\n tfp.stats.find_bins(x, edges)\n ==> [0., 0., 1., 1., np.nan]\n ```\n\n Cut `x` into its deciles\n\n ```python\n x = tf.random_uniform(shape=(100, 200))\n decile_edges = tfp.stats.quantiles(x, num_quantiles=10)\n bins = tfp.stats.find_bins(x, edges=decile_edges)\n bins.shape\n ==> (100, 200)\n tf.reduce_mean(bins == 0.)\n ==> approximately 0.1\n tf.reduce_mean(bins == 1.)\n ==> approximately 0.1\n ```"} {"_id": "q_807", "text": "Count how often `x` falls in intervals defined by `edges`.\n\n Given `edges = [c0, ..., cK]`, defining intervals\n `I0 = [c0, c1)`, `I1 = [c1, c2)`, ..., `I_{K-1} = [c_{K-1}, cK]`,\n this function counts how often `x` falls into each interval.\n\n Values of `x` outside of the intervals cause errors. 
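The interval conventions above (half-open bins, with the upper edge belonging to the last bin) can be sketched with a small pure-Python counter. This hypothetical `histogram_counts` is an illustration only: it ignores `axis`, batching, and the extend options of the real op.

```python
def histogram_counts(x, edges):
    """Count how often each value of `x` falls in the intervals
    I0 = [c0, c1), ..., I_{K-1} = [c_{K-1}, cK] (last bin closed)."""
    counts = [0] * (len(edges) - 1)
    for value in x:
        if value < edges[0] or value > edges[-1]:
            raise ValueError('value outside of the intervals: %r' % (value,))
        if value == edges[-1]:      # upper edge belongs to the last bin
            counts[-1] += 1
            continue
        k = 0
        while value >= edges[k + 1]:
            k += 1
        counts[k] += 1
    return counts

histogram_counts([0., 0.4, 0.5, 1.9, 2.0], edges=[0., 0.5, 1.0, 1.5, 2.0])
# -> [2, 1, 0, 2]
```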
Consider using\n `extend_lower_interval`, `extend_upper_interval` to deal with this.\n\n Args:\n x: Numeric `N-D` `Tensor` with `N > 0`. If `axis` is not\n `None`, must have statically known number of dimensions. The\n `axis` kwarg determines which dimensions index iid samples.\n Other dimensions of `x` index \"events\" for which we will compute different\n histograms.\n edges: `Tensor` of same `dtype` as `x`. The first dimension indexes edges\n of intervals. Must either be `1-D` or have `edges.shape[1:]` the same\n as the dimensions of `x` excluding `axis`.\n If `rank(edges) > 1`, `edges[k]` designates a shape `edges.shape[1:]`\n `Tensor` of interval edges for the corresponding dimensions of `x`.\n axis: Optional `0-D` or `1-D` integer `Tensor` with constant\n values. The axes in `x` that index iid samples.\n `Default value:` `None` (treat every dimension as sample dimension).\n extend_lower_interval: Python `bool`. If `True`, extend the lowest\n interval `I0` to `(-inf, c1]`.\n extend_upper_interval: Python `bool`. If `True`, extend the upper\n interval `I_{K-1}` to `[c_{K-1}, +inf)`.\n dtype: The output type (`int32` or `int64`). 
`Default value:` `x.dtype`.\n name: A Python string name to prepend to created ops.\n `Default value:` 'histogram'\n\n Returns:\n counts: `Tensor` of type `dtype` and, with\n `~axis = [i for i in range(arr.ndim) if i not in axis]`,\n `counts.shape = [edges.shape[0]] + x.shape[~axis]`.\n With `I` a multi-index into `~axis`, `counts[k][I]` is the number of times\n event(s) fell into the `kth` interval of `edges`.\n\n #### Examples\n\n ```python\n # x.shape = [1000, 2]\n # x[:, 0] ~ Uniform(0, 1), x[:, 1] ~ Uniform(1, 2).\n x = tf.stack([tf.random_uniform([1000]), 1 + tf.random_uniform([1000])],\n axis=-1)\n\n # edges ==> bins [0, 0.5), [0.5, 1.0), [1.0, 1.5), [1.5, 2.0].\n edges = [0., 0.5, 1.0, 1.5, 2.0]\n\n tfp.stats.histogram(x, edges)\n ==> approximately [500, 500, 500, 500]\n\n tfp.stats.histogram(x, edges, axis=0)\n ==> approximately [[500, 500, 0, 0], [0, 0, 500, 500]]\n ```"} {"_id": "q_808", "text": "Get static number of dimensions and assert that some expectations are met.\n\n This function returns the number of dimensions 'ndims' of x, as a Python int.\n\n The optional expect arguments are used to check the ndims of x, but this is\n only done if the static ndims of x is not None.\n\n Args:\n x: A Tensor.\n expect_static: Expect `x` to have statically defined `ndims`.\n expect_ndims: Optional Python integer. If provided, assert that x has\n number of dimensions equal to this.\n expect_ndims_no_more_than: Optional Python integer. If provided, assert\n that x has no more than this many dimensions.\n expect_ndims_at_least: Optional Python integer. 
If provided, assert that x\n has at least this many dimensions.\n\n Returns:\n ndims: A Python integer.\n\n Raises:\n ValueError: If any of the expectations above are violated."} {"_id": "q_809", "text": "Insert the dims in `axis` back as singletons after being removed.\n\n Args:\n x: `Tensor`.\n axis: Python list of integers.\n\n Returns:\n `Tensor` with same values as `x`, but additional singleton dimensions."} {"_id": "q_810", "text": "Convert possibly negatively indexed axis to non-negative list of ints.\n\n Args:\n axis: Integer Tensor.\n ndims: Number of dimensions into which axis indexes.\n\n Returns:\n A list of non-negative Python integers.\n\n Raises:\n ValueError: If `axis` is not statically defined."} {"_id": "q_811", "text": "Use `top_k` to sort a `Tensor` along the last dimension."} {"_id": "q_812", "text": "Build an ordered list of Distribution instances for component models.\n\n Args:\n num_timesteps: Python `int` number of timesteps to model.\n param_vals: a list of `Tensor` parameter values in order corresponding to\n `self.parameters`, or a dict mapping from parameter names to values.\n initial_step: optional `int` specifying the initial timestep to model.\n This is relevant when the model contains time-varying components,\n e.g., holidays or seasonality.\n\n Returns:\n component_ssms: a Python list of `LinearGaussianStateSpaceModel`\n Distribution objects, in order corresponding to `self.components`."} {"_id": "q_813", "text": "The Amari-alpha Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n When `self_normalized = True`, the Amari-alpha Csiszar-function is:\n\n ```none\n f(u) = { -log(u) + (u - 1), alpha = 0\n { u log(u) - (u - 1), alpha = 1\n { [(u**alpha - 1) - alpha (u - 1)] / (alpha (alpha - 1)), otherwise\n ```\n\n When `self_normalized = False` the `(u - 1)` terms are omitted.\n\n Warning: when `alpha != 0` and/or `self_normalized = True` this function makes\n 
non-log-space calculations and may therefore be numerically unstable for\n `|logu| >> 0`.\n\n For more information, see:\n A. Cichocki and S. Amari. \"Families of Alpha- Beta- and Gamma-Divergences:\n Flexible and Robust Measures of Similarities.\" Entropy, vol. 12, no. 6, pp.\n 1532-1568, 2010.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n alpha: `float`-like Python scalar. (See Mathematical Details for meaning.)\n self_normalized: Python `bool` indicating whether `f'(u=1)=0`. When\n `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even\n when `p, q` are unnormalized measures.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n amari_alpha_of_u: `float`-like `Tensor` of the Csiszar-function evaluated\n at `u = exp(logu)`.\n\n Raises:\n TypeError: if `alpha` is `None` or a `Tensor`.\n TypeError: if `self_normalized` is `None` or a `Tensor`."} {"_id": "q_814", "text": "The reverse Kullback-Leibler Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n When `self_normalized = True`, the KL-reverse Csiszar-function is:\n\n ```none\n f(u) = -log(u) + (u - 1)\n ```\n\n When `self_normalized = False` the `(u - 1)` term is omitted.\n\n Observe that as an f-Divergence, this Csiszar-function implies:\n\n ```none\n D_f[p, q] = KL[q, p]\n ```\n\n The KL is \"reverse\" because in maximum likelihood we think of minimizing `q`\n as in `KL[p, q]`.\n\n Warning: when `self_normalized = True` this function makes non-log-space\n calculations and may therefore be numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n self_normalized: Python `bool` indicating whether `f'(u=1)=0`. 
When\n `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even\n when `p, q` are unnormalized measures.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n kl_reverse_of_u: `float`-like `Tensor` of the Csiszar-function evaluated at\n `u = exp(logu)`.\n\n Raises:\n TypeError: if `self_normalized` is `None` or a `Tensor`."} {"_id": "q_815", "text": "The Jensen-Shannon Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n When `self_normalized = True`, the Jensen-Shannon Csiszar-function is:\n\n ```none\n f(u) = u log(u) - (1 + u) log(1 + u) + (u + 1) log(2)\n ```\n\n When `self_normalized = False` the `(u + 1) log(2)` term is omitted.\n\n Observe that as an f-Divergence, this Csiszar-function implies:\n\n ```none\n D_f[p, q] = KL[p, m] + KL[q, m]\n m(x) = 0.5 p(x) + 0.5 q(x)\n ```\n\n In a sense, this divergence is the \"reverse\" of the Arithmetic-Geometric\n f-Divergence.\n\n This Csiszar-function induces a symmetric f-Divergence, i.e.,\n `D_f[p, q] = D_f[q, p]`.\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n For more information, see:\n Lin, J. \"Divergence measures based on the Shannon entropy.\" IEEE Trans.\n Inf. Th., 37, 145-151, 1991.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n self_normalized: Python `bool` indicating whether `f'(u=1)=0`. 
When\n `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even\n when `p, q` are unnormalized measures.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n jensen_shannon_of_u: `float`-like `Tensor` of the Csiszar-function\n evaluated at `u = exp(logu)`."} {"_id": "q_816", "text": "The Pearson Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Pearson Csiszar-function is:\n\n ```none\n f(u) = (u - 1)**2\n ```\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n pearson_of_u: `float`-like `Tensor` of the Csiszar-function evaluated at\n `u = exp(logu)`."} {"_id": "q_817", "text": "The Squared-Hellinger Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Squared-Hellinger Csiszar-function is:\n\n ```none\n f(u) = (sqrt(u) - 1)**2\n ```\n\n This Csiszar-function induces a symmetric f-Divergence, i.e.,\n `D_f[p, q] = D_f[q, p]`.\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n squared_hellinger_of_u: `float`-like `Tensor` of the Csiszar-function\n evaluated at `u = exp(logu)`."} {"_id": "q_818", "text": "The T-Power Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n When `self_normalized = True` the T-Power Csiszar-function is:\n\n ```none\n f(u) = s [ u**t - 1 - t(u - 1) ]\n s = { -1 0 < t < 1\n { +1 otherwise\n ```\n\n When `self_normalized = False` 
the `- t(u - 1)` term is omitted.\n\n This is similar to the `amari_alpha` Csiszar-function, with the associated\n divergence being the same up to factors depending only on `t`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n t: `Tensor` of same `dtype` as `logu` and broadcastable shape.\n self_normalized: Python `bool` indicating whether `f'(u=1)=0`.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n t_power_of_u: `float`-like `Tensor` of the Csiszar-function evaluated\n at `u = exp(logu)`."} {"_id": "q_819", "text": "The log1p-abs Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Log1p-Abs Csiszar-function is:\n\n ```none\n f(u) = u**(sign(u-1)) - 1\n ```\n\n This function is so-named because it was invented from the following recipe.\n Choose a convex function g such that g(0)=0 and solve for f:\n\n ```none\n log(1 + f(u)) = g(log(u)).\n <=>\n f(u) = exp(g(log(u))) - 1\n ```\n\n That is, the graph is identically `g` when y-axis is `log1p`-domain and x-axis\n is `log`-domain.\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n log1p_abs_of_u: `float`-like `Tensor` of the Csiszar-function evaluated\n at `u = exp(logu)`."} {"_id": "q_820", "text": "The Jeffreys Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Jeffreys Csiszar-function is:\n\n ```none\n f(u) = 0.5 ( u log(u) - log(u) )\n = 0.5 kl_forward + 0.5 kl_reverse\n = symmetrized_csiszar_function(kl_reverse)\n = symmetrized_csiszar_function(kl_forward)\n ```\n\n This Csiszar-function induces a symmetric f-Divergence, i.e.,\n `D_f[p, q] = D_f[q, p]`.\n\n Warning: this 
function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n jeffreys_of_u: `float`-like `Tensor` of the Csiszar-function evaluated\n at `u = exp(logu)`."} {"_id": "q_821", "text": "The Modified-GAN Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n When `self_normalized = True` the modified-GAN (Generative/Adversarial\n Network) Csiszar-function is:\n\n ```none\n f(u) = log(1 + u) - log(u) + 0.5 (u - 1)\n ```\n\n When `self_normalized = False` the `0.5 (u - 1)` is omitted.\n\n The unmodified GAN Csiszar-function is identical to Jensen-Shannon (with\n `self_normalized = False`).\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n self_normalized: Python `bool` indicating whether `f'(u=1)=0`. 
When\n `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even\n when `p, q` are unnormalized measures.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n modified_gan_of_u: `float`-like `Tensor` of the Csiszar-function evaluated\n at `u = exp(logu)`."} {"_id": "q_822", "text": "Calculates the dual Csiszar-function in log-space.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Csiszar-dual is defined as:\n\n ```none\n f^*(u) = u f(1 / u)\n ```\n\n where `f` is some other Csiszar-function.\n\n For example, the dual of `kl_reverse` is `kl_forward`, i.e.,\n\n ```none\n f(u) = -log(u)\n f^*(u) = u f(1 / u) = -u log(1 / u) = u log(u)\n ```\n\n The dual of the dual is the original function:\n\n ```none\n f^**(u) = {u f(1/u)}^*(u) = u (1/u) f(1/(1/u)) = f(u)\n ```\n\n Warning: this function makes non-log-space calculations and may therefore be\n numerically unstable for `|logu| >> 0`.\n\n Args:\n logu: `float`-like `Tensor` representing `log(u)` from above.\n csiszar_function: Python `callable` representing a Csiszar-function over\n log-domain.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n dual_f_of_u: `float`-like `Tensor` of the result of calculating the dual of\n `f` at `u = exp(logu)`."} {"_id": "q_823", "text": "Monte-Carlo approximation of the Csiszar f-Divergence.\n\n A Csiszar-function is a member of,\n\n ```none\n F = { f:R_+ to R : f convex }.\n ```\n\n The Csiszar f-Divergence for Csiszar-function f is given by:\n\n ```none\n D_f[p(X), q(X)] := E_{q(X)}[ f( p(X) / q(X) ) ]\n ~= m**-1 sum_j^m f( p(x_j) / q(x_j) ),\n where x_j ~iid q(X)\n ```\n\n Tricks: Reparameterization and Score-Gradient\n\n When q is \"reparameterized\", i.e., a diffeomorphic transformation of a\n parameterless distribution (e.g.,\n `Normal(Y; m, s) <=> Y = sX + m, X ~ Normal(0,1)`), we can swap gradient and\n expectation, i.e.,\n `grad[Avg{ s_i : i=1...n }] = 
Avg{ grad[s_i] : i=1...n }` where `S_n=Avg{s_i}`\n and `s_i = f(x_i), x_i ~iid q(X)`.\n\n However, if q is not reparameterized, TensorFlow's gradient will be incorrect\n since the chain-rule stops at samples of unreparameterized distributions. In\n this circumstance using the Score-Gradient trick results in an unbiased\n gradient, i.e.,\n\n ```none\n grad[ E_q[f(X)] ]\n = grad[ int dx q(x) f(x) ]\n = int dx grad[ q(x) f(x) ]\n = int dx [ q'(x) f(x) + q(x) f'(x) ]\n = int dx q(x) [q'(x) / q(x) f(x) + f'(x) ]\n = int dx q(x) grad[ f(x) q(x) / stop_grad[q(x)] ]\n = E_q[ grad[ f(x) q(x) / stop_grad[q(x)] ] ]\n ```\n\n When `q.reparameterization_type == tfd.FULLY_REPARAMETERIZED` it is\n usually preferable to set `use_reparametrization = True`.\n\n Example Application:\n\n The Csiszar f-Divergence is a useful framework for variational inference.\n I.e., observe that,\n\n ```none\n f(p(x)) = f( E_{q(Z | x)}[ p(x, Z) / q(Z | x) ] )\n <= E_{q(Z | x)}[ f( p(x, Z) / q(Z | x) ) ]\n := D_f[p(x, Z), q(Z | x)]\n ```\n\n The inequality follows from the fact that the \"perspective\" of `f`, i.e.,\n `(s, t) |-> t f(s / t)`, is convex in `(s, t)` when `s/t in domain(f)` and\n `t` is positive. 
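The Monte-Carlo estimator `D_f[p, q] ~= m**-1 sum_j f(p(x_j) / q(x_j))` above can be illustrated with a toy discrete pair `p, q` in plain Python. This is a hypothetical sketch, not the tfp API (the gradient and score-function machinery is omitted); with `f(u) = -log(u)` (the `kl_reverse` function), the estimate approaches `KL[q, p]`.

```python
import math
import random

def kl_reverse(logu):
    """Csiszar-function f(u) = -log(u), applied to log(u)."""
    return -logu

def monte_carlo_f_divergence(f, p, q, num_draws, seed=0):
    """D_f[p, q] ~= m**-1 sum_j f(p(x_j) / q(x_j)), with x_j ~iid q.

    Here `p` and `q` are dicts mapping outcomes to probabilities: a toy
    stand-in for the `tf.Distribution` instances the real API expects."""
    rng = random.Random(seed)
    outcomes = list(q)
    draws = rng.choices(outcomes, weights=[q[x] for x in outcomes],
                        k=num_draws)
    return sum(f(math.log(p[x] / q[x])) for x in draws) / num_draws

q = {0: 0.5, 1: 0.3, 2: 0.2}
p = {0: 0.2, 1: 0.3, 2: 0.5}
estimate = monte_carlo_f_divergence(kl_reverse, p, q, num_draws=200000)
exact = sum(q[x] * math.log(q[x] / p[x]) for x in q)  # KL[q, p] ~= 0.2749
# estimate agrees with `exact` up to Monte-Carlo error
```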
Since the above framework includes the popular Evidence Lower\n BOund (ELBO) as a special case, i.e., `f(u) = -log(u)`, we call this framework\n \"Evidence Divergence Bound Optimization\" (EDBO).\n\n Args:\n f: Python `callable` representing a Csiszar-function in log-space, i.e.,\n takes `p_log_prob(q_samples) - q.log_prob(q_samples)`.\n p_log_prob: Python `callable` taking (a batch of) samples from `q` and\n returning the natural-log of the probability under distribution `p`.\n (In variational inference `p` is the joint distribution.)\n q: `tf.Distribution`-like instance; must implement:\n `reparameterization_type`, `sample(n, seed)`, and `log_prob(x)`.\n (In variational inference `q` is the approximate posterior distribution.)\n num_draws: Integer scalar number of draws used to approximate the\n f-Divergence expectation.\n use_reparametrization: Python `bool`. When `None` (the default),\n automatically set to:\n `q.reparameterization_type == tfd.FULLY_REPARAMETERIZED`.\n When `True` uses the standard Monte-Carlo average. When `False` uses the\n score-gradient trick. (See above for details.) When `False`, consider\n using `csiszar_vimco`.\n seed: Python `int` seed for `q.sample`.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n monte_carlo_csiszar_f_divergence: `float`-like `Tensor` Monte Carlo\n approximation of the Csiszar f-Divergence.\n\n Raises:\n ValueError: if `q` is not a reparameterized distribution and\n `use_reparametrization = True`. A distribution `q` is said to be\n \"reparameterized\" when its samples are generated by transforming the\n samples of another distribution which does not depend on the\n parameterization of `q`. 
This property ensures the gradient (with respect\n to parameters) is valid.\n TypeError: if `p_log_prob` is not a Python `callable`."} {"_id": "q_824", "text": "Helper to `csiszar_vimco`; computes `log_avg_u`, `log_sooavg_u`.\n\n `axis = 0` of `logu` is presumed to correspond to iid samples from `q`, i.e.,\n\n ```none\n logu[j] = log(u[j])\n u[j] = p(x, h[j]) / q(h[j] | x)\n h[j] iid~ q(H | x)\n ```\n\n Args:\n logu: Floating-type `Tensor` representing `log(p(x, h) / q(h | x))`.\n name: Python `str` name prefixed to Ops created by this function.\n\n Returns:\n log_avg_u: `logu.dtype` `Tensor` corresponding to the natural-log of the\n average of `u`. The sum of the gradient of `log_avg_u` is `1`.\n log_sooavg_u: `logu.dtype` `Tensor` characterized by the natural-log of the\n average of `u`, except that the average swaps-out `u[i]` for the\n leave-`i`-out Geometric-average. The mean of the gradient of\n `log_sooavg_u` is `1`. Mathematically `log_sooavg_u` is,\n ```none\n log_sooavg_u[i] = log(Avg{h[j ; i] : j=0, ..., m-1})\n h[j ; i] = { u[j] j!=i\n { GeometricAverage{u[k] : k != i} j==i\n ```"} {"_id": "q_825", "text": "Like batch_gather, but broadcasts to the left of axis."} {"_id": "q_826", "text": "Broadcasts the event or distribution parameters."} {"_id": "q_827", "text": "Importance sampling with a positive function, in log-space.\n\n With \\\\(p(z) := exp{log_p(z)}\\\\), and \\\\(f(z) = exp{log_f(z)}\\\\),\n this `Op` returns\n\n \\\\(Log[ n^{-1} sum_{i=1}^n [ f(z_i) p(z_i) / q(z_i) ] ], z_i ~ q,\\\\)\n \\\\(\\approx Log[ E_q[ f(Z) p(Z) / q(Z) ] ]\\\\)\n \\\\(= Log[E_p[f(Z)]]\\\\)\n\n This integral is done in log-space with max-subtraction to better handle the\n often extreme values that `f(z) p(z) / q(z)` can take on.\n\n In contrast to `expectation_importance_sampler`, this `Op` returns values in\n log-space.\n\n\n User supplies either `Tensor` of samples `z`, or number of samples to draw `n`.\n\n Args:\n log_f: Callable mapping samples from 
`sampling_dist_q` to `Tensors` with\n shape broadcastable to `q.batch_shape`.\n For example, `log_f` works \"just like\" `sampling_dist_q.log_prob`.\n log_p: Callable mapping samples from `sampling_dist_q` to `Tensors` with\n shape broadcastable to `q.batch_shape`.\n For example, `log_p` works \"just like\" `q.log_prob`.\n sampling_dist_q: The sampling distribution.\n `tfp.distributions.Distribution`.\n `float64` `dtype` recommended.\n `log_p` and `q` should be supported on the same set.\n z: `Tensor` of samples from `q`, produced by `q.sample` for some `n`.\n n: Integer `Tensor`. Number of samples to generate if `z` is not provided.\n seed: Python integer to seed the random number generator.\n name: A name to give this `Op`.\n\n Returns:\n Logarithm of the importance sampling estimate. `Tensor` with `shape` equal\n to batch shape of `q`, and `dtype` = `q.dtype`."} {"_id": "q_828", "text": "Broadcasts the event or samples."} {"_id": "q_829", "text": "Applies the BFGS algorithm to minimize a differentiable function.\n\n Performs unconstrained minimization of a differentiable function using the\n BFGS scheme. 
For details of the algorithm, see [Nocedal and Wright(2006)][1].\n\n ### Usage:\n\n The following example demonstrates the BFGS optimizer attempting to find the\n minimum for a simple two dimensional quadratic objective function.\n\n ```python\n minimum = np.array([1.0, 1.0]) # The center of the quadratic bowl.\n scales = np.array([2.0, 3.0]) # The scales along the two axes.\n\n # The objective function and the gradient.\n def quadratic(x):\n value = tf.reduce_sum(scales * (x - minimum) ** 2)\n return value, tf.gradients(value, x)[0]\n\n start = tf.constant([0.6, 0.8]) # Starting point for the search.\n optim_results = tfp.optimizer.bfgs_minimize(\n quadratic, initial_position=start, tolerance=1e-8)\n\n with tf.Session() as session:\n results = session.run(optim_results)\n # Check that the search converged\n assert(results.converged)\n # Check that the argmin is close to the actual value.\n np.testing.assert_allclose(results.position, minimum)\n # Print out the total number of function evaluations it took. Should be 6.\n print (\"Function evaluations: %d\" % results.num_objective_evaluations)\n ```\n\n ### References:\n [1]: Jorge Nocedal, Stephen Wright. Numerical Optimization. Springer Series in\n Operations Research. pp 136-140. 2006\n http://pages.mtu.edu/~struther/Courses/OLD/Sp2013/5630/Jorge_Nocedal_Numerical_optimization_267490.pdf\n\n Args:\n value_and_gradients_function: A Python callable that accepts a point as a\n real `Tensor` and returns a tuple of `Tensor`s of real dtype containing\n the value of the function and its gradient at that point. The function\n to be minimized. The input should be of shape `[..., n]`, where `n` is\n the size of the domain of input points, and all others are batching\n dimensions. The first component of the return value should be a real\n `Tensor` of matching shape `[...]`. 
The second component (the gradient)\n should also be of shape `[..., n]` like the input value to the function.\n initial_position: real `Tensor` of shape `[..., n]`. The starting point, or\n points when using batching dimensions, of the search procedure. At these\n points the function value and the gradient norm should be finite.\n tolerance: Scalar `Tensor` of real dtype. Specifies the gradient tolerance\n for the procedure. If the supremum norm of the gradient vector is below\n this number, the algorithm is stopped.\n x_tolerance: Scalar `Tensor` of real dtype. If the absolute change in the\n position between one iteration and the next is smaller than this number,\n the algorithm is stopped.\n f_relative_tolerance: Scalar `Tensor` of real dtype. If the relative change\n in the objective value between one iteration and the next is smaller\n than this value, the algorithm is stopped.\n initial_inverse_hessian_estimate: Optional `Tensor` of the same dtype\n as the components of the output of the `value_and_gradients_function`.\n If specified, the shape should be broadcastable to shape `[..., n, n]`; e.g.\n if a single `[n, n]` matrix is provided, it will be automatically\n broadcasted to all batches. Alternatively, one can also specify a\n different Hessian estimate for each batch member.\n For the correctness of the algorithm, it is required that this parameter\n be symmetric and positive definite. Specifies the starting estimate for\n the inverse of the Hessian at the initial point. If not specified,\n the identity matrix is used as the starting estimate for the\n inverse Hessian.\n max_iterations: Scalar positive int32 `Tensor`. The maximum number of\n iterations for BFGS updates.\n parallel_iterations: Positive integer. 
The number of iterations allowed to\n run in parallel.\n stopping_condition: (Optional) A Python function that takes as input two\n Boolean tensors of shape `[...]`, and returns a Boolean scalar tensor.\n The input tensors are `converged` and `failed`, indicating the current\n status of each respective batch member; the return value states whether\n the algorithm should stop. The default is tfp.optimizer.converged_all\n which only stops when all batch members have either converged or failed.\n An alternative is tfp.optimizer.converged_any which stops as soon as one\n batch member has converged, or when all have failed.\n name: (Optional) Python str. The name prefixed to the ops created by this\n function. If not supplied, the default name 'minimize' is used.\n\n Returns:\n optimizer_results: A namedtuple containing the following items:\n converged: boolean tensor of shape `[...]` indicating for each batch\n member whether the minimum was found within tolerance.\n failed: boolean tensor of shape `[...]` indicating for each batch\n member whether a line search step failed to find a suitable step size\n satisfying Wolfe conditions. In the absence of any constraints on the\n number of objective evaluations permitted, this value will\n be the complement of `converged`. However, if there is\n a constraint and the search stopped due to available\n evaluations being exhausted, both `failed` and `converged`\n will be simultaneously False.\n num_objective_evaluations: The total number of objective\n evaluations performed.\n position: A tensor of shape `[..., n]` containing the last argument value\n found during the search from each starting point. If the search\n converged, then this value is the argmin of the objective function.\n objective_value: A tensor of shape `[...]` with the value of the\n objective function at the `position`. 
If the search converged, then\n this is the (local) minimum of the objective function.\n objective_gradient: A tensor of shape `[..., n]` containing the gradient\n of the objective function at the `position`. If the search converged\n the max-norm of this tensor should be below the tolerance.\n inverse_hessian_estimate: A tensor of shape `[..., n, n]` containing the\n inverse of the estimated Hessian."} {"_id": "q_830", "text": "Computes control inputs to validate a provided inverse Hessian.\n\n These ensure that the provided inverse Hessian is positive definite and\n symmetric.\n\n Args:\n inv_hessian: The starting estimate for the inverse of the Hessian at the\n initial point.\n\n Returns:\n A list of tf.Assert ops suitable for use with tf.control_dependencies."} {"_id": "q_831", "text": "Update the BFGS state by computing the next inverse Hessian estimate."} {"_id": "q_832", "text": "Applies the BFGS update to the inverse Hessian estimate.\n\n The BFGS update rule is (note A^T denotes the transpose of a vector/matrix A):\n\n ```None\n rho = 1/(grad_delta^T * position_delta)\n U = (I - rho * position_delta * grad_delta^T)\n H_1 = U * H_0 * U^T + rho * position_delta * position_delta^T\n ```\n\n Here, `H_0` is the inverse Hessian estimate at the previous iteration and\n `H_1` is the next estimate. Note that `*` should be interpreted as\n matrix multiplication (with the understanding that matrix multiplication for\n scalars is the usual multiplication, and for a matrix with a vector it is the\n action of the matrix on the vector).\n\n The implementation below utilizes an expanded version of the above formula\n to avoid the matrix multiplications that would be needed otherwise. By\n expansion it is easy to see that one only needs matrix-vector or\n vector-vector operations. 
The expanded version is:\n\n ```None\n f = 1 + rho * (grad_delta^T * H_0 * grad_delta)\n H_1 - H_0 = - rho * [position_delta * (H_0 * grad_delta)^T +\n (H_0 * grad_delta) * position_delta^T] +\n rho * f * [position_delta * position_delta^T]\n ```\n\n All the terms in square brackets are matrices and are constructed using\n vector outer products. All the other terms on the right hand side are scalars.\n Also worth noting that the first and second lines are both rank 1 updates\n applied to the current inverse Hessian estimate.\n\n Args:\n grad_delta: Real `Tensor` of shape `[..., n]`. The difference between the\n gradient at the new position and the old position.\n position_delta: Real `Tensor` of shape `[..., n]`. The change in position\n from the previous iteration to the current one.\n normalization_factor: Real `Tensor` of shape `[...]`. Should be equal to\n `grad_delta^T * position_delta`, i.e. `1/rho` as defined above.\n inv_hessian_estimate: Real `Tensor` of shape `[..., n, n]`. The previous\n estimate of the inverse Hessian. Should be positive definite and\n symmetric.\n\n Returns:\n A tuple containing the following fields\n is_valid: A Boolean `Tensor` of shape `[...]` indicating batch members\n where the update succeeded. The update can fail if the position change\n becomes orthogonal to the gradient change.\n next_inv_hessian_estimate: A `Tensor` of shape `[..., n, n]`. The next\n Hessian estimate updated using the BFGS update scheme. 
If the\n `inv_hessian_estimate` is symmetric and positive definite, the\n `next_inv_hessian_estimate` is guaranteed to satisfy the same\n conditions."} {"_id": "q_833", "text": "Transpose a possibly batched matrix.\n\n Args:\n mat: A `tf.Tensor` of shape `[..., n, m]`.\n\n Returns:\n A tensor of shape `[..., m, n]` with matching batch dimensions."} {"_id": "q_834", "text": "Maybe add `ndims` ones to `x.shape` on the right.\n\n If `ndims` is zero, this is a no-op; otherwise, we will create and return a\n new `Tensor` whose shape is that of `x` with `ndims` ones concatenated on the\n right side. If the shape of `x` is known statically, the shape of the return\n value will be as well.\n\n Args:\n x: The `Tensor` we'll return a reshaping of.\n ndims: Python `integer` number of ones to pad onto `x.shape`.\n Returns:\n If `ndims` is zero, `x`; otherwise, a `Tensor` whose shape is that of `x`\n with `ndims` ones concatenated on the right side. If possible, returns a\n `Tensor` whose shape is known statically.\n Raises:\n ValueError: if `ndims` is not a Python `integer` greater than or equal to\n zero."} {"_id": "q_835", "text": "Return `Tensor` with right-most ndims summed.\n\n Args:\n x: the `Tensor` whose right-most `ndims` dimensions to sum\n ndims: number of right-most dimensions to sum.\n\n Returns:\n A `Tensor` resulting from calling `reduce_sum` on the `ndims` right-most\n dimensions. If the shape of `x` is statically known, the result will also\n have statically known shape. 
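The expanded BFGS inverse-Hessian update quoted in the docstring above can be sanity-checked against the compact `U * H_0 * U^T + rho * position_delta * position_delta^T` form. A minimal NumPy sketch, not TFP's batched implementation (the function name `bfgs_inverse_hessian_update` is illustrative):

```python
import numpy as np

def bfgs_inverse_hessian_update(h0, position_delta, grad_delta):
    """One BFGS update of a dense inverse-Hessian estimate.

    Uses the expanded, outer-product-only form from the docstring:
      f = 1 + rho * (y^T H_0 y)
      H_1 = H_0 - rho * [s (H_0 y)^T + (H_0 y) s^T] + rho * f * [s s^T]
    where s = position_delta, y = grad_delta, rho = 1 / (y^T s).
    Requires h0 symmetric (so y^T H_0 = (H_0 y)^T).
    """
    s, y = position_delta, grad_delta
    rho = 1.0 / np.dot(y, s)
    h0_y = h0 @ y                      # H_0 * grad_delta (matrix-vector)
    f = 1.0 + rho * np.dot(y, h0_y)    # scalar correction factor
    # Only vector outer products; no matrix-matrix multiplication needed.
    return (h0
            - rho * (np.outer(s, h0_y) + np.outer(h0_y, s))
            + rho * f * np.outer(s, s))
```

Expanding `U H_0 U^T + rho s s^T` with `U = I - rho s y^T` reproduces exactly these terms, which is what the assertion below checks.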
Otherwise, the resulting shape will only be\n known at runtime."} {"_id": "q_836", "text": "A sqrt function whose gradient at zero is very large but finite.\n\n Args:\n x: a `Tensor` whose sqrt is to be computed.\n name: a Python `str` prefixed to all ops created by this function.\n Default `None` (i.e., \"sqrt_with_finite_grads\").\n\n Returns:\n sqrt: the square root of `x`, with an overridden gradient at zero\n grad: a gradient function, which is the same as sqrt's gradient everywhere\n except at zero, where it is given a large finite value, instead of `inf`.\n\n Raises:\n TypeError: if `tf.convert_to_tensor(x)` is not a `float` type.\n\n Often in kernel functions, we need to compute the L2 norm of the difference\n between two vectors, `x` and `y`: `sqrt(sum_i((x_i - y_i) ** 2))`. In the\n case where `x` and `y` are identical, e.g., on the diagonal of a kernel\n matrix, we get `NaN`s when we take gradients with respect to the inputs. To\n see this, consider the forward pass:\n\n ```\n [x_1 ... x_N] --> [x_1 ** 2 ... x_N ** 2] -->\n (x_1 ** 2 + ... + x_N ** 2) --> sqrt((x_1 ** 2 + ... + x_N ** 2))\n ```\n\n When we backprop through this forward pass, the `sqrt` yields an `inf` because\n `grad_z(sqrt(z)) = 1 / (2 * sqrt(z))`. Continuing the backprop to the left, at\n the `x ** 2` term, we pick up a `2 * x`, and when `x` is zero, we get\n `0 * inf`, which is `NaN`.\n\n We'd like to avoid these `NaN`s, since they infect the rest of the connected\n computation graph. Practically, when two inputs to a kernel function are\n equal, we are in one of two scenarios:\n 1. We are actually computing k(x, x), in which case norm(x - x) is\n identically zero, independent of x. In this case, we'd like the\n gradient to reflect this independence: it should be zero.\n 2. We are computing k(x, y), and x just *happens* to have the same value\n as y. The gradient at such inputs is in fact ill-defined (there is a\n cusp in the sqrt((x - y) ** 2) surface along the line x = y). 
There are,\n however, an infinite number of sub-gradients, all of which are valid at\n all such inputs. By symmetry, there is exactly one which is \"special\":\n zero, and we elect to use that value here. In practice, having two\n identical inputs to a kernel matrix is probably a pathological\n situation to be avoided, but that is better resolved at a higher level\n than this.\n\n To avoid the infinite gradient at zero, we use tf.custom_gradient to redefine\n the gradient at zero. We assign it to be a very large value, specifically\n the sqrt of the max value of the floating point dtype of the input. We use\n the sqrt (as opposed to just using the max floating point value) to avoid\n potential overflow when combining this value with others downstream."} {"_id": "q_837", "text": "Applies the L-BFGS algorithm to minimize a differentiable function.\n\n Performs unconstrained minimization of a differentiable function using the\n L-BFGS scheme. See [Nocedal and Wright(2006)][1] for details of the algorithm.\n\n ### Usage:\n\n The following example demonstrates the L-BFGS optimizer attempting to find the\n minimum for a simple high-dimensional quadratic objective function.\n\n ```python\n # A high-dimensional quadratic bowl.\n ndims = 60\n minimum = np.ones([ndims], dtype='float64')\n scales = np.arange(ndims, dtype='float64') + 1.0\n\n # The objective function and the gradient.\n def quadratic(x):\n value = tf.reduce_sum(scales * (x - minimum) ** 2)\n return value, tf.gradients(value, x)[0]\n\n start = np.arange(ndims, 0, -1, dtype='float64')\n optim_results = tfp.optimizer.lbfgs_minimize(\n quadratic, initial_position=start, num_correction_pairs=10,\n tolerance=1e-8)\n\n with tf.Session() as session:\n results = session.run(optim_results)\n # Check that the search converged\n assert(results.converged)\n # Check that the argmin is close to the actual value.\n np.testing.assert_allclose(results.position, minimum)\n ```\n\n ### References:\n\n [1] Jorge Nocedal, Stephen 
Wright. Numerical Optimization. Springer Series\n in Operations Research. pp 176-180. 2006\n\n http://pages.mtu.edu/~struther/Courses/OLD/Sp2013/5630/Jorge_Nocedal_Numerical_optimization_267490.pdf\n\n Args:\n value_and_gradients_function: A Python callable that accepts a point as a\n real `Tensor` and returns a tuple of `Tensor`s of real dtype containing\n the value of the function and its gradient at that point. The function\n to be minimized. The input is of shape `[..., n]`, where `n` is the size\n of the domain of input points, and all others are batching dimensions.\n The first component of the return value is a real `Tensor` of matching\n shape `[...]`. The second component (the gradient) is also of shape\n `[..., n]` like the input value to the function.\n initial_position: Real `Tensor` of shape `[..., n]`. The starting point, or\n points when using batching dimensions, of the search procedure. At these\n points the function value and the gradient norm should be finite.\n num_correction_pairs: Positive integer. Specifies the maximum number of\n (position_delta, gradient_delta) correction pairs to keep as implicit\n approximation of the Hessian matrix.\n tolerance: Scalar `Tensor` of real dtype. Specifies the gradient tolerance\n for the procedure. If the supremum norm of the gradient vector is below\n this number, the algorithm is stopped.\n x_tolerance: Scalar `Tensor` of real dtype. If the absolute change in the\n position between one iteration and the next is smaller than this number,\n the algorithm is stopped.\n f_relative_tolerance: Scalar `Tensor` of real dtype. If the relative change\n in the objective value between one iteration and the next is smaller\n than this value, the algorithm is stopped.\n initial_inverse_hessian_estimate: None. Option currently not supported.\n max_iterations: Scalar positive int32 `Tensor`. The maximum number of\n iterations for L-BFGS updates.\n parallel_iterations: Positive integer. 
The number of iterations allowed to\n run in parallel.\n stopping_condition: (Optional) A Python function that takes as input two\n Boolean tensors of shape `[...]`, and returns a Boolean scalar tensor.\n The input tensors are `converged` and `failed`, indicating the current\n status of each respective batch member; the return value states whether\n the algorithm should stop. The default is tfp.optimizer.converged_all\n which only stops when all batch members have either converged or failed.\n An alternative is tfp.optimizer.converged_any which stops as soon as one\n batch member has converged, or when all have failed.\n name: (Optional) Python str. The name prefixed to the ops created by this\n function. If not supplied, the default name 'minimize' is used.\n\n Returns:\n optimizer_results: A namedtuple containing the following items:\n converged: Scalar boolean tensor indicating whether the minimum was\n found within tolerance.\n failed: Scalar boolean tensor indicating whether a line search\n step failed to find a suitable step size satisfying Wolfe\n conditions. In the absence of any constraints on the\n number of objective evaluations permitted, this value will\n be the complement of `converged`. However, if there is\n a constraint and the search stopped due to available\n evaluations being exhausted, both `failed` and `converged`\n will be simultaneously False.\n num_objective_evaluations: The total number of objective\n evaluations performed.\n position: A tensor containing the last argument value found\n during the search. If the search converged, then\n this value is the argmin of the objective function.\n objective_value: A tensor containing the value of the objective\n function at the `position`. If the search converged, then this is\n the (local) minimum of the objective function.\n objective_gradient: A tensor containing the gradient of the objective\n function at the `position`. 
If the search converged the\n max-norm of this tensor should be below the tolerance.\n position_deltas: A tensor encoding information about the latest\n changes in `position` during the algorithm execution.\n gradient_deltas: A tensor encoding information about the latest\n changes in `objective_gradient` during the algorithm execution."} {"_id": "q_838", "text": "Create LBfgsOptimizerResults with initial state of search procedure."} {"_id": "q_839", "text": "Computes the search direction to follow at the current state.\n\n On the `k`-th iteration of the main L-BFGS algorithm, the state has collected\n the most recent `m` correction pairs in position_deltas and gradient_deltas,\n where `k = state.num_iterations` and `m = min(k, num_correction_pairs)`.\n\n Assuming these, the code below is an implementation of the L-BFGS two-loop\n recursion algorithm given by [Nocedal and Wright(2006)][1]:\n\n ```None\n q_direction = objective_gradient\n for i in reversed(range(m)): # First loop.\n inv_rho[i] = gradient_deltas[i]^T * position_deltas[i]\n alpha[i] = position_deltas[i]^T * q_direction / inv_rho[i]\n q_direction = q_direction - alpha[i] * gradient_deltas[i]\n\n kth_inv_hessian_factor = (gradient_deltas[-1]^T * position_deltas[-1] /\n gradient_deltas[-1]^T * gradient_deltas[-1])\n r_direction = kth_inv_hessian_factor * I * q_direction\n\n for i in range(m): # Second loop.\n beta = gradient_deltas[i]^T * r_direction / inv_rho[i]\n r_direction = r_direction + position_deltas[i] * (alpha[i] - beta)\n\n return -r_direction # Approximates - H_k * objective_gradient.\n ```\n\n Args:\n state: A `LBfgsOptimizerResults` tuple with the current state of the\n search procedure.\n\n Returns:\n A real `Tensor` of the same shape as the `state.position`. 
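The two-loop recursion pseudocode above translates almost line-for-line into NumPy. A hedged sketch (`two_loop_recursion` is an illustrative name; TFP's actual implementation operates on batched tensors and a fixed-size queue of correction pairs):

```python
import numpy as np

def two_loop_recursion(gradient, position_deltas, gradient_deltas):
    """L-BFGS two-loop recursion: returns -H_k * gradient.

    position_deltas / gradient_deltas are lists of the m most recent
    correction pairs (oldest first); H_k is the implicit inverse-Hessian
    approximation they define.
    """
    m = len(position_deltas)
    q = np.array(gradient, dtype=float)
    inv_rho = np.empty(m)
    alpha = np.empty(m)
    for i in reversed(range(m)):           # First loop.
        inv_rho[i] = np.dot(gradient_deltas[i], position_deltas[i])
        alpha[i] = np.dot(position_deltas[i], q) / inv_rho[i]
        q = q - alpha[i] * gradient_deltas[i]
    if m > 0:                              # Initial Hessian scaling gamma_k.
        gamma = (np.dot(gradient_deltas[-1], position_deltas[-1]) /
                 np.dot(gradient_deltas[-1], gradient_deltas[-1]))
    else:
        gamma = 1.0                        # No curvature information yet.
    r = gamma * q
    for i in range(m):                     # Second loop.
        beta = np.dot(gradient_deltas[i], r) / inv_rho[i]
        r = r + position_deltas[i] * (alpha[i] - beta)
    return -r
```

With no correction pairs this reduces to plain steepest descent, `-gradient`; with a single pair `s = y` the implied inverse Hessian acts as the identity, so the direction is again `-gradient`.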
The direction\n along which to perform line search."} {"_id": "q_840", "text": "Creates a `tf.Tensor` suitable to hold `k` element-shaped tensors.\n\n For example:\n\n ```python\n element = tf.constant([[0., 1., 2., 3., 4.],\n [5., 6., 7., 8., 9.]])\n\n # A queue capable of holding 3 elements.\n _make_empty_queue_for(3, element)\n # => [[[ 0., 0., 0., 0., 0.],\n # [ 0., 0., 0., 0., 0.]],\n #\n # [[ 0., 0., 0., 0., 0.],\n # [ 0., 0., 0., 0., 0.]],\n #\n # [[ 0., 0., 0., 0., 0.],\n # [ 0., 0., 0., 0., 0.]]]\n ```\n\n Args:\n k: A positive scalar integer, number of elements that each queue will hold.\n element: A `tf.Tensor`, only its shape and dtype information are relevant.\n\n Returns:\n A zero-filled `tf.Tensor` of shape `(k,) + tf.shape(element)` and same dtype\n as `element`."} {"_id": "q_841", "text": "Computes whether each square matrix in the input is positive semi-definite.\n\n Args:\n x: A floating-point `Tensor` of shape `[B1, ..., Bn, M, M]`.\n\n Returns:\n mask: A floating-point `Tensor` of shape `[B1, ..., Bn]`. Each\n scalar is 1 if the corresponding matrix was PSD, otherwise 0."} {"_id": "q_842", "text": "Returns rejection samples from trying to get good correlation matrices.\n\n The proposal being rejected from is the uniform distribution on\n \"correlation-like\" matrices. We say a matrix is \"correlation-like\"\n if it is a symmetric square matrix with all entries between -1 and 1\n (inclusive) and 1s on the main diagonal. Of these, the ones that\n are positive semi-definite are exactly the correlation matrices.\n\n The rejection algorithm, then, is to sample a `Tensor` of\n `sample_shape` correlation-like matrices of dimensions `dim` by\n `dim`, and check each one for (i) being a correlation matrix (i.e.,\n PSD), and (ii) having determinant at least the corresponding entry\n of `det_bounds`.\n\n Args:\n det_bounds: A `Tensor` of lower bounds on the determinants of\n acceptable matrices. 
The shape must broadcast with `sample_shape`.\n dim: A Python `int` dimension of correlation matrices to sample.\n sample_shape: Python `tuple` of `int` shape of the samples to\n compute, excluding the two matrix dimensions.\n dtype: The `dtype` in which to do the computation.\n seed: Random seed.\n\n Returns:\n weights: A `Tensor` of shape `sample_shape`. Each entry is 0 if the\n corresponding matrix was not a correlation matrix, or had too\n small of a determinant. Otherwise, the entry is the\n multiplicative inverse of the density of proposing that matrix\n uniformly, i.e., the volume of the set of `dim` by `dim`\n correlation-like matrices.\n volume: The volume of the set of `dim` by `dim` correlation-like\n matrices."} {"_id": "q_843", "text": "Computes a confidence interval for the mean of the given 1-D distribution.\n\n Assumes (and checks) that the given distribution is Bernoulli, i.e.,\n takes only two values. This licenses using the CDF of the binomial\n distribution for the confidence, which is tighter (for extreme\n probabilities) than the DKWM inequality. 
The method is known as the\n [Clopper-Pearson method]\n (https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval).\n\n Assumes:\n\n - The given samples were drawn iid from the distribution of interest.\n\n - The given distribution is a Bernoulli, i.e., supported only on\n low and high.\n\n Guarantees:\n\n - The probability (over the randomness of drawing the given sample)\n that the true mean is outside the returned interval is no more\n than the given error_rate.\n\n Args:\n samples: `np.ndarray` of samples drawn iid from the distribution\n of interest.\n error_rate: Python `float` admissible rate of mistakes.\n\n Returns:\n low: Lower bound of confidence interval.\n high: Upper bound of confidence interval.\n\n Raises:\n ValueError: If `samples` has rank other than 1 (batch semantics\n are not implemented), or if `samples` contains values other than\n `low` or `high` (as that makes the distribution not Bernoulli)."} {"_id": "q_844", "text": "Computes the von Mises CDF and its derivative via series expansion."} {"_id": "q_845", "text": "Computes the von Mises CDF and its derivative via Normal approximation."} {"_id": "q_846", "text": "Performs one step of the differential evolution algorithm.\n\n Args:\n objective_function: A Python callable that accepts a batch of possible\n solutions and returns the values of the objective function at those\n arguments as a rank 1 real `Tensor`. This specifies the function to be\n minimized. The input to this callable may be either a single `Tensor`\n or a Python `list` of `Tensor`s. The signature must match the format of\n the argument `population`. (i.e. objective_function(*population) must\n return the value of the function to be minimized).\n population: `Tensor` or Python `list` of `Tensor`s representing the\n current population vectors. 
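The Clopper-Pearson construction described in the confidence-interval docstring above inverts the binomial CDF at each endpoint. A pure-Python sketch under the stated Bernoulli assumption (`clopper_pearson` and its helpers are hypothetical names, not this module's API):

```python
from math import comb

def _binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), evaluated exactly."""
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k + 1))

def _solve(f, target, increasing):
    """Bisect for p in [0, 1] with f(p) == target; f monotone in p."""
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if (f(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def clopper_pearson(successes, trials, error_rate=0.05):
    """Exact (Clopper-Pearson) confidence interval for a Bernoulli mean."""
    half = error_rate / 2.0
    # low: the p at which seeing `successes` or more has probability `half`.
    low = 0.0 if successes == 0 else _solve(
        lambda p: 1.0 - _binom_cdf(successes - 1, trials, p), half, True)
    # high: the p at which seeing `successes` or fewer has probability `half`.
    high = 1.0 if successes == trials else _solve(
        lambda p: _binom_cdf(successes, trials, p), half, False)
    return low, high
```

By the symmetry of the binomial distribution, the interval for 5 successes in 10 trials is symmetric about 0.5, and for zero successes the upper endpoint has the closed form `1 - (error_rate / 2) ** (1 / trials)`.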
Each `Tensor` must be of the same real dtype.\n The first dimension indexes individual population members while the\n rest of the dimensions are consumed by the value function. For example,\n if the population is a single `Tensor` of shape [n, m1, m2], then `n` is\n the population size and the output of `objective_function` applied to the\n population is a `Tensor` of shape [n]. If the population is a Python\n list of `Tensor`s then each `Tensor` in the list should have the first\n axis of a common size, say `n`, and `objective_function(*population)`\n should return a `Tensor` of shape [n]. The population must have at least\n 4 members for the algorithm to work correctly.\n population_values: A `Tensor` of rank 1 and real dtype. The result of\n applying `objective_function` to the `population`. If not supplied it is\n computed using the `objective_function`.\n Default value: None.\n differential_weight: Real scalar `Tensor`. Must be positive and less than\n 2.0. The parameter controlling the strength of mutation.\n Default value: 0.5\n crossover_prob: Real scalar `Tensor`. Must be between 0 and 1. The\n probability of recombination per site.\n Default value: 0.9\n seed: `int` or None. The random seed for this `Op`. If `None`, no seed is\n applied.\n Default value: None.\n name: (Optional) Python str. The name prefixed to the ops created by this\n function. If not supplied, the default name 'one_step' is\n used.\n Default value: None\n\n Returns:\n A sequence containing the following elements (in order):\n next_population: A `Tensor` or Python `list` of `Tensor`s of the same\n structure as the input population. The population at the next generation.\n next_population_values: A `Tensor` of same shape and dtype as input\n `population_values`. 
The function values for the `next_population`."} {"_id": "q_847", "text": "Processes initial args."} {"_id": "q_848", "text": "Finds the population member with the lowest value."} {"_id": "q_849", "text": "Computes the mutated vectors for each population member.\n\n Args:\n population: Python `list` of `Tensor`s representing the\n current population vectors. Each `Tensor` must be of the same real dtype.\n The first dimension of each `Tensor` indexes individual\n population members. For example, if the population is a list with a\n single `Tensor` of shape [n, m1, m2], then `n` is the population size and\n the shape of an individual solution is [m1, m2].\n If there is more than one element in the population, then each `Tensor`\n in the list should have the first axis of the same size.\n population_size: Scalar integer `Tensor`. The size of the population.\n mixing_indices: `Tensor` of integral dtype and shape [n, 3] where `n` is the\n number of members in the population. Each element of the `Tensor` must be\n a valid index into the first dimension of the population (i.e. range\n between `0` and `n-1` inclusive).\n differential_weight: Real scalar `Tensor`. Must be positive and less than\n 2.0. The parameter controlling the strength of mutation.\n\n Returns:\n mutants: `Tensor` or Python `list` of `Tensor`s of the same shape and dtype\n as the input population. The mutated vectors."} {"_id": "q_850", "text": "Generates an array of indices suitable for the mutation operation.\n\n The mutation operation in differential evolution requires that for every\n element of the population, three distinct other elements be chosen to produce\n a trial candidate. This function generates an array of shape [size, 3]\n satisfying the properties that:\n (a). array[i, :] does not contain the index 'i'.\n (b). array[i, :] does not contain any overlapping indices.\n (c). All elements in the array are between 0 and size - 1 inclusive.\n\n Args:\n size: Scalar integer `Tensor`. 
The number of samples as well as the range\n of the indices to sample from.\n seed: `int` or None. The random seed for this `Op`. If `None`, no seed is\n applied.\n Default value: `None`.\n name: Python `str` name prefixed to Ops created by this function.\n Default value: 'get_mixing_indices'.\n\n Returns:\n sample: A `Tensor` of shape [size, 3] and same dtype as `size` containing\n samples without replacement between 0 and size - 1 (inclusive) with the\n `i`th row not including the number `i`."} {"_id": "q_851", "text": "Converts the input arg to a list if it is not a list already.\n\n Args:\n tensor_or_list: A `Tensor` or a Python list of `Tensor`s. The argument to\n convert to a list of `Tensor`s.\n\n Returns:\n A tuple of two elements. The first is a Python list of `Tensor`s containing\n the original arguments. The second is a boolean indicating whether\n the original argument was a list or tuple already."} {"_id": "q_852", "text": "Gets a Tensor of type `dtype`, 0 if `tol` is None, validation optional."} {"_id": "q_853", "text": "Soft Thresholding operator.\n\n This operator is defined by the equations\n\n ```none\n { x[i] - gamma, x[i] > gamma\n SoftThreshold(x, gamma)[i] = { 0, -gamma <= x[i] <= gamma\n { x[i] + gamma, x[i] < -gamma\n ```\n\n In the context of proximal gradient methods, we have\n\n ```none\n SoftThreshold(x, gamma) = prox_{gamma L1}(x)\n ```\n\n where `prox` is the proximity operator. Thus the soft thresholding operator\n is used in proximal gradient descent for optimizing a smooth function with\n (non-smooth) L1 regularization, as outlined below.\n\n The proximity operator is defined as:\n\n ```none\n prox_r(x) = argmin{ r(z) + 0.5 ||x - z||_2**2 : z },\n ```\n\n where `r` is a (weakly) convex function, not necessarily differentiable.\n Because the L2 norm is strictly convex, the above argmin is unique.\n\n One important application of the proximity operator is as follows. 
Let `L` be\n a convex and differentiable function with Lipschitz-continuous gradient. Let\n `R` be a convex lower semicontinuous function which is possibly\n nondifferentiable. Let `gamma` be an arbitrary positive real. Then\n\n ```none\n x_star = argmin{ L(x) + R(x) : x }\n ```\n\n if and only if the fixed-point equation is satisfied:\n\n ```none\n x_star = prox_{gamma R}(x_star - gamma grad L(x_star))\n ```\n\n Proximal gradient descent thus typically consists of choosing an initial value\n `x^{(0)}` and repeatedly applying the update\n\n ```none\n x^{(k+1)} = prox_{gamma^{(k)} R}(x^{(k)} - gamma^{(k)} grad L(x^{(k)}))\n ```\n\n where `gamma` is allowed to vary from iteration to iteration. Specializing to\n the case where `R(x) = ||x||_1`, we minimize `L(x) + ||x||_1` by repeatedly\n applying the update\n\n ```\n x^{(k+1)} = SoftThreshold(x^{(k)} - gamma grad L(x^{(k)}), gamma)\n ```\n\n (This idea can also be extended to second-order approximations, although the\n multivariate case does not have a known closed form like above.)\n\n Args:\n x: `float` `Tensor` representing the input to the SoftThreshold function.\n threshold: nonnegative scalar, `float` `Tensor` representing the radius of\n the interval on which each coordinate of SoftThreshold takes the value\n zero. Denoted `gamma` above.\n name: Python string indicating the name of the TensorFlow operation.\n Default value: `'soft_threshold'`.\n\n Returns:\n softthreshold: `float` `Tensor` with the same shape and dtype as `x`,\n representing the value of the SoftThreshold function.\n\n #### References\n\n [1]: Yu, Yao-Liang. The Proximity Operator.\n https://www.cs.cmu.edu/~suvrit/teach/yaoliang_proximity.pdf\n\n [2]: Wikipedia Contributors. 
Proximal gradient methods for learning.\n _Wikipedia, The Free Encyclopedia_, 2018.\n https://en.wikipedia.org/wiki/Proximal_gradient_methods_for_learning"} {"_id": "q_854", "text": "Clips values to a specified min and max while leaving gradient unaltered.\n\n Like `tf.clip_by_value`, this function returns a tensor of the same type and\n shape as input `t` but with values clamped to be no smaller than\n `clip_value_min` and no larger than `clip_value_max`. Unlike\n `tf.clip_by_value`, the gradient is unaffected by this op, i.e.,\n\n ```python\n tf.gradients(tfp.math.clip_by_value_preserve_gradient(x), x)[0]\n # ==> ones_like(x)\n ```\n\n Note: `clip_value_min` needs to be smaller than or equal to `clip_value_max` for\n correct results.\n\n Args:\n t: A `Tensor`.\n clip_value_min: A scalar `Tensor`, or a `Tensor` with the same shape\n as `t`. The minimum value to clip by.\n clip_value_max: A scalar `Tensor`, or a `Tensor` with the same shape\n as `t`. The maximum value to clip by.\n name: A name for the operation (optional).\n Default value: `'clip_by_value_preserve_gradient'`.\n\n Returns:\n clipped_t: A clipped `Tensor`."} {"_id": "q_855", "text": "Build an iterator over training batches."} {"_id": "q_856", "text": "Converts a sequence of productions into a string of terminal symbols.\n\n Args:\n productions: Tensor of shape [1, num_productions, num_production_rules].\n Slices along the `num_productions` dimension represent one-hot vectors.\n\n Returns:\n str that concatenates all terminal symbols from `productions`.\n\n Raises:\n ValueError: If the first production rule does not begin with\n `self.start_symbol`."} {"_id": "q_857", "text": "Runs the model forward to generate a sequence of productions.\n\n Args:\n inputs: Unused.\n\n Returns:\n productions: Tensor of shape [1, num_productions, num_production_rules].\n Slices along the `num_productions` dimension represent one-hot vectors."} {"_id": "q_858", "text": "Runs the model forward to return a stochastic
encoding.\n\n Args:\n inputs: Tensor of shape [1, num_productions, num_production_rules]. It is\n a sequence of productions of length `num_productions`. Each production\n is a one-hot vector of length `num_production_rules`: it determines\n which production rule the production corresponds to.\n\n Returns:\n latent_code_posterior: A random variable capturing a sample from the\n variational distribution, of shape [1, self.latent_size]."} {"_id": "q_859", "text": "Integral of the `hat` function, used for sampling.\n\n We choose a `hat` function, h(x) = x^(-power), which is a continuous\n (unnormalized) density touching each positive integer at the (unnormalized)\n pmf. This function implements `hat` integral: H(x) = int_x^inf h(t) dt;\n which is needed for sampling purposes.\n\n Arguments:\n x: A Tensor of points x at which to evaluate H(x).\n\n Returns:\n A Tensor containing evaluation H(x) at x."} {"_id": "q_860", "text": "Inverse function of _hat_integral."} {"_id": "q_861", "text": "Compute the matrix rank; the number of non-zero SVD singular values.\n\n Arguments:\n a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be\n pseudo-inverted.\n tol: Threshold below which the singular value is counted as \"zero\".\n Default value: `None` (i.e., `eps * max(rows, cols) * max(singular_val)`).\n validate_args: When `True`, additional assertions might be embedded in the\n graph.\n Default value: `False` (i.e., no graph assertions are added).\n name: Python `str` prefixed to ops created by this function.\n Default value: \"matrix_rank\".\n\n Returns:\n matrix_rank: (Batch of) `int32` scalars representing the number of non-zero\n singular values."} {"_id": "q_862", "text": "Compute the Moore-Penrose pseudo-inverse of a matrix.\n\n Calculate the [generalized inverse of a matrix](\n https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse) using its\n singular-value decomposition (SVD) and including all large singular values.\n\n The pseudo-inverse of a matrix 
`A`, is defined as: \"the matrix that 'solves'\n [the least-squares problem] `A @ x = b`,\" i.e., if `x_hat` is a solution, then\n `A_pinv` is the matrix such that `x_hat = A_pinv @ b`. It can be shown that if\n `U @ Sigma @ V.T = A` is the singular value decomposition of `A`, then\n `A_pinv = V @ inv(Sigma) @ U.T`. [(Strang, 1980)][1]\n\n This function is analogous to [`numpy.linalg.pinv`](\n https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.pinv.html).\n It differs only in default value of `rcond`. In `numpy.linalg.pinv`, the\n default `rcond` is `1e-15`. Here the default is\n `10. * max(num_rows, num_cols) * np.finfo(dtype).eps`.\n\n Args:\n a: (Batch of) `float`-like matrix-shaped `Tensor`(s) which are to be\n pseudo-inverted.\n rcond: `Tensor` of small singular value cutoffs. Singular values smaller\n (in modulus) than `rcond` * largest_singular_value (again, in modulus) are\n set to zero. Must broadcast against `tf.shape(a)[:-2]`.\n Default value: `10. * max(num_rows, num_cols) * np.finfo(a.dtype).eps`.\n validate_args: When `True`, additional assertions might be embedded in the\n graph.\n Default value: `False` (i.e., no graph assertions are added).\n name: Python `str` prefixed to ops created by this function.\n Default value: \"pinv\".\n\n Returns:\n a_pinv: The pseudo-inverse of input `a`.
Has same shape as `a` except\n rightmost two dimensions are transposed.\n\n Raises:\n TypeError: if input `a` does not have `float`-like `dtype`.\n ValueError: if input `a` has fewer than 2 dimensions.\n\n #### Examples\n\n ```python\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n a = tf.constant([[1., 0.4, 0.5],\n [0.4, 0.2, 0.25],\n [0.5, 0.25, 0.35]])\n tf.matmul(tfp.math.pinv(a), a)\n # ==> array([[1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.]], dtype=float32)\n\n a = tf.constant([[1., 0.4, 0.5, 1.],\n [0.4, 0.2, 0.25, 2.],\n [0.5, 0.25, 0.35, 3.]])\n tf.matmul(tfp.math.pinv(a), a)\n # ==> array([[ 0.76, 0.37, 0.21, -0.02],\n [ 0.37, 0.43, -0.33, 0.02],\n [ 0.21, -0.33, 0.81, 0.01],\n [-0.02, 0.02, 0.01, 1. ]], dtype=float32)\n ```\n\n #### References\n\n [1]: G. Strang. \"Linear Algebra and Its Applications, 2nd Ed.\" Academic Press,\n Inc., 1980, pp. 139-142."} {"_id": "q_863", "text": "Solves systems of linear equations `A X = RHS`, given LU factorizations.\n\n Note: this function does not verify the implied matrix is actually invertible\n nor is this condition checked even when `validate_args=True`.\n\n Args:\n lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if\n `matmul(P, matmul(L, U)) = X` then `lower_upper = L + U - eye`.\n perm: `p` as returned by `tf.linalg.lu`, i.e., if\n `matmul(P, matmul(L, U)) = X` then `perm = argmax(P)`.\n rhs: Matrix-shaped float `Tensor` representing targets for which to solve;\n `A X = RHS`. To handle vector cases, use:\n `lu_solve(..., rhs[..., tf.newaxis])[..., 0]`.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness.
Note: this function does not verify the implied matrix is\n actually invertible, even when `validate_args=True`.\n Default value: `False` (i.e., don't validate arguments).\n name: Python `str` name given to ops managed by this object.\n Default value: `None` (i.e., \"lu_solve\").\n\n Returns:\n x: The `X` in `A @ X = RHS`.\n\n #### Examples\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n x = [[[1., 2],\n [3, 4]],\n [[7, 8],\n [3, 4]]]\n inv_x = tfp.math.lu_solve(*tf.linalg.lu(x), rhs=tf.eye(2))\n tf.assert_near(tf.matrix_inverse(x), inv_x)\n # ==> True\n ```"} {"_id": "q_864", "text": "Computes a matrix inverse given the matrix's LU decomposition.\n\n This op is conceptually identical to,\n\n ```python\n inv_X = tf.lu_matrix_inverse(*tf.linalg.lu(X))\n tf.assert_near(tf.matrix_inverse(X), inv_X)\n # ==> True\n ```\n\n Note: this function does not verify the implied matrix is actually invertible\n nor is this condition checked even when `validate_args=True`.\n\n Args:\n lower_upper: `lu` as returned by `tf.linalg.lu`, i.e., if\n `matmul(P, matmul(L, U)) = X` then `lower_upper = L + U - eye`.\n perm: `p` as returned by `tf.linalg.lu`, i.e., if\n `matmul(P, matmul(L, U)) = X` then `perm = argmax(P)`.\n validate_args: Python `bool` indicating whether arguments should be checked\n for correctness.
Note: this function does not verify the implied matrix is\n actually invertible, even when `validate_args=True`.\n Default value: `False` (i.e., don't validate arguments).\n name: Python `str` name given to ops managed by this object.\n Default value: `None` (i.e., \"lu_matrix_inverse\").\n\n Returns:\n inv_x: The matrix_inv, i.e.,\n `tf.matrix_inverse(tfp.math.lu_reconstruct(lu, perm))`.\n\n #### Examples\n\n ```python\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n\n x = [[[3., 4], [1, 2]],\n [[7., 8], [3, 4]]]\n inv_x = tfp.math.lu_matrix_inverse(*tf.linalg.lu(x))\n tf.assert_near(tf.matrix_inverse(x), inv_x)\n # ==> True\n ```"} {"_id": "q_865", "text": "Returns list of assertions related to `lu_solve` assumptions."} {"_id": "q_866", "text": "Returns a block diagonal rank 2 SparseTensor from a batch of SparseTensors.\n\n Args:\n sp_a: A rank 3 `SparseTensor` representing a batch of matrices.\n\n Returns:\n sp_block_diag_a: matrix-shaped, `float` `SparseTensor` with the same dtype\n as `sparse_or_matrix`, of shape [B * M, B * N] where `sp_a` has shape\n [B, M, N]. Each [M, N] batch of `sp_a` is lined up along the diagonal."} {"_id": "q_867", "text": "Computes the neg-log-likelihood gradient and Fisher information for a GLM.\n\n Note that Fisher information is related to the Hessian of the log-likelihood\n by the equation\n\n ```none\n FisherInfo = E[Hessian with respect to model_coefficients of -LogLikelihood(\n Y | model_matrix, model_coefficients)]\n ```\n\n where `LogLikelihood` is the log-likelihood of a generalized linear model\n parameterized by `model_matrix` and `model_coefficients`, and the expectation\n is taken over Y, distributed according to the same GLM with the same parameter\n values.\n\n Args:\n model_matrix: (Batch of) matrix-shaped, `float` `Tensor` or `SparseTensor`\n where each row represents a sample's features. 
Has shape `[N, n]` where\n `N` is the number of data samples and `n` is the number of features per\n sample.\n linear_response: (Batch of) vector-shaped `Tensor` with the same dtype as\n `model_matrix`, equal to `model_matrix @ model_coefficients` where\n `model_coefficients` are the coefficients of the linear component of the\n GLM.\n response: (Batch of) vector-shaped `Tensor` with the same dtype as\n `model_matrix` where each element represents a sample's observed response\n (to the corresponding row of features).\n model: `tfp.glm.ExponentialFamily`-like instance, which specifies the link\n function and distribution of the GLM, and thus characterizes the negative\n log-likelihood. Must have sufficient statistic equal to the response, that\n is, `T(y) = y`.\n\n Returns:\n grad_neg_log_likelihood: (Batch of) vector-shaped `Tensor` with the same\n shape and dtype as a single row of `model_matrix`, representing the\n gradient of the negative log likelihood of `response` given linear\n response `linear_response`.\n fim_middle: (Batch of) vector-shaped `Tensor` with the same shape and dtype\n as a single column of `model_matrix`, satisfying the equation\n `Fisher information =\n Transpose(model_matrix)\n @ diag(fim_middle)\n @ model_matrix`."} {"_id": "q_868", "text": "Fits a GLM using coordinate-wise FIM-informed proximal gradient descent.\n\n This function uses a L1- and L2-regularized, second-order quasi-Newton method\n to find maximum-likelihood parameters for the given model and observed data.\n The second-order approximations use negative Fisher information in place of\n the Hessian, that is,\n\n ```none\n FisherInfo = E_Y[Hessian with respect to model_coefficients of -LogLikelihood(\n Y | model_matrix, current value of model_coefficients)]\n ```\n\n For large, sparse data sets, `model_matrix` should be supplied as a\n `SparseTensor`.\n\n Args:\n model_matrix: (Batch of) matrix-shaped, `float` `Tensor` or `SparseTensor`\n where each row represents a
sample's features. Has shape `[N, n]` where\n `N` is the number of data samples and `n` is the number of features per\n sample.\n response: (Batch of) vector-shaped `Tensor` with the same dtype as\n `model_matrix` where each element represents a sample's observed response\n (to the corresponding row of features).\n model: `tfp.glm.ExponentialFamily`-like instance, which specifies the link\n function and distribution of the GLM, and thus characterizes the negative\n log-likelihood which will be minimized. Must have sufficient statistic\n equal to the response, that is, `T(y) = y`.\n model_coefficients_start: (Batch of) vector-shaped, `float` `Tensor` with\n the same dtype as `model_matrix`, representing the initial values of the\n coefficients for the GLM regression. Has shape `[n]` where `model_matrix`\n has shape `[N, n]`.\n tolerance: scalar, `float` `Tensor` representing the tolerance for each\n optimization step; see the `tolerance` argument of `fit_sparse_one_step`.\n l1_regularizer: scalar, `float` `Tensor` representing the weight of the L1\n regularization term.\n l2_regularizer: scalar, `float` `Tensor` representing the weight of the L2\n regularization term.\n Default value: `None` (i.e., no L2 regularization).\n maximum_iterations: Python integer specifying maximum number of iterations\n of the outer loop of the optimizer (i.e., maximum number of calls to\n `fit_sparse_one_step`).
After this many iterations of the outer loop, the\n algorithm will terminate even if the return value `model_coefficients` has\n not converged.\n Default value: `1`.\n maximum_full_sweeps_per_iteration: Python integer specifying the maximum\n number of coordinate descent sweeps allowed in each iteration.\n Default value: `1`.\n learning_rate: scalar, `float` `Tensor` representing a multiplicative factor\n used to dampen the proximal gradient descent steps.\n Default value: `None` (i.e., factor is conceptually `1`).\n name: Python string representing the name of the TensorFlow operation.\n The default name is `\"fit_sparse\"`.\n\n Returns:\n model_coefficients: (Batch of) `Tensor` of the same shape and dtype as\n `model_coefficients_start`, representing the computed model coefficients\n which minimize the regularized negative log-likelihood.\n is_converged: scalar, `bool` `Tensor` indicating whether the minimization\n procedure converged across all batches within the specified number of\n iterations. 
Here convergence means that an iteration of the inner loop\n (`fit_sparse_one_step`) returns `True` for its `is_converged` output\n value.\n iter: scalar, `int` `Tensor` indicating the actual number of iterations of\n the outer loop of the optimizer completed (i.e., number of calls to\n `fit_sparse_one_step` before achieving convergence).\n\n #### Example\n\n ```python\n from __future__ import print_function\n import numpy as np\n import tensorflow as tf\n import tensorflow_probability as tfp\n tfd = tfp.distributions\n\n def make_dataset(n, d, link, scale=1., dtype=np.float32):\n model_coefficients = tfd.Uniform(\n low=np.array(-1, dtype), high=np.array(1, dtype)).sample(\n d, seed=42)\n radius = np.sqrt(2.)\n model_coefficients *= radius / tf.linalg.norm(model_coefficients)\n mask = tf.random_shuffle(tf.range(d)) < tf.to_int32(0.5 * tf.to_float(d))\n model_coefficients = tf.where(mask, model_coefficients,\n tf.zeros_like(model_coefficients))\n model_matrix = tfd.Normal(\n loc=np.array(0, dtype), scale=np.array(1, dtype)).sample(\n [n, d], seed=43)\n scale = tf.convert_to_tensor(scale, dtype)\n linear_response = tf.matmul(model_matrix,\n model_coefficients[..., tf.newaxis])[..., 0]\n if link == 'linear':\n response = tfd.Normal(loc=linear_response, scale=scale).sample(seed=44)\n elif link == 'probit':\n response = tf.cast(\n tfd.Normal(loc=linear_response, scale=scale).sample(seed=44) > 0,\n dtype)\n elif link == 'logit':\n response = tfd.Bernoulli(logits=linear_response).sample(seed=44)\n else:\n raise ValueError('unrecognized true link: {}'.format(link))\n return model_matrix, response, model_coefficients, mask\n\n with tf.Session() as sess:\n x_, y_, model_coefficients_true_, _ = sess.run(make_dataset(\n n=int(1e5), d=100, link='probit'))\n\n model = tfp.glm.Bernoulli()\n model_coefficients_start = tf.zeros(x_.shape[-1], np.float32)\n\n model_coefficients, is_converged, num_iter = tfp.glm.fit_sparse(\n model_matrix=tf.convert_to_tensor(x_),\n 
response=tf.convert_to_tensor(y_),\n model=model,\n model_coefficients_start=model_coefficients_start,\n l1_regularizer=800.,\n l2_regularizer=None,\n maximum_iterations=10,\n maximum_full_sweeps_per_iteration=10,\n tolerance=1e-6,\n learning_rate=None)\n\n model_coefficients_, is_converged_, num_iter_ = sess.run([\n model_coefficients, is_converged, num_iter])\n\n print(\"is_converged:\", is_converged_)\n print(\" num_iter:\", num_iter_)\n print(\"\\nLearned / True\")\n print(np.concatenate(\n [[model_coefficients_], [model_coefficients_true_]], axis=0).T)\n\n # ==>\n # is_converged: True\n # num_iter: 1\n #\n # Learned / True\n # [[ 0. 0. ]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0.11195257 0.12484948]\n # [ 0. 0. ]\n # [ 0.05191106 0.06394956]\n # [-0.15090358 -0.15325639]\n # [-0.18187316 -0.18825999]\n # [-0.06140942 -0.07994166]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0.14474444 0.15810856]\n # [ 0. 0. ]\n # [-0.25249591 -0.24260855]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.03888761 -0.06755984]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.0192222 -0.04169233]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0.01434913 0.03568212]\n # [-0.11336883 -0.12873614]\n # [ 0. 0. ]\n # [-0.24496339 -0.24048163]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0.04088281 0.06565224]\n # [-0.12784363 -0.13359821]\n # [ 0.05618424 0.07396613]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [ 0. -0.01719233]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.00076072 -0.03607186]\n # [ 0.21801499 0.21146794]\n # [-0.02161094 -0.04031265]\n # [ 0.0918689 0.10487888]\n # [ 0.0106154 0.03233612]\n # [-0.07817317 -0.09725142]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.23725343 -0.24194022]\n # [ 0. 0. ]\n # [-0.08725718 -0.1048776 ]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.02114314 -0.04145789]\n # [ 0. 0. ]\n # [ 0. 0. ]\n # [-0.02710908 -0.04590397]\n # [ 0.15293184 0.15415154]\n # [ 0.2114463 0.2088728 ]\n # [-0.10969634 -0.12368613]\n # [ 0. 
-0.01505797]\n # [-0.01140458 -0.03234904]\n # [ 0.16051085 0.1680062 ]\n # [ 0.09816848 0.11094204]\n ```\n\n #### References\n\n [1]: Jerome Friedman, Trevor Hastie and Rob Tibshirani. Regularization Paths\n for Generalized Linear Models via Coordinate Descent. _Journal of\n Statistical Software_, 33(1), 2010.\n https://www.jstatsoft.org/article/view/v033i01/v33i01.pdf\n\n [2]: Guo-Xun Yuan, Chia-Hua Ho and Chih-Jen Lin. An Improved GLMNET for\n L1-regularized Logistic Regression. _Journal of Machine Learning\n Research_, 13, 2012.\n http://www.jmlr.org/papers/volume13/yuan12a/yuan12a.pdf"} {"_id": "q_869", "text": "Generate the mask for building an autoregressive dense layer."} {"_id": "q_870", "text": "An autoregressively masked dense layer. Analogous to `tf.layers.dense`.\n\n See [Germain et al. (2015)][1] for detailed explanation.\n\n Arguments:\n inputs: Tensor input.\n units: Python `int` scalar representing the dimensionality of the output\n space.\n num_blocks: Python `int` scalar representing the number of blocks for the\n MADE masks.\n exclusive: Python `bool` scalar representing whether to zero the diagonal of\n the mask, used for the first layer of a MADE.\n kernel_initializer: Initializer function for the weight matrix.\n If `None` (default), weights are initialized using the\n `tf.glorot_uniform_initializer`.\n reuse: Python `bool` scalar representing whether to reuse the weights of a\n previous layer by the same name.\n name: Python `str` used to describe ops managed by this function.\n *args: `tf.layers.dense` arguments.\n **kwargs: `tf.layers.dense` keyword arguments.\n\n Returns:\n Output tensor.\n\n Raises:\n NotImplementedError: if rightmost dimension of `inputs` is unknown prior to\n graph execution.\n\n #### References\n\n [1]: Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE:\n Masked Autoencoder for Distribution Estimation. In _International\n Conference on Machine Learning_, 2015.
https://arxiv.org/abs/1502.03509"} {"_id": "q_871", "text": "Returns degree vectors for the input."} {"_id": "q_872", "text": "Returns a list of binary mask matrices enforcing autoregressivity."} {"_id": "q_873", "text": "Returns a masked version of the given initializer."} {"_id": "q_874", "text": "See tfkl.Layer.call."} {"_id": "q_875", "text": "Sample a multinomial.\n\n The batch shape is given by broadcasting num_trials with\n remove_last_dimension(logits).\n\n Args:\n num_samples: Python int or singleton integer Tensor: number of multinomial\n samples to draw.\n num_classes: Python int or singleton integer Tensor: number of classes.\n logits: Floating Tensor with last dimension k, of (unnormalized) logit\n probabilities per class.\n num_trials: Tensor of number of categorical trials each multinomial consists\n of. num_trials[..., tf.newaxis] must broadcast with logits.\n dtype: dtype at which to emit samples.\n seed: Random seed.\n\n Returns:\n samples: Tensor of given dtype and shape [n] + batch_shape + [k]."} {"_id": "q_876", "text": "Build a zero-dimensional MVNDiag object."} {"_id": "q_877", "text": "Computes the number of edges on longest path from node to root."} {"_id": "q_878", "text": "Creates lists of callables suitable for JDSeq."} {"_id": "q_879", "text": "Variational loss for the VGP.\n\n Given `observations` and `observation_index_points`, compute the\n negative variational lower bound as specified in [Hensman, 2013][1].\n\n Args:\n observations: `float` `Tensor` representing collection, or batch of\n collections, of observations corresponding to\n `observation_index_points`. Shape has the form `[b1, ..., bB, e]`, which\n must be broadcastable with the batch and example shapes of\n `observation_index_points`.
The batch shape `[b1, ..., bB]` must be\n broadcastable with the shapes of all other batched parameters\n (`kernel.batch_shape`, `observation_index_points`, etc.).\n observation_index_points: `float` `Tensor` representing finite (batch of)\n vector(s) of points where observations are defined. Shape has the\n form `[b1, ..., bB, e1, f1, ..., fF]` where `F` is the number of feature\n dimensions and must equal `kernel.feature_ndims` and `e1` is the number\n (size) of index points in each batch (we denote it `e1` to distinguish\n it from the number of inducing index points, denoted `e2` below). If\n set to `None`, uses `index_points` as the origin for observations.\n Default value: None.\n kl_weight: Amount by which to scale the KL divergence loss between prior\n and posterior.\n Default value: 1.\n name: Python `str` name prefixed to Ops created by this class.\n Default value: \"GaussianProcess\".\n Returns:\n loss: Scalar tensor representing the negative variational lower bound.\n Can be directly used in a `tf.Optimizer`.\n Raises:\n ValueError: if `mean_fn` is not `None` and is not callable.\n\n #### References\n\n [1]: Hensman, J., Lawrence, N. \"Gaussian Processes for Big Data\", 2013\n https://arxiv.org/abs/1309.6835"} {"_id": "q_880", "text": "Model selection for optimal variational hyperparameters.\n\n Given the full training set (parameterized by `observations` and\n `observation_index_points`), compute the optimal variational\n location and scale for the VGP. This is based on the method suggested\n in [Titsias, 2009][1].\n\n Args:\n kernel: `PositiveSemidefiniteKernel`-like instance representing the\n GP's covariance function.\n inducing_index_points: `float` `Tensor` of locations of inducing points in\n the index set. Shape has the form `[b1, ..., bB, e2, f1, ..., fF]`, just\n like `observation_index_points`.
The batch shape components needn't be\n identical to those of `observation_index_points`, but must be broadcast\n compatible with them.\n observation_index_points: `float` `Tensor` representing finite (batch of)\n vector(s) of points where observations are defined. Shape has the\n form `[b1, ..., bB, e1, f1, ..., fF]` where `F` is the number of feature\n dimensions and must equal `kernel.feature_ndims` and `e1` is the number\n (size) of index points in each batch (we denote it `e1` to distinguish\n it from the number of inducing index points, denoted `e2` below).\n observations: `float` `Tensor` representing collection, or batch of\n collections, of observations corresponding to\n `observation_index_points`. Shape has the form `[b1, ..., bB, e]`, which\n must be broadcastable with the batch and example shapes of\n `observation_index_points`. The batch shape `[b1, ..., bB]` must be\n broadcastable with the shapes of all other batched parameters\n (`kernel.batch_shape`, `observation_index_points`, etc.).\n observation_noise_variance: `float` `Tensor` representing the variance\n of the noise in the Normal likelihood distribution of the model. May be\n batched, in which case the batch shape must be broadcastable with the\n shapes of all other batched parameters (`kernel.batch_shape`,\n `index_points`, etc.).\n Default value: `0.`\n mean_fn: Python `callable` that acts on index points to produce a (batch\n of) vector(s) of mean values at those index points. Takes a `Tensor` of\n shape `[b1, ..., bB, f1, ..., fF]` and returns a `Tensor` whose shape is\n (broadcastable with) `[b1, ..., bB]`.
Default value: `None` implies\n constant zero function.\n jitter: `float` scalar `Tensor` added to the diagonal of the covariance\n matrix to ensure positive definiteness of the covariance matrix.\n Default value: `1e-6`.\n name: Python `str` name prefixed to Ops created by this class.\n Default value: \"optimal_variational_posterior\".\n Returns:\n loc, scale: Tuple representing the variational location and scale.\n Raises:\n ValueError: if `mean_fn` is not `None` and is not callable.\n\n #### References\n\n [1]: Titsias, M. \"Variational Model Selection for Sparse Gaussian Process\n Regression\", 2009.\n http://proceedings.mlr.press/v5/titsias09a/titsias09a.pdf"} {"_id": "q_881", "text": "Build utility method to compute whether the season is changing."} {"_id": "q_882", "text": "Build a function computing transitions for a seasonal effect model."} {"_id": "q_883", "text": "Build the transition noise model for a SeasonalStateSpaceModel."} {"_id": "q_884", "text": "Returns `True` if given observation data is empty.\n\n Emptiness means either\n 1. Both `observation_index_points` and `observations` are `None`, or\n 2. the \"number of observations\" shape is 0. The shape of\n `observation_index_points` is `[..., N, f1, ..., fF]`, where `N` is the\n number of observations and the `f`s are feature dims. Thus, we look at the\n shape element just to the left of the leftmost feature dim. 
If that shape is\n zero, we consider the data empty.\n\n We don't check the shape of observations; validations are checked elsewhere in\n the calling code, to ensure these shapes are consistent.\n\n Args:\n feature_ndims: the number of feature dims, as reported by the GP kernel.\n observation_index_points: the observation data locations in the index set.\n observations: the observation data.\n\n Returns:\n is_empty: True if the data were deemed to be empty."} {"_id": "q_885", "text": "Ensure that observation data and locations have consistent shapes.\n\n This basically means that the batch shapes are broadcastable. We can only\n ensure this when those shapes are fully statically defined.\n\n\n Args:\n kernel: The GP kernel.\n observation_index_points: the observation data locations in the index set.\n observations: the observation data.\n\n Raises:\n ValueError: if the observations' batch shapes are not broadcastable."} {"_id": "q_886", "text": "Add a learning rate scheduler to the contained `schedules`\n\n :param scheduler: learning rate scheduler to be added\n :param max_iteration: the number of iterations this scheduler will run"} {"_id": "q_887", "text": "Configure checkpoint settings.\n\n\n :param checkpoint_trigger: the interval to write snapshots\n :param checkpoint_path: the path to write snapshots into\n :param isOverWrite: whether to overwrite existing snapshots in path. Default is True"} {"_id": "q_888", "text": "Configure constant clipping settings.\n\n\n :param min_value: the minimum value to clip by\n :param max_value: the maximum value to clip by"} {"_id": "q_889", "text": "Do an optimization."} {"_id": "q_890", "text": "Set validation summary. A ValidationSummary object contains information\n necessary for the optimizer to know how often the logs are recorded,\n where to store the logs and how to retrieve them, etc.
For details,\n refer to the docs of ValidationSummary.\n\n\n :param summary: a ValidationSummary object"} {"_id": "q_891", "text": "Parse or download news20 if source_dir is empty.\n\n :param source_dir: The directory storing news data.\n :return: A list of (tokens, label)"} {"_id": "q_892", "text": "Parse or download the pre-trained glove word2vec if source_dir is empty.\n\n :param source_dir: The directory storing the pre-trained word2vec\n :param dim: The dimension of a vector\n :return: A dict mapping from word to vector"} {"_id": "q_893", "text": "Train a model for a fixed number of epochs on a dataset.\n\n # Arguments\n x: Input data. A Numpy array or RDD of Sample or Image DataSet.\n y: Labels. A Numpy array. Default is None if x is already RDD of Sample or Image DataSet.\n batch_size: Number of samples per gradient update.\n nb_epoch: Number of epochs to train.\n validation_data: Tuple (x_val, y_val) where x_val and y_val are both Numpy arrays.\n Or RDD of Sample. Default is None if no validation is involved.\n distributed: Boolean. Whether to train the model in distributed mode or local mode.\n Default is True. In local mode, x and y must both be Numpy arrays."} {"_id": "q_894", "text": "Evaluate a model on a given dataset in distributed mode.\n\n # Arguments\n x: Input data. A Numpy array or RDD of Sample.\n y: Labels. A Numpy array.
Default is None if x is already RDD of Sample.\n batch_size: Number of samples per gradient update."} {"_id": "q_895", "text": "Get mnist dataset and parallelize into RDDs.\n Data would be downloaded automatically if it is not present at the specified location.\n\n :param sc: SparkContext.\n :param data_type: \"train\" for training data and \"test\" for testing data.\n :param location: Location to store mnist dataset.\n :return: RDD of (features: ndarray, label: ndarray)."} {"_id": "q_896", "text": "Preprocess mnist dataset.\n Normalize and transform into Sample of RDDs."} {"_id": "q_897", "text": "When to end the optimization based on input option."} {"_id": "q_898", "text": "Set validation and checkpoint for distributed optimizer."} {"_id": "q_899", "text": "Return the broadcasted value"} {"_id": "q_900", "text": "Call Java Function"} {"_id": "q_901", "text": "Return a JavaRDD of Object by unpickling\n\n\n It will convert each Python object into Java object by Pyrolite, whether\n the RDD is serialized in batch or not."} {"_id": "q_902", "text": "Convert Python object into Java"} {"_id": "q_903", "text": "Convert to a bigdl activation layer\n given the name of the activation as a string"} {"_id": "q_904", "text": "Convert an ndarray to a DenseTensor which would be used in Java side.\n\n >>> import numpy as np\n >>> from bigdl.util.common import JTensor\n >>> from bigdl.util.common import callBigDlFunc\n >>> np.random.seed(123)\n >>> data = np.random.uniform(0, 1, (2, 3)).astype(\"float32\")\n >>> result = JTensor.from_ndarray(data)\n >>> expected_storage = np.array([[0.69646919, 0.28613934, 0.22685145], [0.55131477, 0.71946895, 0.42310646]])\n >>> expected_shape = np.array([2, 3])\n >>> np.testing.assert_allclose(result.storage, expected_storage, rtol=1e-6, atol=1e-6)\n >>> np.testing.assert_allclose(result.shape, expected_shape)\n >>> data_back = result.to_ndarray()\n >>> (data == data_back).all()\n True\n >>> tensor1 = callBigDlFunc(\"float\", \"testTensor\",
JTensor.from_ndarray(data)) # noqa\n >>> array_from_tensor = tensor1.to_ndarray()\n >>> (array_from_tensor == data).all()\n True"} {"_id": "q_905", "text": "get label as ndarray from ImageFeature"} {"_id": "q_906", "text": "Read parquet file as DistributedImageFrame"} {"_id": "q_907", "text": "write ImageFrame as parquet file"} {"_id": "q_908", "text": "get image from ImageFrame"} {"_id": "q_909", "text": "get image list from ImageFrame"} {"_id": "q_910", "text": "get label rdd from ImageFrame"} {"_id": "q_911", "text": "get prediction rdd from ImageFrame"} {"_id": "q_912", "text": "Generates output predictions for the input samples,\n processing the samples in a batched way.\n\n # Arguments\n x: the input data, as a Numpy array or list of Numpy arrays for local mode,\n or as RDD[Sample] for distributed mode.\n is_distributed: used to control whether to run in local or cluster mode. The default value is False.\n # Returns\n A Numpy array or RDD[Sample] of predictions."} {"_id": "q_913", "text": "Apply the transformer to the images in \"inputCol\" and store the transformed result\n into \"outputCols\""} {"_id": "q_914", "text": "Save a Keras model definition to JSON with given path"} {"_id": "q_915", "text": "Define a convnet model in Keras 1.2.2"} {"_id": "q_916", "text": "Set weights for this layer\n\n :param weights: a list of numpy arrays which represent weight and bias\n :return:\n\n >>> linear = Linear(3,2)\n creating: createLinear\n >>> linear.set_weights([np.array([[1,2,3],[4,5,6]]), np.array([7,8])])\n >>> weights = linear.get_weights()\n >>> weights[0].shape == (2,3)\n True\n >>> np.testing.assert_allclose(weights[0][0], np.array([1., 2., 3.]))\n >>> np.testing.assert_allclose(weights[1], np.array([7., 8.]))\n >>> relu = ReLU()\n creating: createReLU\n >>> from py4j.protocol import Py4JJavaError\n >>> try:\n ... relu.set_weights([np.array([[1,2,3],[4,5,6]]), np.array([7,8])])\n ... except Py4JJavaError as err:\n ... 
print(err.java_exception)\n ...\n java.lang.IllegalArgumentException: requirement failed: this layer does not have weight/bias\n >>> relu.get_weights()\n The layer does not have weight/bias\n >>> add = Add(2)\n creating: createAdd\n >>> try:\n ... add.set_weights([np.array([7,8]), np.array([1,2])])\n ... except Py4JJavaError as err:\n ... print(err.java_exception)\n ...\n java.lang.IllegalArgumentException: requirement failed: the number of input weight/bias is not consistant with number of weight/bias of this layer, number of input 1, number of output 2\n >>> cAdd = CAdd([4, 1])\n creating: createCAdd\n >>> cAdd.set_weights(np.ones([4, 1]))\n >>> (cAdd.get_weights()[0] == np.ones([4, 1])).all()\n True"} {"_id": "q_917", "text": "Load a pre-trained Torch model.\n\n :param path: The path containing the pre-trained model.\n :return: A pre-trained model."} {"_id": "q_918", "text": "Load a pre-trained Keras model.\n\n :param json_path: The json path containing the keras model definition.\n :param hdf5_path: The HDF5 path containing the pre-trained keras model weights with or without the model architecture.\n :return: A bigdl model."} {"_id": "q_919", "text": "Create a python Criterion by a java criterion object\n\n :param jcriterion: A java criterion object which created by Py4j\n :return: a criterion."} {"_id": "q_920", "text": "The file path can be stored in a local file system, HDFS, S3,\n or any Hadoop-supported file system."} {"_id": "q_921", "text": "Load IMDB dataset\n Transform input data into an RDD of Sample"} {"_id": "q_922", "text": "Define a recurrent convolutional model in Keras 1.2.2"} {"_id": "q_923", "text": "Return a list of shape tuples if there are multiple inputs.\n Return one shape tuple otherwise."} {"_id": "q_924", "text": "Return a list of shape tuples if there are multiple outputs.\n Return one shape tuple otherwise."} {"_id": "q_925", "text": "Get mnist dataset with features and label as ndarray.\n Data would be downloaded automatically if it 
isn't present at the specified location.\n\n :param data_type: \"train\" for training data and \"test\" for testing data.\n :param location: Location to store mnist dataset.\n :return: (features: ndarray, label: ndarray)"} {"_id": "q_926", "text": "Parse or download movielens 1m data if data_dir is empty.\n\n :param data_dir: The directory storing the movielens data\n :return: a 2D numpy array with user index and item index in each row"} {"_id": "q_927", "text": "Get and return the jar path for bigdl if it exists."} {"_id": "q_928", "text": "Export variable tensors from the checkpoint files.\n\n :param checkpoint_path: tensorflow checkpoint path\n :return: dictionary of tensors. The key is the variable name and the value is the numpy array"} {"_id": "q_929", "text": "Save a variable dictionary to a Java object file, so it can be read by BigDL\n\n :param tensors: tensor dictionary\n :param target_path: where the Java object file is stored\n :param bigdl_type: model variable numeric type\n :return: nothing"} {"_id": "q_930", "text": "Expand and tile tensor along given axis\n\n Args:\n units: tf tensor with dimensions [batch_size, time_steps, n_input_features]\n axis: axis along which to expand and tile. Must be 1 or 2"} {"_id": "q_931", "text": "Simple attention without any conditions.\n\n Computes weighted sum of memory elements."} {"_id": "q_932", "text": "Computes weighted sum of inputs conditioned on state"} {"_id": "q_933", "text": "Computes BLEU score of translated segments against one or more references.\n\n Args:\n reference_corpus: list of lists of references for each translation. Each\n reference should be tokenized into a list of tokens.\n translation_corpus: list of translations to score. Each translation\n should be tokenized into a list of tokens.\n max_order: Maximum n-gram order to use when computing BLEU score.\n smooth: Whether or not to apply Lin et al. 
2004 smoothing.\n\n Returns:\n 3-Tuple with the BLEU score, n-gram precisions, geometric mean of n-gram\n precisions and brevity penalty."} {"_id": "q_934", "text": "Dump the trained weights from a model to a HDF5 file."} {"_id": "q_935", "text": "Convert labels to one-hot vectors for multi-class multi-label classification\n\n Args:\n labels: list of samples where each sample is a class or a list of classes to which the sample belongs\n classes: array of classes' names\n\n Returns:\n 2d array with one-hot representation of given samples"} {"_id": "q_936", "text": "Checks existence of the model file, loads the model if the file exists"} {"_id": "q_937", "text": "Extract values of momentum variables from optimizer\n\n Returns:\n optimizer's `rho` or `beta_1`"} {"_id": "q_938", "text": "Update graph variables given `learning_rate` and `momentum`\n\n Args:\n learning_rate: learning rate value to be set in graph (set if not None)\n momentum: momentum value to be set in graph (set if not None)\n\n Returns:\n None"} {"_id": "q_939", "text": "Converts word to a tuple of symbols, optionally converts it to lowercase\n and adds capitalization label.\n\n Args:\n word: input word\n to_lower: whether to lowercase\n append_case: whether to add case mark\n ('' for first capital and '' for all caps)\n\n Returns:\n a preprocessed word"} {"_id": "q_940", "text": "Number of convolutional layers stacked on top of each other\n\n Args:\n units: a tensorflow tensor with dimensionality [None, n_tokens, n_features]\n n_hidden_list: list with number of hidden units at the output of each layer\n filter_width: width of the kernel in tokens\n use_batch_norm: whether to use batch normalization between layers\n use_dilation: use power of 2 dilation scheme [1, 2, 4, 8 .. 
] for layers 1, 2, 3, 4 ...\n training_ph: boolean placeholder determining whether it is the training phase now or not.\n It is used only for batch normalization to determine whether to use\n current batch average (std) or memory stored average (std)\n add_l2_losses: whether to add l2 losses on network kernels to\n tf.GraphKeys.REGULARIZATION_LOSSES or not\n\n Returns:\n units: tensor at the output of the last convolutional layer"} {"_id": "q_941", "text": "Highway convolutional network. Skip connection with gating\n mechanism.\n\n Args:\n units: a tensorflow tensor with dimensionality [None, n_tokens, n_features]\n n_hidden_list: list with number of hidden units at the output of each layer\n filter_width: width of the kernel in tokens\n use_batch_norm: whether to use batch normalization between layers\n use_dilation: use power of 2 dilation scheme [1, 2, 4, 8 .. ] for layers 1, 2, 3, 4 ...\n training_ph: boolean placeholder determining whether it is the training phase now or not.\n It is used only for batch normalization to determine whether to use\n current batch average (std) or memory stored average (std)\n Returns:\n units: tensor at the output of the last convolutional layer\n with dimensionality [None, n_tokens, n_hidden_list[-1]]"} {"_id": "q_942", "text": "Token embedding layer. 
Create a matrix for token embeddings.\n Can be initialized with a given matrix (for example, pre-trained\n with the word2vec algorithm)\n\n Args:\n token_indices: token indices tensor of type tf.int32\n token_embedding_matrix: matrix of embeddings with dimensionality\n [n_tokens, embeddings_dimension]\n n_tokens: total number of unique tokens\n token_embedding_dim: dimensionality of embeddings, typically 100..300\n name: embedding matrix name (variable name)\n trainable: whether to set the matrix trainable or not\n\n Returns:\n embedded_tokens: tf tensor of size [B, T, E], where B - batch size\n T - number of tokens, E - token_embedding_dim"} {"_id": "q_943", "text": "Fast CuDNN GRU implementation\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n\n n_hidden: dimensionality of hidden state\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n seq_lengths: tensor of sequence lengths with dimension [B]\n n_layers: number of layers\n input_initial_h: initial hidden state, tensor\n name: name of the variable scope to use\n reuse: whether to reuse already initialized variable\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x F]\n h_last - last hidden state, tf.Tensor with dimensionality [B x H]"} {"_id": "q_944", "text": "CuDNN Compatible GRU implementation.\n It should be used to load models saved with CudnnGRUCell to run on CPU.\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n\n n_hidden: dimensionality of hidden state\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n seq_lengths: tensor of sequence lengths with dimension [B]\n n_layers: number of layers\n input_initial_h: initial hidden state, tensor\n name: 
name of the variable scope to use\n reuse: whether to reuse already initialized variable\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x F]\n h_last - last hidden state, tf.Tensor with dimensionality [B x H]"} {"_id": "q_945", "text": "CuDNN Compatible LSTM implementation.\n It should be used to load models saved with CudnnLSTMCell to run on CPU.\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n n_hidden: dimensionality of hidden state\n n_layers: number of layers\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n seq_lengths: tensor of sequence lengths with dimension [B]\n initial_h: optional initial hidden state, masks trainable_initial_states\n if provided\n initial_c: optional initial cell state, masks trainable_initial_states\n if provided\n name: name of the variable scope to use\n reuse: whether to reuse already initialized variable\n\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x F]\n h_last - last hidden state, tf.Tensor with dimensionality [B x H]\n where H - number of hidden units\n c_last - last cell state, tf.Tensor with dimensionality [B x H]\n where H - number of hidden units"} {"_id": "q_946", "text": "Fast CuDNN Bi-GRU implementation\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n n_hidden: dimensionality of hidden state\n seq_lengths: number of tokens in each sample in the batch\n n_layers: number of layers\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n name: name of the variable scope to use\n reuse: whether to reuse already initialized variable\n\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with 
dimensionality [B x T x F]\n h_last - last hidden state, tf.Tensor with dimensionality [B x H * 2]\n where H - number of hidden units"} {"_id": "q_947", "text": "Fast CuDNN Bi-LSTM implementation\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n n_hidden: dimensionality of hidden state\n seq_lengths: number of tokens in each sample in the batch\n n_layers: number of layers\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n name: name of the variable scope to use\n reuse: whether to reuse already initialized variable\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x F]\n h_last - last hidden state, tf.Tensor with dimensionality [B x H * 2]\n where H - number of hidden units\n c_last - last cell state, tf.Tensor with dimensionality [B x H * 2]\n where H - number of hidden units"} {"_id": "q_948", "text": "Fast CuDNN Stacked Bi-GRU implementation\n\n Args:\n units: tf.Tensor with dimensions [B x T x F], where\n B - batch size\n T - number of tokens\n F - features\n n_hidden: dimensionality of hidden state\n seq_lengths: number of tokens in each sample in the batch\n n_stacks: number of stacked Bi-GRU\n keep_prob: dropout keep_prob between Bi-GRUs (intra-layer dropout)\n concat_stacked_outputs: return last Bi-GRU output or concat outputs from every Bi-GRU,\n trainable_initial_states: whether to create a special trainable variable\n to initialize the hidden states of the network or use just zeros\n name: name of the variable scope to use\n reuse: whether to reuse already initialized variable\n\n\n Returns:\n h - all hidden states along T dimension,\n tf.Tensor with dimensionality [B x T x ((n_hidden * 2) * n_stacks)]"} {"_id": "q_949", "text": "Dropout with the same drop mask for all fixed_mask_dims\n\n Args:\n units: a tensor, usually with shapes [B x T x F], 
where\n B - batch size\n T - tokens dimension\n F - feature dimension\n keep_prob: keep probability\n fixed_mask_dims: in these dimensions the mask will be the same\n\n Returns:\n dropped units tensor"} {"_id": "q_950", "text": "Builds the network using Keras."} {"_id": "q_951", "text": "Builds word-level network"} {"_id": "q_952", "text": "Makes predictions on a single batch\n\n Args:\n data: a batch of word sequences together with additional inputs\n return_indexes: whether to return tag indexes in vocabulary or tags themselves\n\n Returns:\n a batch of label sequences"} {"_id": "q_953", "text": "Transforms a sentence to Numpy array, which will be the network input.\n\n Args:\n sent: input sentence\n bucket_length: the width of the bucket\n\n Returns:\n A 3d array, answer[i][j][k] contains the index of k-th letter\n in j-th word of i-th input sentence."} {"_id": "q_954", "text": "Calculate BLEU score\n\n Parameters:\n y_true: list of reference tokens\n y_predicted: list of query tokens\n weights: n-gram weights\n smoothing_function: SmoothingFunction\n auto_reweigh: Option to re-normalize the weights uniformly\n penalty: whether to apply the brevity penalty\n\n Return:\n BLEU score"} {"_id": "q_955", "text": "Verify signature certificate URL against Amazon Alexa requirements.
\n\n Args:\n url: Signature certificate URL from SignatureCertChainUrl HTTP header.\n\n Returns:\n result: True if verification was successful, False if not."} {"_id": "q_956", "text": "Extracts pycrypto X509 objects from SSL certificates chain string.\n\n Args:\n certs_txt: SSL certificates chain string.\n\n Returns:\n result: List of pycrypto X509 objects."} {"_id": "q_957", "text": "Verifies if Amazon and additional certificates create a chain of trust to a root CA.\n\n Args:\n certs_chain: List of pycrypto X509 intermediate certificates from signature chain URL.\n amazon_cert: Pycrypto X509 Amazon certificate.\n\n Returns:\n result: True if verification was successful, False if not."} {"_id": "q_958", "text": "Verifies Alexa request signature.\n\n Args:\n amazon_cert: Pycrypto X509 Amazon certificate.\n signature: Base64 decoded Alexa request signature from Signature HTTP header.\n request_body: full HTTPS request body\n Returns:\n result: True if verification was successful, False if not."} {"_id": "q_959", "text": "Conducts series of Alexa SSL certificate verifications against Amazon Alexa requirements.\n\n Args:\n signature_chain_url: Signature certificate URL from SignatureCertChainUrl HTTP header.\n Returns:\n result: Amazon certificate if verification was successful, None if not."} {"_id": "q_960", "text": "Returns list of Telegram compatible states of the RichMessage\n instance nested controls.\n\n Returns:\n telegram_controls: Telegram representation of RichMessage instance nested\n controls."} {"_id": "q_961", "text": "DeepPavlov console configuration utility."} {"_id": "q_962", "text": "Constructs function encapsulated in the graph."} {"_id": "q_963", "text": "Calculate accuracy in terms of absolute coincidence\n\n Args:\n y_true: array of true values\n y_predicted: array of predicted values\n\n Returns:\n portion of absolutely coincidental 
samples"} {"_id": "q_964", "text": "Builds agent based on PatternMatchingSkill and HighestConfidenceSelector.\n\n This is an agent building tutorial. You can use this .py file to check how hello-bot agent works.\n\n Returns:\n agent: Agent capable of handling several simple greetings."} {"_id": "q_965", "text": "Takes an array of integers and transforms it\n to an array of one-hot encoded vectors"} {"_id": "q_966", "text": "Populate settings directory with default settings files\n\n Args:\n force: if ``True``, replace existing settings files with default ones\n\n Returns:\n ``True`` if any files were copied and ``False`` otherwise"} {"_id": "q_967", "text": "Load model parameters from self.load_path"} {"_id": "q_968", "text": "Get train operation for given loss\n\n Args:\n loss: loss, tf tensor or scalar\n learning_rate: scalar or placeholder.\n clip_norm: clip gradients norm by clip_norm.\n learnable_scopes: which scopes are trainable (None for all).\n optimizer: instance of tf.train.Optimizer, default Adam.\n **kwargs: parameters passed to tf.train.Optimizer object\n (scalars or placeholders).\n\n Returns:\n train_op"} {"_id": "q_969", "text": "Finds all dictionary words in d-window from word"} {"_id": "q_970", "text": "Initiates self-destruct timer."} {"_id": "q_971", "text": "Infers DeepPavlov agent with raw user input extracted from Alexa request.\n\n Args:\n utterance: Raw user input extracted from Alexa request.\n Returns:\n response: DeepPavlov agent response."} {"_id": "q_972", "text": "Populates generated response with additional data conforming Alexa response specification.\n\n Args:\n response: Raw DeepPavlov agent response.\n request: Alexa request.\n Returns:\n response: Response conforming Alexa response specification."} {"_id": "q_973", "text": "Handles LaunchRequest Alexa request.\n\n Args:\n request: Alexa request.\n Returns:\n response: \"response\" part of response dict conforming Alexa specification."} {"_id": "q_974", "text": 
"Handles all unsupported types of Alexa requests. Returns standard message.\n\n Args:\n request: Alexa request.\n Returns:\n response: \"response\" part of response dict conforming Alexa specification."} {"_id": "q_975", "text": "Calculates perplexity by loss\n\n Args:\n losses: list of numpy arrays of model losses\n\n Returns:\n perplexity : float"} {"_id": "q_976", "text": "Build and return the model described in corresponding configuration file."} {"_id": "q_977", "text": "Start interaction with the model described in corresponding configuration file."} {"_id": "q_978", "text": "Reads input file in CONLL-U format\n\n Args:\n infile: a path to a file\n word_column: column containing words (default=1)\n pos_column: column containing part-of-speech labels (default=3)\n tag_column: column containing fine-grained tags (default=5)\n max_sents: maximal number of sents to read\n read_only_words: whether to read only words\n\n Returns:\n a list of sentences. Each item contains a word sequence and a tag sequence, which is ``None``\n in case ``read_only_words = True``"} {"_id": "q_979", "text": "Decorator for metric registration."} {"_id": "q_980", "text": "Find the best value according to given losses\n\n Args:\n values: list of considered values\n losses: list of obtained loss values corresponding to `values`\n max_loss_div: maximal divergence of loss to be considered significant\n min_val_div: minimum divergence of loss to be considered significant\n\n Returns:\n best value divided by `min_val_div`"} {"_id": "q_981", "text": "Embed one text sample\n\n Args:\n tokens: tokenized text sample\n mean: whether to return mean embedding of tokens per sample\n\n Returns:\n list of embedded tokens or array of mean values"} {"_id": "q_982", "text": "parses requirements from requirements.txt"} {"_id": "q_983", "text": "Exports a TF-Hub module"} {"_id": "q_984", "text": "Make an agent\n\n Returns:\n agent: created Ecommerce agent"} {"_id": "q_985", "text": "Parse parameters and run 
ms bot framework"} {"_id": "q_986", "text": "Download a file from URL to one or several target locations\n\n Args:\n dest_file_path: path or list of paths to the destination file(s) (including file name)\n source_url: the source URL\n force_download: whether to download the file if it already exists"} {"_id": "q_987", "text": "Simple tar archive extractor\n\n Args:\n file_path: path to the tar file to be extracted\n extract_folder: folder to which the files will be extracted"} {"_id": "q_988", "text": "Download and extract .tar.gz or .gz file to one or several target locations.\n The archive is deleted if extraction was successful.\n\n Args:\n url: URL for file downloading\n download_path: path to the directory where downloaded file will be stored\n until the end of extraction\n extract_paths: path or list of paths where contents of archive will be extracted"} {"_id": "q_989", "text": "Updates dict recursively\n\n You need to use this function to update dictionary if depth of editing_dict is more than 1\n\n Args:\n editable_dict: dictionary, that will be edited\n editing_dict: dictionary, that contains edits\n Returns:\n None"} {"_id": "q_990", "text": "Given a URL, set or replace a query parameter and return the modified URL.\n\n Args:\n url: a given URL\n param_name: the parameter name to add\n param_value: the parameter value\n Returns:\n URL with the added parameter"} {"_id": "q_991", "text": "Returns Amazon Alexa compatible state of the PlainText instance.\n\n Creating Amazon Alexa response blank with populated \"outputSpeech\" and\n \"card\" sections.\n\n Returns:\n response: Amazon Alexa representation of PlainText state."} {"_id": "q_992", "text": "Returns json compatible state of the Button instance.\n\n Returns:\n control_json: Json representation of Button state."} {"_id": "q_993", "text": "Returns json compatible state of the ButtonsFrame instance.\n\n Returns json compatible state of the ButtonsFrame instance including\n all nested buttons.\n\n Returns:\n 
control_json: Json representation of ButtonsFrame state."} {"_id": "q_994", "text": "Returns MS Bot Framework compatible state of the ButtonsFrame instance.\n\n Creating MS Bot Framework activity blank with RichCard in \"attachments\". RichCard\n is populated with CardActions corresponding buttons embedded in ButtonsFrame.\n\n Returns:\n control_json: MS Bot Framework representation of ButtonsFrame state."} {"_id": "q_995", "text": "Calculates recall at k ranking metric.\n\n Args:\n y_true: Labels. Not used in the calculation of the metric.\n y_predicted: Predictions.\n Each prediction contains ranking score of all ranking candidates for the particular data sample.\n It is supposed that the ranking score for the true candidate goes first in the prediction.\n\n Returns:\n Recall at k"} {"_id": "q_996", "text": "Recursively apply config's variables values to its property"} {"_id": "q_997", "text": "Convert relative paths to absolute with resolving user directory."} {"_id": "q_998", "text": "Builds and returns the Component from corresponding dictionary of parameters."} {"_id": "q_999", "text": "Thread run method implementation."} {"_id": "q_1000", "text": "Deletes Conversation instance.\n\n Args:\n conversation_key: Conversation key."} {"_id": "q_1001", "text": "Conducts cleanup of periodical certificates with expired validation."} {"_id": "q_1002", "text": "Conducts series of Alexa request verifications against Amazon Alexa requirements.\n\n Args:\n signature_chain_url: Signature certificate URL from SignatureCertChainUrl HTTP header.\n signature: Base64 decoded Alexa request signature from Signature HTTP header.\n request_body: full HTTPS request body\n Returns:\n result: True if verification was successful, False if not."} {"_id": "q_1003", "text": "Extract full regularization path explored during lambda search from glm model.\n\n :param model: source lambda search model"} {"_id": "q_1004", "text": "Create a custom GLM model using the given coefficients.\n\n Needs 
to be passed source model trained on the dataset to extract the dataset information from.\n\n :param model: source model, used for extracting dataset information\n :param coefs: dictionary containing model coefficients\n :param threshold: (optional, only for binomial) decision threshold used for classification"} {"_id": "q_1005", "text": "Determine if the H2O cluster is running or not.\n\n :returns: True if the cluster is up; False otherwise"} {"_id": "q_1006", "text": "List all jobs performed by the cluster."} {"_id": "q_1007", "text": "Return the list of all known timezones."} {"_id": "q_1008", "text": "Update information in this object from another H2OCluster instance.\n\n :param H2OCluster other: source of the new information for this object."} {"_id": "q_1009", "text": "Parameters for metalearner algorithm\n\n Type: ``dict`` (default: ``None``).\n Example: metalearner_gbm_params = {'max_depth': 2, 'col_sample_rate': 0.3}"} {"_id": "q_1010", "text": "Repeatedly test a function waiting for it to return True.\n\n Arguments:\n test_func -- A function that will be run repeatedly\n error -- A function that will be run to produce an error message\n it will be called with (node, timeTakenSecs, numberOfRetries)\n OR\n -- A string that will be interpolated with a dictionary of\n { 'timeTakenSecs', 'numberOfRetries' }\n timeoutSecs -- How long in seconds to keep trying before declaring a failure\n retryDelaySecs -- How long to wait between retry attempts"} {"_id": "q_1011", "text": "Return the summary for a single column for a single Frame in the h2o cluster."} {"_id": "q_1012", "text": "Delete a frame on the h2o cluster, given its key."} {"_id": "q_1013", "text": "Return a model builder or all of the model builders known to the\n h2o cluster. The model builders are contained in a dictionary\n called \"model_builders\" at the top level of the result. The\n dictionary maps algorithm names to parameters lists. 
Each of the\n parameters contains all the metadata required by a client to\n present a model building interface to the user.\n\n If parameters = True, return the parameters as well."} {"_id": "q_1014", "text": "Check a dictionary of model builder parameters on the h2o cluster \n using the given algorithm and model parameters."} {"_id": "q_1015", "text": "Score a model on the h2o cluster on the given Frame and return only the model metrics."} {"_id": "q_1016", "text": "ModelMetrics list."} {"_id": "q_1017", "text": "Create a new reservation for count instances"} {"_id": "q_1018", "text": "Terminate all the instances given by their ids"} {"_id": "q_1019", "text": "Reboot all the instances given by their ids"} {"_id": "q_1020", "text": "Return fully qualified function name.\n\n This method will attempt to find \"full name\" of the given function object. This full name is either of\n the form \"<class name>.<method name>\" if the function is a class method, or \"<module name>.<function name>\"\n if it's a regular function. Thus, this is an attempt to back-port func.__qualname__ to Python 2.\n\n :param func: a function object.\n\n :returns: string with the function's full name as explained above."} {"_id": "q_1021", "text": "Return function's declared arguments as a string.\n\n For example for this function it returns \"func, highlight=None\"; for the ``_wrap`` function it returns\n \"text, wrap_at=120, indent=4\". 
This should usually coincide with the function's declaration (the part\n which is inside the parentheses)."} {"_id": "q_1022", "text": "Return piece of text, wrapped around if needed.\n\n :param text: text that may be too long and then needs to be wrapped.\n :param wrap_at: the maximum line length.\n :param indent: number of spaces to prepend to all subsequent lines after the first."} {"_id": "q_1023", "text": "Wait until job's completion."} {"_id": "q_1024", "text": "Fit an H2O model as part of a scikit-learn pipeline or grid search.\n\n A warning will be issued if a caller other than sklearn attempts to use this method.\n\n :param H2OFrame X: An H2OFrame consisting of the predictor variables.\n :param H2OFrame y: An H2OFrame consisting of the response variable.\n :param params: Extra arguments.\n :returns: The current instance of H2OEstimator for method chaining."} {"_id": "q_1025", "text": "Obtain parameters for this estimator.\n\n Used primarily for sklearn Pipelines and sklearn grid search.\n\n :param deep: If True, return parameters of all sub-objects that are estimators.\n\n :returns: A dict of parameters"} {"_id": "q_1026", "text": "This function is written to remove sandbox directories if they exist under the\n parent_dir.\n\n :param parent_dir: string denoting full parent directory path\n :param dir_name: string denoting directory path which could be a sandbox\n :return: None"} {"_id": "q_1027", "text": "Look at the stdout log and figure out which port the JVM chose.\n\n If successful, port number is stored in self.port; otherwise the\n program is terminated. 
This call is blocking, and will wait for\n up to 30s for the server to start up."} {"_id": "q_1028", "text": "Look at the stdout log and wait until the cluster of proper size is formed.\n This call is blocking.\n Exit if this fails.\n\n :param nodes_per_cloud:\n :return none"} {"_id": "q_1029", "text": "Normal node shutdown.\n Ignore failures for now.\n\n :return none"} {"_id": "q_1030", "text": "Return an ip to use to talk to this cluster."} {"_id": "q_1031", "text": "Return a port to use to talk to this cluster."} {"_id": "q_1032", "text": "Mean absolute error regression loss.\n\n :param y_actual: H2OFrame of actual response.\n :param y_predicted: H2OFrame of predicted response.\n :param weights: (Optional) sample weights\n :returns: mean absolute error loss (best is 0.0)."} {"_id": "q_1033", "text": "Explained variance regression score function.\n\n :param y_actual: H2OFrame of actual response.\n :param y_predicted: H2OFrame of predicted response.\n :param weights: (Optional) sample weights\n :returns: the explained variance score."} {"_id": "q_1034", "text": "Assert that string variable matches the provided regular expression.\n\n :param v: variable to check.\n :param regex: regular expression to check against (can be either a string, or compiled regexp)."} {"_id": "q_1035", "text": "Assert that variable satisfies the provided condition.\n\n :param v: variable to check. Its value is only used for error reporting.\n :param bool cond: condition that must be satisfied. Should be somehow related to the variable ``v``.\n :param message: message string to use instead of the default."} {"_id": "q_1036", "text": "Magic variable name retrieval.\n\n This function is designed as a helper for assert_is_type() function. 
Typically such an assertion is used like this::\n\n assert_is_type(num_threads, int)\n\n If the variable `num_threads` turns out to be non-integer, we would like to raise an exception such as\n\n H2OTypeError(\"`num_threads` is expected to be integer, but got \")\n\n and in order to compose an error message like that, we need to know that the variable that was passed to\n assert_is_type() carries the name \"num_threads\". Naturally, the variable itself knows nothing about that.\n\n This is where this function comes in: we walk up the stack trace until the first frame outside of this\n file, find the original line that called the assert_is_type() function, and extract the variable name from\n that line. This is slightly fragile; in particular, we assume that only one assert_is_type statement appears per line\n and that this statement does not spill over multiple lines, etc."} {"_id": "q_1037", "text": "Return True if the variable is of the specified type, and False otherwise.\n\n :param var: variable to check\n :param vtype: expected variable's type"} {"_id": "q_1038", "text": "Attempt to find the source code of the ``lambda_fn`` within the string ``src``."} {"_id": "q_1039", "text": "Return True if the variable does not match any of the types, and False otherwise."} {"_id": "q_1040", "text": "Retrieve the config as a dictionary of key-value pairs."} {"_id": "q_1041", "text": "Return possible locations for the .h2oconfig file, one at a time."} {"_id": "q_1042", "text": "Start the progress bar, and return only when the progress reaches 100%.\n\n :param progress_fn: the executor function (or a generator). This function should take no arguments\n and return either a single number -- the current progress level, or a tuple (progress level, delay),\n where delay is the time interval for when the progress should be checked again. 
This function may at\n any point raise the ``StopIteration(message)`` exception, which will interrupt the progress bar,\n display the ``message`` in red font, and then re-raise the exception.\n :raises StopIteration: if the job is interrupted. The reason for interruption is provided in the exception's\n message. The message will say \"cancelled\" if the job was interrupted by the user by pressing Ctrl+C."} {"_id": "q_1043", "text": "Save the current model progress into ``self._progress_data``, and update ``self._next_poll_time``.\n\n :param res: tuple (progress level, poll delay).\n :param now: current timestamp."} {"_id": "q_1044", "text": "Compute t0, x0, v0, ve."} {"_id": "q_1045", "text": "Estimate the moment when the underlying process is expected to reach completion.\n\n This function should only return future times. Also, this function is not allowed to return time moments less\n than self._next_poll_time if the actual progress is below 100% (this is because we won't know that the\n process has finished until we poll the external progress function)."} {"_id": "q_1046", "text": "Determine when to query the progress status next.\n\n This function is used if the external progress function did not return a time interval for when it should be\n queried next."} {"_id": "q_1047", "text": "Return the projected time when progress level `x_target` will be reached.\n\n Since the underlying progress model is nonlinear, we need to use Newton's method to find a numerical solution\n to the equation x(t) = x_target."} {"_id": "q_1048", "text": "Print the rendered string to stdout."} {"_id": "q_1049", "text": "Initial rendering stage, done in order to compute the widths of all widgets."} {"_id": "q_1050", "text": "Find the current STDOUT's width, in characters."} {"_id": "q_1051", "text": "Returns the encoding map as an object that maps 'column_name' -> 'frame_with_encoding_map_for_this_column_name'\n\n :param frame frame: An H2OFrame object with which to create the target encoding 
map"} {"_id": "q_1052", "text": "Reload frame information from the backend H2O server."} {"_id": "q_1053", "text": "The type for the given column.\n\n :param col: either a name, or an index of the column to look up\n :returns: type of the column, one of: ``str``, ``int``, ``real``, ``enum``, ``time``, ``bool``.\n :raises H2OValueError: if such column does not exist in the frame."} {"_id": "q_1054", "text": "Extract columns of the specified type from the frame.\n\n :param str coltype: A character string indicating which column type to filter by. This must be\n one of the following:\n\n - ``\"numeric\"`` - Numeric, but not categorical or time\n - ``\"categorical\"`` - Integer, with a categorical/factor String mapping\n - ``\"string\"`` - String column\n - ``\"time\"`` - Long msec since the Unix Epoch - with a variety of display/parse options\n - ``\"uuid\"`` - UUID\n - ``\"bad\"`` - No none-NA rows (triple negative! all NAs or zero rows)\n\n :returns: list of indices of columns that have the requested type"} {"_id": "q_1055", "text": "Display summary information about the frame.\n\n Summary includes min/mean/max/sigma and other rollup data.\n\n :param bool return_data: Return a dictionary of the summary output"} {"_id": "q_1056", "text": "Generate an in-depth description of this H2OFrame.\n\n This will print to the console the dimensions of the frame; names/types/summary statistics for each column;\n and finally first ten rows of the frame.\n\n :param bool chunk_summary: Retrieve the chunk summary along with the distribution summary"} {"_id": "q_1057", "text": "Get the factor levels.\n\n :returns: A list of lists, one list per column, of levels."} {"_id": "q_1058", "text": "Change names of columns in the frame.\n\n Dict key is an index or name of the column whose name is to be set.\n Dict value is the new name of the column.\n\n :param columns: dict-like transformations to apply to the column names"} {"_id": "q_1059", "text": "Test whether elements of an H2OFrame are 
contained in the ``item``.\n\n :param items: An item or a list of items to compare the H2OFrame against.\n\n :returns: An H2OFrame of 0s and 1s showing whether each element in the original H2OFrame is contained in item."} {"_id": "q_1060", "text": "Build a fold assignments column for cross-validation.\n\n Rows are assigned a fold according to the current row number modulo ``n_folds``.\n\n :param int n_folds: An integer specifying the number of validation sets to split the training data into.\n :returns: A single-column H2OFrame with the fold assignments."} {"_id": "q_1061", "text": "Compactly display the internal structure of an H2OFrame."} {"_id": "q_1062", "text": "Obtain the dataset as a python-local object.\n\n :param bool use_pandas: If True (default) then return the H2OFrame as a pandas DataFrame (requires that the\n ``pandas`` library was installed). If False, then return the contents of the H2OFrame as plain nested\n list, in a row-wise order.\n :param bool header: If True (default), then column names will be appended as the first row in list\n\n :returns: A python object (a list of lists of strings, each list is a row, if use_pandas=False, otherwise\n a pandas DataFrame) containing this H2OFrame instance's data."} {"_id": "q_1063", "text": "Compute quantiles.\n\n :param List[float] prob: list of probabilities for which quantiles should be computed.\n :param str combine_method: for even samples this setting determines how to combine quantiles. This can be\n one of ``\"interpolate\"``, ``\"average\"``, ``\"low\"``, ``\"high\"``.\n :param weights_column: optional weights for each row. If not given, all rows are assumed to have equal\n importance. 
This parameter can be either the name of column containing the observation weights in\n this frame, or a single-column separate H2OFrame of observation weights.\n\n :returns: a new H2OFrame containing the quantiles and probabilities."} {"_id": "q_1064", "text": "Append data to this frame column-wise.\n\n :param H2OFrame data: append columns of frame ``data`` to the current frame. You can also cbind a number,\n in which case it will get converted into a constant column.\n\n :returns: new H2OFrame with all frames in ``data`` appended column-wise."} {"_id": "q_1065", "text": "Split a frame into distinct subsets of size determined by the given ratios.\n\n The number of subsets is always 1 more than the number of ratios given. Note that\n this does not give an exact split. H2O is designed to be efficient on big data\n using a probabilistic splitting method rather than an exact split. For example\n when specifying a split of 0.75/0.25, H2O will produce a test/train split with\n an expected value of 0.75/0.25 rather than exactly 0.75/0.25. 
On small datasets,\n the sizes of the resulting splits will deviate from the expected value more than\n on big data, where they will be very close to exact.\n\n :param List[float] ratios: The fractions of rows for each split.\n :param List[str] destination_frames: The names of the split frames.\n :param int seed: seed for the random number generator\n\n :returns: A list of H2OFrames"} {"_id": "q_1066", "text": "Return a new Frame that fills NA along a given axis and along a given direction with a maximum fill length\n\n :param method: ``\"forward\"`` or ``\"backward\"``\n :param axis: 0 for columnar-wise or 1 for row-wise fill\n :param maxlen: Max number of consecutive NA's to fill\n \n :return:"} {"_id": "q_1067", "text": "Impute missing values into the frame, modifying it in-place.\n\n :param int column: Index of the column to impute, or -1 to impute the entire frame.\n :param str method: The method of imputation: ``\"mean\"``, ``\"median\"``, or ``\"mode\"``.\n :param str combine_method: When the method is ``\"median\"``, this setting dictates how to combine quantiles\n for even samples. One of ``\"interpolate\"``, ``\"average\"``, ``\"low\"``, ``\"high\"``.\n :param by: The list of columns to group on.\n :param H2OFrame group_by_frame: Impute the values with this pre-computed grouped frame.\n :param List values: The list of impute values, one per column. 
None indicates to skip the column.\n\n :returns: A list of values used in the imputation or the group-by result used in imputation."} {"_id": "q_1068", "text": "Insert missing values into the current frame, modifying it in-place.\n\n Randomly replaces a user-specified fraction of entries in a H2O dataset with missing\n values.\n\n :param float fraction: A number between 0 and 1 indicating the fraction of entries to replace with missing.\n :param int seed: The seed for the random number generator used to determine which values to make missing.\n\n :returns: the original H2OFrame with missing values inserted."} {"_id": "q_1069", "text": "Compute the variance-covariance matrix of one or two H2OFrames.\n\n :param H2OFrame y: If this parameter is given, then a covariance matrix between the columns of the target\n frame and the columns of ``y`` is computed. If this parameter is not provided then the covariance matrix\n of the target frame is returned. If target frame has just a single column, then return the scalar variance\n instead of the matrix. Single rows are treated as single columns.\n :param str use: A string indicating how to handle missing values. This could be one of the following:\n\n - ``\"everything\"``: outputs NaNs whenever one of its contributing observations is missing\n - ``\"all.obs\"``: presence of missing observations will throw an error\n - ``\"complete.obs\"``: discards missing values along with all observations in their rows so that only\n complete observations are used\n :param bool na_rm: an alternative to ``use``: when this is True then default value for ``use`` is\n ``\"everything\"``; and if False then default ``use`` is ``\"complete.obs\"``. This parameter has no effect\n if ``use`` is given explicitly.\n\n :returns: An H2OFrame of the covariance matrix of the columns of this frame (if ``y`` is not given),\n or with the columns of ``y`` (if ``y`` is given). 
However when this frame and ``y`` are both single rows\n or single columns, then the variance is returned as a scalar."} {"_id": "q_1070", "text": "Compute the correlation matrix of one or two H2OFrames.\n\n :param H2OFrame y: If this parameter is provided, then compute correlation between the columns of ``y``\n and the columns of the current frame. If this parameter is not given, then just compute the correlation\n matrix for the columns of the current frame.\n :param str use: A string indicating how to handle missing values. This could be one of the following:\n\n - ``\"everything\"``: outputs NaNs whenever one of its contributing observations is missing\n - ``\"all.obs\"``: presence of missing observations will throw an error\n - ``\"complete.obs\"``: discards missing values along with all observations in their rows so that only\n complete observations are used\n :param bool na_rm: an alternative to ``use``: when this is True then default value for ``use`` is\n ``\"everything\"``; and if False then default ``use`` is ``\"complete.obs\"``. This parameter has no effect\n if ``use`` is given explicitly.\n\n :returns: An H2OFrame of the correlation matrix of the columns of this frame (if ``y`` is not given),\n or with the columns of ``y`` (if ``y`` is given). However when this frame and ``y`` are both single rows\n or single columns, then the correlation is returned as a scalar."} {"_id": "q_1071", "text": "Convert columns in the current frame to categoricals.\n\n :returns: new H2OFrame with columns of the \"enum\" type."} {"_id": "q_1072", "text": "Split the strings in the target column on the given regular expression pattern.\n\n :param str pattern: The split pattern.\n :returns: H2OFrame containing columns of the split strings."} {"_id": "q_1073", "text": "For each string in the frame, count the occurrences of the provided pattern. 
If countmatches is applied to\n a frame, all columns of the frame must be of type string; otherwise, the returned frame will contain errors.\n\n The pattern here is a plain string, not a regular expression. We will search for the occurrences of the\n pattern as a substring in each element of the frame. This function is applicable to frames containing only\n string or categorical columns.\n\n :param str pattern: The pattern to count matches on in each string. This can also be a list of strings,\n in which case all of them will be searched for.\n :returns: numeric H2OFrame with the same shape as the original, containing counts of matches of the\n pattern for each cell in the original frame."} {"_id": "q_1074", "text": "For each string, return a new string that is a substring of the original string.\n\n If end_index is not specified, then the substring extends to the end of the original string. If the start_index\n is greater than the length of the string, or is greater than or equal to the end_index, an empty string is\n returned. 
Negative start_index is coerced to 0.\n\n :param int start_index: The index of the original string at which to start the substring, inclusive.\n :param int end_index: The index of the original string at which to end the substring, exclusive.\n :returns: An H2OFrame containing the specified substrings."} {"_id": "q_1075", "text": "Return a copy of the column with leading characters removed.\n\n The set argument is a string specifying the set of characters to be removed.\n If omitted, the set argument defaults to removing whitespace.\n\n :param character set: The set of characters to lstrip from strings in column.\n :returns: a new H2OFrame with the same shape as the original frame and having all its values\n trimmed from the left (equivalent of Python's ``str.lstrip()``)."} {"_id": "q_1076", "text": "Compute the counts of values appearing in a column, or co-occurence counts between two columns.\n\n :param H2OFrame data2: An optional single column to aggregate counts by.\n :param bool dense: If True (default) then use dense representation, which lists only non-zero counts,\n 1 combination per row. Set to False to expand counts across all combinations.\n\n :returns: H2OFrame of the counts at each combination of factor levels"} {"_id": "q_1077", "text": "Compute a histogram over a numeric column.\n\n :param breaks: Can be one of ``\"sturges\"``, ``\"rice\"``, ``\"sqrt\"``, ``\"doane\"``, ``\"fd\"``, ``\"scott\"``;\n or a single number for the number of breaks; or a list containing the split points, e.g:\n ``[-50, 213.2123, 9324834]``. 
If breaks is \"fd\", the MAD is used over the IQR in computing bin width.\n :param bool plot: If True (default), then a plot will be generated using ``matplotlib``.\n\n :returns: If ``plot`` is False, return H2OFrame with these columns: breaks, counts, mids_true,\n mids, and density; otherwise this method draws a plot and returns nothing."} {"_id": "q_1078", "text": "Substitute the first occurrence of pattern in a string with replacement.\n\n :param str pattern: A regular expression.\n :param str replacement: A replacement string.\n :param bool ignore_case: If True then pattern will match case-insensitively.\n :returns: an H2OFrame with all values matching ``pattern`` replaced with ``replacement``."} {"_id": "q_1079", "text": "Searches for matches to argument `pattern` within each element\n of a string column.\n\n Default behavior is to return indices of the elements matching the pattern. Parameter\n `output_logical` can be used to return a logical vector indicating if the element matches\n the pattern (1) or not (0).\n\n :param str pattern: A character string containing a regular expression.\n :param bool ignore_case: If True, then case is ignored during matching.\n :param bool invert: If True, then identify elements that do not match the pattern.\n :param bool output_logical: If True, then return logical vector of indicators instead of list of matching positions\n :return: H2OFrame holding the matching positions or a logical list if `output_logical` is enabled."} {"_id": "q_1080", "text": "Construct a column that can be used to perform a random stratified split.\n\n :param float test_frac: The fraction of rows that will belong to the \"test\".\n :param int seed: The seed for the random number generator.\n\n :returns: an H2OFrame having single categorical column with two levels: ``\"train\"`` and ``\"test\"``.\n\n :examples:\n >>> stratsplit = df[\"y\"].stratified_split(test_frac=0.3, seed=12349453)\n >>> train = df[stratsplit==\"train\"]\n >>> test = 
df[stratsplit==\"test\"]\n >>>\n >>> # check that the distributions among the initial frame, and the\n >>> # train/test frames match\n >>> df[\"y\"].table()[\"Count\"] / df[\"y\"].table()[\"Count\"].sum()\n >>> train[\"y\"].table()[\"Count\"] / train[\"y\"].table()[\"Count\"].sum()\n >>> test[\"y\"].table()[\"Count\"] / test[\"y\"].table()[\"Count\"].sum()"} {"_id": "q_1081", "text": "Get the index of the max value in a column or row\n\n :param bool skipna: If True (default), then NAs are ignored during the search. Otherwise presence\n of NAs renders the entire result NA.\n :param int axis: Direction of finding the max index. If 0 (default), then the max index is searched columnwise, and the\n result is a frame with 1 row and number of columns as in the original frame. If 1, then the max index is searched\n rowwise and the result is a frame with 1 column, and number of rows equal to the number of rows in the original frame.\n :returns: either a list of max index values per-column or an H2OFrame containing max index values\n per-row from the original frame."} {"_id": "q_1082", "text": "Parse the provided file, and return Code object."} {"_id": "q_1083", "text": "Move the token by `drow` rows and `dcol` columns."} {"_id": "q_1084", "text": "Convert the parsed representation back into the source code."} {"_id": "q_1085", "text": "The standardized centers for the kmeans model."} {"_id": "q_1086", "text": "Connect to an existing H2O server, remote or local.\n\n There are two ways to connect to a server: either pass a `server` parameter containing an instance of\n an H2OLocalServer, or specify `ip` and `port` of the server that you want to connect to.\n\n :param server: An H2OLocalServer instance to connect to (optional).\n :param url: Full URL of the server to connect to (can be used instead of `ip` + `port` + `https`).\n :param ip: The ip address (or host name) of the server where H2O is running.\n :param port: Port number that H2O service is listening to.\n :param 
https: Set to True to connect via https:// instead of http://.\n :param verify_ssl_certificates: When using https, setting this to False will disable SSL certificates verification.\n :param auth: Either a (username, password) pair for basic authentication, an instance of h2o.auth.SpnegoAuth\n or one of the requests.auth authenticator objects.\n :param proxy: Proxy server address.\n :param cookies: Cookie (or list of) to add to request\n :param verbose: Set to False to disable printing connection status messages.\n :param connection_conf: Connection configuration object encapsulating connection parameters.\n :returns: the new :class:`H2OConnection` object."} {"_id": "q_1087", "text": "Perform a REST API request to a previously connected server.\n\n This function is mostly for internal purposes, but may occasionally be useful for direct access to\n the backend H2O server. It has same parameters as :meth:`H2OConnection.request `."} {"_id": "q_1088", "text": "Upload a dataset from the provided local path to the H2O cluster.\n\n Does a single-threaded push to H2O. Also see :meth:`import_file`.\n\n :param path: A path specifying the location of the data to upload.\n :param destination_frame: The unique hex key assigned to the imported file. If none is given, a key will\n be automatically generated.\n :param header: -1 means the first line is data, 0 means guess, 1 means first line is header.\n :param sep: The field separator character. Values on each line of the file are separated by\n this character. If not provided, the parser will automatically detect the separator.\n :param col_names: A list of column names for the file.\n :param col_types: A list of types or a dictionary of column names to types to specify whether columns\n should be forced to a certain type upon import parsing. If a list, the types for elements that are\n one will be guessed. 
The possible types a column may have are:\n\n - \"unknown\" - this will force the column to be parsed as all NA\n - \"uuid\" - the values in the column must be true UUID or will be parsed as NA\n - \"string\" - force the column to be parsed as a string\n - \"numeric\" - force the column to be parsed as numeric. H2O will handle the compression of the numeric\n data in the optimal manner.\n - \"enum\" - force the column to be parsed as a categorical column.\n - \"time\" - force the column to be parsed as a time column. H2O will attempt to parse the following\n list of date time formats: (date) \"yyyy-MM-dd\", \"yyyy MM dd\", \"dd-MMM-yy\", \"dd MMM yy\", (time)\n \"HH:mm:ss\", \"HH:mm:ss:SSS\", \"HH:mm:ss:SSSnnnnnn\", \"HH.mm.ss\" \"HH.mm.ss.SSS\", \"HH.mm.ss.SSSnnnnnn\".\n Times can also contain \"AM\" or \"PM\".\n :param na_strings: A list of strings, or a list of lists of strings (one list per column), or a dictionary\n of column names to strings which are to be interpreted as missing values.\n :param skipped_columns: an integer lists of column indices to skip and not parsed into the final frame from the import file.\n\n :returns: a new :class:`H2OFrame` instance.\n\n :examples:\n >>> frame = h2o.upload_file(\"/path/to/local/data\")"} {"_id": "q_1089", "text": "Import a dataset that is already on the cluster.\n\n The path to the data must be a valid path for each node in the H2O cluster. If some node in the H2O cluster\n cannot see the file, then an exception will be thrown by the H2O cluster. Does a parallel/distributed\n multi-threaded pull of the data. The main difference between this method and :func:`upload_file` is that\n the latter works with local files, whereas this method imports remote files (i.e. 
files local to the server).\n If you are running the H2O server on your own machine, then both methods behave the same.\n\n :param path: path(s) specifying the location of the data to import, or a path to a directory of files to import\n :param destination_frame: The unique hex key assigned to the imported file. If none is given, a key will be\n automatically generated.\n :param parse: If True, the file should be parsed after import. If False, then a list is returned containing the file path.\n :param header: -1 means the first line is data, 0 means guess, 1 means first line is header.\n :param sep: The field separator character. Values on each line of the file are separated by\n this character. If not provided, the parser will automatically detect the separator.\n :param col_names: A list of column names for the file.\n :param col_types: A list of types or a dictionary of column names to types to specify whether columns\n should be forced to a certain type upon import parsing. If a list, the types for elements that are\n one will be guessed. The possible types a column may have are:\n\n - \"unknown\" - this will force the column to be parsed as all NA\n - \"uuid\" - the values in the column must be true UUID or will be parsed as NA\n - \"string\" - force the column to be parsed as a string\n - \"numeric\" - force the column to be parsed as numeric. H2O will handle the compression of the numeric\n data in the optimal manner.\n - \"enum\" - force the column to be parsed as a categorical column.\n - \"time\" - force the column to be parsed as a time column. 
H2O will attempt to parse the following\n list of date time formats: (date) \"yyyy-MM-dd\", \"yyyy MM dd\", \"dd-MMM-yy\", \"dd MMM yy\", (time)\n \"HH:mm:ss\", \"HH:mm:ss:SSS\", \"HH:mm:ss:SSSnnnnnn\", \"HH.mm.ss\" \"HH.mm.ss.SSS\", \"HH.mm.ss.SSSnnnnnn\".\n Times can also contain \"AM\" or \"PM\".\n :param na_strings: A list of strings, or a list of lists of strings (one list per column), or a dictionary\n of column names to strings which are to be interpreted as missing values.\n :param pattern: Character string containing a regular expression to match file(s) in the folder if `path` is a\n directory.\n :param skipped_columns: an integer list of column indices to skip and not parsed into the final frame from the import file.\n :param custom_non_data_line_markers: If a line in imported file starts with any character in given string it will NOT be imported. Empty string means all lines are imported, None means that default behaviour for given format will be used\n\n :returns: a new :class:`H2OFrame` instance.\n\n :examples:\n >>> # Single file import\n >>> iris = import_file(\"h2o-3/smalldata/iris.csv\")\n >>> # Return all files in the folder iris/ matching the regex r\"iris_.*\\.csv\"\n >>> iris_pattern = h2o.import_file(path = \"h2o-3/smalldata/iris\",\n ... pattern = \"iris_.*\\.csv\")"} {"_id": "q_1090", "text": "Import Hive table to H2OFrame in memory.\n\n Make sure to start H2O with Hive on classpath. Uses hive-site.xml on classpath to connect to Hive.\n\n :param database: Name of Hive database (default database will be used by default)\n :param table: name of Hive table to import\n :param partitions: a list of lists of strings - partition key column values of partitions you want to import.\n :param allow_multi_format: enable import of partitioned tables with different storage formats used. 
WARNING:\n this may fail on out-of-memory for tables with a large number of small partitions.\n\n :returns: an :class:`H2OFrame` containing data of the specified Hive table.\n\n :examples:\n >>> my_citibike_data = h2o.import_hive_table(\"default\", \"table\", [[\"2017\", \"01\"], [\"2017\", \"02\"]])"} {"_id": "q_1091", "text": "Import the SQL table that is the result of the specified SQL query to H2OFrame in memory.\n\n Creates a temporary SQL table from the specified sql_query.\n Runs multiple SELECT SQL queries on the temporary table concurrently for parallel ingestion, then drops the table.\n Be sure to start the h2o.jar in the terminal with your downloaded JDBC driver in the classpath::\n\n java -cp : water.H2OApp\n\n Also see h2o.import_sql_table. Currently supported SQL databases are MySQL, PostgreSQL, MariaDB, Hive, Oracle \n and Microsoft SQL Server.\n\n :param connection_url: URL of the SQL database connection as specified by the Java Database Connectivity (JDBC)\n Driver. For example, \"jdbc:mysql://localhost:3306/menagerie?&useSSL=false\"\n :param select_query: SQL query starting with `SELECT` that returns rows from one or more database tables.\n :param username: username for SQL server\n :param password: password for SQL server\n :param optimize: DEPRECATED. Ignored - use fetch_mode instead. Optimize import of SQL table for faster imports.\n :param use_temp_table: whether a temporary table should be created from select_query\n :param temp_table_name: name of temporary table to be created from select_query\n :param fetch_mode: Set to DISTRIBUTED to enable distributed import. 
Set to SINGLE to force a sequential read by a single node\n from the database.\n\n :returns: an :class:`H2OFrame` containing data of the specified SQL query.\n\n :examples:\n >>> conn_url = \"jdbc:mysql://172.16.2.178:3306/ingestSQL?&useSSL=false\"\n >>> select_query = \"SELECT bikeid from citibike20k\"\n >>> username = \"root\"\n >>> password = \"abc123\"\n >>> my_citibike_data = h2o.import_sql_select(conn_url, select_query,\n ... username, password, fetch_mode)"} {"_id": "q_1092", "text": "Parse dataset using the parse setup structure.\n\n :param setup: Result of ``h2o.parse_setup()``\n :param id: an id for the frame.\n :param first_line_is_header: -1, 0, 1 if the first line is to be used as the header\n\n :returns: an :class:`H2OFrame` object."} {"_id": "q_1093", "text": "Load a model from the server.\n\n :param model_id: The model identification in H2O\n\n :returns: Model object, a subclass of H2OEstimator"} {"_id": "q_1094", "text": "Download the POJO for this model to the directory specified by path; if path is \"\", then dump to screen.\n\n :param model: the model whose scoring POJO should be retrieved.\n :param path: an absolute path to the directory where POJO should be saved.\n :param get_jar: retrieve the h2o-genmodel.jar also (will be saved to the same folder ``path``).\n :param jar_name: Custom name of genmodel jar.\n :returns: location of the downloaded POJO file."} {"_id": "q_1095", "text": "Download an H2O data set to a CSV file on the local disk.\n\n Warning: Files located on the H2O server may be very large! 
Make sure you have enough\n hard drive space to accommodate the entire file.\n\n :param data: an H2OFrame object to be downloaded.\n :param filename: name for the CSV file where the data should be saved to."} {"_id": "q_1096", "text": "Export a given H2OFrame to a path on the machine this python session is currently connected to.\n\n :param frame: the Frame to save to disk.\n :param path: the path to the save point on disk.\n :param force: if True, overwrite any preexisting file with the same path\n :param parts: enables export to multiple 'part' files instead of just a single file.\n Convenient for large datasets that take too long to store in a single file.\n Use parts=-1 to instruct H2O to determine the optimal number of part files or\n specify your desired maximum number of part files. Path needs to be a directory\n when exporting to multiple files, also that directory must be empty.\n Default is ``parts = 1``, which is to export to a single file."} {"_id": "q_1097", "text": "Convert an H2O data object into a python-specific object.\n\n WARNING! This will pull all data local!\n\n If Pandas is available (and use_pandas is True), then pandas will be used to parse the\n data frame. 
Otherwise, a list-of-lists populated by character data will be returned (so\n the types of data will all be str).\n\n :param data: an H2O data object.\n :param use_pandas: If True, try to use pandas for reading in the data.\n :param header: If True, return column names as first element in list\n\n :returns: List of lists (Rows x Columns)."} {"_id": "q_1098", "text": "H2O built-in demo facility.\n\n :param funcname: A string that identifies the h2o python function to demonstrate.\n :param interactive: If True, the user will be prompted to continue the demonstration after every segment.\n :param echo: If True, the python commands that are executed will be displayed.\n :param test: If True, `h2o.init()` will not be called (used for pyunit testing).\n\n :example:\n >>> import h2o\n >>> h2o.demo(\"gbm\")"} {"_id": "q_1099", "text": "Imports a data file within the 'h2o_data' folder."} {"_id": "q_1100", "text": "Create Model Metrics from predicted and actual values in H2O.\n\n :param H2OFrame predicted: an H2OFrame containing predictions.\n :param H2OFrame actuals: an H2OFrame containing actual values.\n :param domain: list of response factors for classification.\n :param distribution: distribution for regression."} {"_id": "q_1101", "text": "Check that the provided frame id is valid in Rapids language."} {"_id": "q_1102", "text": "Convert given number of bytes into a human readable representation, i.e. add prefix such as kb, Mb, Gb,\n etc. 
The `size` argument must be a non-negative integer.\n\n :param size: integer representing byte size of something\n :return: string representation of the size, in human-readable form"} {"_id": "q_1103", "text": "Return a \"canonical\" version of slice ``s``.\n\n :param slice s: the original slice expression\n :param total int: total number of elements in the collection sliced by ``s``\n :return slice: a slice equivalent to ``s`` but not containing any negative indices or Nones."} {"_id": "q_1104", "text": "MOJO scoring function to take a Pandas frame and use MOJO model as zip file to score.\n\n :param dataframe: Pandas frame to score.\n :param mojo_zip_path: Path to MOJO zip downloaded from H2O.\n :param genmodel_jar_path: Optional, path to genmodel jar file. If None (default) then the h2o-genmodel.jar in the same\n folder as the MOJO zip will be used.\n :param classpath: Optional, specifies custom user defined classpath which will be used when scoring. If None\n (default) then the default classpath for this MOJO model will be used.\n :param java_options: Optional, custom user defined options for Java. By default ``-Xmx4g`` is used.\n :param verbose: Optional, if True, then additional debug information will be printed. False by default.\n :return: Pandas frame with predictions"} {"_id": "q_1105", "text": "MOJO scoring function to take a CSV file and use MOJO model as zip file to score.\n\n :param input_csv_path: Path to input CSV file.\n :param mojo_zip_path: Path to MOJO zip downloaded from H2O.\n :param output_csv_path: Optional, name of the output CSV file with computed predictions. If None (default), then\n predictions will be saved as prediction.csv in the same folder as the MOJO zip.\n :param genmodel_jar_path: Optional, path to genmodel jar file. If None (default) then the h2o-genmodel.jar in the same\n folder as the MOJO zip will be used.\n :param classpath: Optional, specifies custom user defined classpath which will be used when scoring. 
If None\n (default) then the default classpath for this MOJO model will be used.\n :param java_options: Optional, custom user defined options for Java. By default ``-Xmx4g -XX:ReservedCodeCacheSize=256m`` is used.\n :param verbose: Optional, if True, then additional debug information will be printed. False by default.\n :return: List of computed predictions"} {"_id": "q_1106", "text": "The decorator to mark deprecated functions."} {"_id": "q_1107", "text": "Wait until grid finishes computing."} {"_id": "q_1108", "text": "Print a detailed summary of the explored models."} {"_id": "q_1109", "text": "Print models sorted by metric."} {"_id": "q_1110", "text": "Derive and return the model parameters used to train the particular grid search model.\n\n :param str id: The model id of the model with hyperparameters of interest.\n :param bool display: Flag to indicate whether to display the hyperparameter names.\n\n :returns: A dict of model parameters derived from the hyper-parameters used to train this particular model."} {"_id": "q_1111", "text": "Retrieve an H2OGridSearch instance.\n\n Optionally specify a metric by which to sort models and a sort order.\n Note that if neither cross-validation nor a validation frame is used in the grid search, then the\n training metrics will display in the \"get grid\" output. If a validation frame is passed to the grid, and\n ``nfolds = 0``, then the validation metrics will display. However, if ``nfolds`` > 1, then cross-validation\n metrics will display even if a validation frame is provided.\n\n :param str sort_by: A metric by which to sort the models in the grid space. 
Choices are: ``\"logloss\"``,\n ``\"residual_deviance\"``, ``\"mse\"``, ``\"auc\"``, ``\"r2\"``, ``\"accuracy\"``, ``\"precision\"``, ``\"recall\"``,\n ``\"f1\"``, etc.\n :param bool decreasing: Sort the models in decreasing order of metric if true, otherwise sort in increasing\n order (default).\n\n :returns: A new H2OGridSearch instance optionally sorted on the specified metric."} {"_id": "q_1112", "text": "Return the importance of components associated with a PCA model.\n\n use_pandas: ``bool`` (default: ``False``)."} {"_id": "q_1113", "text": "Convert archetypes of the model into original feature space.\n\n :param H2OFrame test_data: The dataset upon which the model was trained.\n :param bool reverse_transform: Whether the transformation of the training data during model-building\n should be reversed on the projected archetypes.\n\n :returns: model archetypes projected back into the original training data's feature space."} {"_id": "q_1114", "text": "Convert names with underscores into camelcase.\n\n For example:\n \"num_rows\" => \"numRows\"\n \"very_long_json_name\" => \"veryLongJsonName\"\n \"build_GBM_model\" => \"buildGbmModel\"\n \"KEY\" => \"key\"\n \"middle___underscores\" => \"middleUnderscores\"\n \"_exclude_fields\" => \"_excludeFields\" (retain initial/trailing underscores)\n \"__http_status__\" => \"__httpStatus__\"\n\n :param name: name to be converted"} {"_id": "q_1115", "text": "Dedent text to the specific indentation level.\n\n :param ind: common indentation level for the resulting text (number of spaces to append to every line)\n :param text: text that should be transformed.\n :return: ``text`` with all common indentation removed, and then the specified amount of indentation added."} {"_id": "q_1116", "text": "This function will extract the various operation times for GLRM model building iterations.\n\n :param javaLogText:\n :return:"} {"_id": "q_1117", "text": "Main program. 
Take user input, parse it and call other functions to execute the commands\n and extract run summary and store run result in json file\n\n @return: none"} {"_id": "q_1118", "text": "Close an existing connection; once closed it cannot be used again.\n\n Strictly speaking it is not necessary to close all connections that you opened -- we have several mechanisms\n in place that will do so automatically (__del__(), __exit__() and atexit() handlers), however there is also\n no good reason to make this method private."} {"_id": "q_1119", "text": "Return the session id of the current connection.\n\n The session id is issued (through an API request) the first time it is requested, but no sooner. This is\n because generating a session id puts it into the DKV on the server, which effectively locks the cluster. Once\n issued, the session id will stay the same until the connection is closed."} {"_id": "q_1120", "text": "Prepare `filename` to be sent to the server.\n\n The \"preparation\" consists of creating a data structure suitable\n for passing to requests.request()."} {"_id": "q_1121", "text": "Log response from an API request."} {"_id": "q_1122", "text": "Given a response object, prepare it to be handed over to the external caller.\n\n Preparation steps include:\n * detect if the response has error status, and convert it to an appropriate exception;\n * detect Content-Type, and based on that either parse the response as JSON or return as plain text."} {"_id": "q_1123", "text": "Helper function to print connection status messages when in verbose mode."} {"_id": "q_1124", "text": "Download the leader model in AutoML in MOJO format.\n\n :param path: the path where MOJO file should be saved.\n :param get_genmodel_jar: if True, then also download h2o-genmodel.jar and store it in folder ``path``.\n :param genmodel_name: Custom name of genmodel jar\n :returns: name of the MOJO file written."} {"_id": "q_1125", "text": "Fit this object by computing the means and standard deviations 
used by the transform method.\n\n :param X: An H2OFrame; may contain NAs and/or categoricals.\n :param y: None (Ignored)\n :param params: Ignored\n :returns: This H2OScaler instance"} {"_id": "q_1126", "text": "Scale an H2OFrame with the fitted means and standard deviations.\n\n :param X: An H2OFrame; may contain NAs and/or categoricals.\n :param y: None (Ignored)\n :param params: (Ignored)\n :returns: A scaled H2OFrame."} {"_id": "q_1127", "text": "Remove extra characters before the actual string we are\n looking for. The Jenkins console output is encoded using utf-8. However, the redirect\n function can only encode using ASCII. I have googled for half a day with no\n results on how to resolve the issue. Hence, we are going to bite the bullet and just manually\n get rid of the junk.\n\n Parameters\n ----------\n\n string_content : str\n contains a line read in from jenkins console\n\n :return: str: contains the content of the line after the string '[0m'"} {"_id": "q_1128", "text": "Find the slave machine on which a Jenkins job was executed. It will save this\n information in g_failed_test_info_dict. In addition, it will\n delete this particular function handle off the temp_func_list as we do not need\n to perform this action again.\n\n Parameters\n ----------\n\n each_line : str\n contains a line read in from jenkins console\n temp_func_list : list of Python function handles\n contains a list of functions that we want to invoke to extract information from\n the Jenkins console text.\n\n :return: bool to determine if text mining should continue on the jenkins console text"} {"_id": "q_1129", "text": "Find if a Jenkins job has taken too long to finish and was killed. 
It will save this\n information in g_failed_test_info_dict.\n\n Parameters\n ----------\n\n each_line : str\n contains a line read in from jenkins console\n temp_func_list : list of Python function handles\n contains a list of functions that we want to invoke to extract information from\n the Jenkins console text.\n\n :return: bool to determine if text mining should continue on the jenkins console text"} {"_id": "q_1130", "text": "Find if a Jenkins job has failed to build. It will save this\n information in g_failed_test_info_dict. In addition, it will delete this particular\n function handle off the temp_func_list as we do not need to perform this action again.\n\n Parameters\n ----------\n\n each_line : str\n contains a line read in from jenkins console\n temp_func_list : list of Python function handles\n contains a list of functions that we want to invoke to extract information from\n the Jenkins console text.\n\n :return: bool to determine if text mining should continue on the jenkins console text"} {"_id": "q_1131", "text": "Find the build id of a jenkins job. It will save this\n information in g_failed_test_info_dict. 
In addition, it will delete this particular\n function handle off the temp_func_list as we do not need to perform this action again.\n\n Parameters\n ----------\n\n each_line : str\n contains a line read in from jenkins console\n temp_func_list : list of Python function handles\n contains a list of functions that we want to invoke to extract information from\n the Jenkins console text.\n\n :return: bool to determine if text mining should continue on the jenkins console text"} {"_id": "q_1132", "text": "Save the log scraping results into logs denoted by g_output_filename_failed_tests and\n g_output_filename_passed_tests.\n\n :return: none"} {"_id": "q_1133", "text": "Concatenate all log files into a summary text file to be sent to users\n at the end of a daily log scraping.\n\n :return: none"} {"_id": "q_1134", "text": "Write one log file into the summary text file.\n\n Parameters\n ----------\n fhandle : Python file handle\n file handle to the summary text file\n file2read : Python file handle\n file handle to log file where we want to add its content to the summary text file.\n\n :return: none"} {"_id": "q_1135", "text": "Loop through all java messages that are not associated with a unit test and\n write them into a log file.\n\n Parameters\n ----------\n key : str\n 9.general_bad_java_messages\n val : list of list of str\n contains the bad java messages and the message types.\n\n\n :return: none"} {"_id": "q_1136", "text": "Load in pickle file that contains dict structure with bad java messages to ignore per unit test\n or for all cases. The ignored bad java info is stored in g_ok_java_messages dict.\n\n :return:"} {"_id": "q_1137", "text": "Return enum constant `s` converted to a canonical snake-case."} {"_id": "q_1138", "text": "Find the percentile of a list of values.\n\n @parameter N - is a list of values. 
Note N MUST BE already sorted.\n @parameter percent - a float value from 0.0 to 1.0.\n @parameter key - optional key function to compute value from each element of N.\n\n @return - the percentile of the values"} {"_id": "q_1139", "text": "Dictionary of the default parameters of the model."} {"_id": "q_1140", "text": "Retrieve Model Score History.\n\n :returns: The score history as an H2OTwoDimTable or a Pandas DataFrame."} {"_id": "q_1141", "text": "Print innards of model, without regard to type."} {"_id": "q_1142", "text": "Retrieve the residual degrees of freedom if this model has the attribute, or None otherwise.\n\n :param bool train: Get the residual dof for the training set. If both train and valid are False, then train\n is selected by default.\n :param bool valid: Get the residual dof for the validation set. If both train and valid are True, then train\n is selected by default.\n\n :returns: Return the residual dof, or None if it is not present."} {"_id": "q_1143", "text": "Return the coefficients which can be applied to the non-standardized data.\n\n Note: standardize = True by default; if set to False then coef() returns the coefficients which are fit directly."} {"_id": "q_1144", "text": "Download the POJO for this model to the directory specified by path.\n\n If path is an empty string, then dump the output to screen.\n\n :param path: An absolute path to the directory where POJO should be saved.\n :param get_genmodel_jar: if True, then also download h2o-genmodel.jar and store it in folder ``path``.\n :param genmodel_name: Custom name of genmodel jar\n :returns: name of the POJO file written."} {"_id": "q_1145", "text": "Check that y_actual and y_predicted have the same length.\n\n :param H2OFrame y_actual:\n :param H2OFrame y_predicted:\n\n :returns: None"} {"_id": "q_1146", "text": "Deep Learning model demo."} {"_id": "q_1147", "text": "GLM model demo."} {"_id": "q_1148", "text": "Wait for a key press on the console and return it.\n\n Borrowed from 
http://stackoverflow.com/questions/983354/how-do-i-make-python-to-wait-for-a-pressed-key"} {"_id": "q_1149", "text": "Convert to a python 'data frame'."} {"_id": "q_1150", "text": "Print the contents of this table."} {"_id": "q_1151", "text": "Return the location of an h2o.jar executable.\n\n :param path0: Explicitly given h2o.jar path. If provided, then we will simply check whether the file is there,\n otherwise we will search for an executable in locations returned by ._jar_paths().\n\n :raises H2OStartupError: if no h2o.jar executable can be found."} {"_id": "q_1152", "text": "Produce potential paths for an h2o.jar executable."} {"_id": "q_1153", "text": "Convert uri to absolute filepath\n\n Parameters\n ----------\n uri : string\n URI of python module to return path for\n\n Returns\n -------\n path : None or string\n Returns None if there is no valid path for this URI\n Otherwise returns absolute file system path for URI\n\n Examples\n --------\n >>> docwriter = ApiDocWriter('sphinx')\n >>> import sphinx\n >>> modpath = sphinx.__path__[0]\n >>> res = docwriter._uri2path('sphinx.builder')\n >>> res == os.path.join(modpath, 'builder.py')\n True\n >>> res = docwriter._uri2path('sphinx')\n >>> res == os.path.join(modpath, '__init__.py')\n True\n >>> docwriter._uri2path('sphinx.does_not_exist')"} {"_id": "q_1154", "text": "Parse lines of text for functions and classes"} {"_id": "q_1155", "text": "Generate API reST files.\n\n Parameters\n ----------\n outdir : string\n Directory name in which to store files\n We create automatic filenames for each module\n \n Returns\n -------\n None\n\n Notes\n -----\n Sets self.written_modules to list of written modules"} {"_id": "q_1156", "text": "Update the g_ok_java_messages dict structure by\n 1. add the new java ignored messages stored in message_dict if action == 1\n 2. 
remove the java ignored messages stored in message_dict if action == 2.\n\n Parameters\n ----------\n\n message_dict : Python dict\n key: unit test name or \"general\"\n value: list of java messages that are to be ignored if they are found when running the test stored as the key. If\n the key is \"general\", the list of java messages are to be ignored when running all tests.\n action : int\n if 1: add java ignored messages stored in message_dict to g_ok_java_messages dict;\n if 2: remove java ignored messages stored in message_dict from g_ok_java_messages dict.\n\n :return: none"} {"_id": "q_1157", "text": "Save the ignored java message dict stored in g_ok_java_messages into a pickle file for future use.\n\n :return: none"} {"_id": "q_1158", "text": "Write the java ignored messages in g_ok_java_messages into a text file for humans to read.\n\n :return: none"} {"_id": "q_1159", "text": "Parse user inputs and set the corresponding global variables to perform the\n necessary tasks.\n\n Parameters\n ----------\n\n argv : string array\n contains flags and input options from users\n\n :return:"} {"_id": "q_1160", "text": "Find all python files in the given directory and all subfolders."} {"_id": "q_1161", "text": "Search the file for any magic incantations.\n\n :param filename: file to search\n :returns: a tuple containing the spell and then maybe some extra words (or None if no magic present)"} {"_id": "q_1162", "text": "Transform H2OFrame using a MOJO Pipeline.\n\n :param data: Frame to be transformed.\n :param allow_timestamps: Allows datetime columns to be used directly with MOJO pipelines. It is recommended\n to parse your datetime columns as Strings when using pipelines because pipelines can interpret certain datetime\n formats in a different way. If your H2OFrame is parsed from a binary file format (eg. 
Parquet) instead of CSV\n it is safe to turn this option on and use datetime columns directly.\n\n :returns: A new H2OFrame."} {"_id": "q_1163", "text": "This function will print out the intermittents onto the screen for casual viewing. It will also print out\n where the giant summary dictionary is going to be stored.\n\n :return: None"} {"_id": "q_1164", "text": "Produce the desired metric plot.\n\n :param type: the type of metric plot (currently, only ROC supported).\n :param server: if True, generate plot inline using matplotlib's \"Agg\" backend.\n :returns: None"} {"_id": "q_1165", "text": "Get the confusion matrix for the specified metric\n\n :param metrics: A string (or list of strings) among metrics listed in :const:`max_metrics`. Defaults to 'f1'.\n :param thresholds: A value (or list of values) between 0 and 1.\n :returns: a list of ConfusionMatrix objects (if there are more than one to return), or a single ConfusionMatrix\n (if there is only one)."} {"_id": "q_1166", "text": "Returns True if a deep water model can be built, or False otherwise."} {"_id": "q_1167", "text": "This method will remove data from the summary text file and the dictionary file for tests that occur before\n the number of months specified by monthToKeep.\n\n :param monthToKeep:\n :return:"} {"_id": "q_1168", "text": "Set site domain and name."} {"_id": "q_1169", "text": "Append a header, preserving any duplicate entries."} {"_id": "q_1170", "text": "Given a function, parse the docstring as YAML and return a dictionary of info."} {"_id": "q_1171", "text": "Given `directory` and `packages` arguments, return a list of all the\n directories that should be used for serving static files from."} {"_id": "q_1172", "text": "Returns an HTTP response, given the incoming path, method and request headers."} {"_id": "q_1173", "text": "Send ASGI websocket messages, ensuring valid state transitions."} {"_id": "q_1174", "text": "Adds the default_data to data and dumps it to a json."} {"_id": 
"q_1175", "text": "Comments on the last user_id's media"} {"_id": "q_1176", "text": "Returns login and password stored in `secret.txt`."} {"_id": "q_1177", "text": "Likes the last user_id's media"} {"_id": "q_1178", "text": "Reads a list from file. One line - one item.\n Returns the list of file items."} {"_id": "q_1179", "text": "Finds the max and median long and short position concentrations\n in each time period specified by the index of positions.\n\n Parameters\n ----------\n positions : pd.DataFrame\n The positions that the strategy takes over time.\n\n Returns\n -------\n pd.DataFrame\n Columns are max long, max short, median long, and median short\n position concentrations. Rows are time periods."} {"_id": "q_1180", "text": "Determines the long and short allocations in a portfolio.\n\n Parameters\n ----------\n positions : pd.DataFrame\n The positions that the strategy takes over time.\n\n Returns\n -------\n df_long_short : pd.DataFrame\n Long and short allocations as a decimal\n percentage of the total net liquidation"} {"_id": "q_1181", "text": "Returns style factor exposure of an algorithm's positions\n\n Parameters\n ----------\n positions : pd.DataFrame\n Daily equity positions of algorithm, in dollars.\n - See full explanation in create_risk_tear_sheet\n\n risk_factor : pd.DataFrame\n Daily risk factor per asset.\n - DataFrame with dates as index and equities as columns\n - Example:\n Equity(24 Equity(62\n [AAPL]) [ABT])\n 2017-04-03\t -0.51284 1.39173\n 2017-04-04\t -0.73381 0.98149\n 2017-04-05\t -0.90132 1.13981"} {"_id": "q_1182", "text": "Plots DataFrame output of compute_style_factor_exposures as a line graph\n\n Parameters\n ----------\n tot_style_factor_exposure : pd.Series\n Daily style factor exposures (output of compute_style_factor_exposures)\n - Time series with decimal style factor exposures\n - Example:\n 2017-04-24 0.037820\n 2017-04-25 0.016413\n 2017-04-26 -0.021472\n 2017-04-27 -0.024859\n\n factor_name : string\n Name of style factor, for use 
in graph title\n - Defaults to tot_style_factor_exposure.name"} {"_id": "q_1183", "text": "Plots outputs of compute_sector_exposures as area charts\n\n Parameters\n ----------\n long_exposures, short_exposures : arrays\n Arrays of long and short sector exposures (output of\n compute_sector_exposures).\n\n sector_dict : dict or OrderedDict\n Dictionary of all sectors\n - See full description in compute_sector_exposures"} {"_id": "q_1184", "text": "Plots output of compute_sector_exposures as line graphs\n\n Parameters\n ----------\n net_exposures : arrays\n Arrays of net sector exposures (output of compute_sector_exposures).\n\n sector_dict : dict or OrderedDict\n Dictionary of all sectors\n - See full description in compute_sector_exposures"} {"_id": "q_1185", "text": "Generate a number of tear sheets that are useful\n for analyzing a strategy's performance.\n\n - Fetches benchmarks if needed.\n - Creates tear sheets for returns, and significant events.\n If possible, also creates tear sheets for position analysis,\n transaction analysis, and Bayesian analysis.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - Time series with decimal returns.\n - Example:\n 2015-07-16 -0.012143\n 2015-07-17 0.045350\n 2015-07-20 0.030957\n 2015-07-21 0.004902\n positions : pd.DataFrame, optional\n Daily net position values.\n - Time series of dollar amount invested in each position and cash.\n - Days where stocks are not held can be represented by 0 or NaN.\n - Non-working capital is labelled 'cash'\n - Example:\n index 'AAPL' 'MSFT' cash\n 2004-01-09 13939.3800 -14012.9930 711.5585\n 2004-01-12 14492.6300 -14624.8700 27.1821\n 2004-01-13 -13853.2800 13653.6400 -43.6375\n transactions : pd.DataFrame, optional\n Executed trade volumes and fill prices.\n - One row per trade.\n - Trades on different names that occur at the\n same time will have identical indices.\n - Example:\n index amount price symbol\n 2004-01-09 12:18:01 483 
324.12 'AAPL'\n 2004-01-09 12:18:01 122 83.10 'MSFT'\n 2004-01-13 14:12:23 -75 340.43 'AAPL'\n market_data : pd.Panel, optional\n Panel with items axis of 'price' and 'volume' DataFrames.\n The major and minor axes should match those of the\n passed positions DataFrame (same dates and symbols).\n slippage : int/float, optional\n Basis points of slippage to apply to returns before generating\n tearsheet stats and plots.\n If a value is provided, slippage parameter sweep\n plots will be generated from the unadjusted returns.\n Transactions and positions must also be passed.\n - See txn.adjust_returns_for_slippage for more details.\n live_start_date : datetime, optional\n The point in time when the strategy began live trading,\n after its backtest period. This datetime should be normalized.\n hide_positions : bool, optional\n If True, will not output any symbol names.\n bayesian: boolean, optional\n If True, causes the generation of a Bayesian tear sheet.\n round_trips: boolean, optional\n If True, causes the generation of a round trip tear sheet.\n sector_mappings : dict or pd.Series, optional\n Security identifier to sector mapping.\n Security ids as keys, sectors as values.\n estimate_intraday: boolean or str, optional\n Instead of using the end-of-day positions, use the point in the day\n where we have the most $ invested. This will adjust positions to\n better approximate and represent how an intraday strategy behaves.\n By default, this is 'infer', and an attempt will be made to detect\n an intraday strategy. Specifying this value will prevent detection.\n cone_std : float, or tuple, optional\n If float, The standard deviation to use for the cone plots.\n If tuple, Tuple of standard deviation values to use for the cone plots\n - The cone is a normal distribution with this standard deviation\n centered around a linear regression.\n bootstrap : boolean (optional)\n Whether to perform bootstrap analysis for the performance\n metrics. 
Takes a few minutes longer.\n turnover_denom : str\n Either AGB or portfolio_value, default AGB.\n - See full explanation in txn.get_turnover.\n factor_returns : pd.Dataframe, optional\n Returns by factor, with date as index and factors as columns\n factor_loadings : pd.Dataframe, optional\n Factor loadings for all days in the date range, with date and\n ticker as index, and factors as columns.\n pos_in_dollars : boolean, optional\n indicates whether positions is in dollars\n header_rows : dict or OrderedDict, optional\n Extra rows to display at the top of the perf stats table.\n set_context : boolean, optional\n If True, set default plotting style context.\n - See plotting.context().\n factor_partitions : dict, optional\n dict specifying how factors should be separated in perf attrib\n factor returns and risk exposures plots\n - See create_perf_attrib_tear_sheet()."} {"_id": "q_1186", "text": "Generate a number of plots for analyzing a\n strategy's positions and holdings.\n\n - Plots: gross leverage, exposures, top positions, and holdings.\n - Will also print the top positions held.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in create_full_tear_sheet.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in create_full_tear_sheet.\n show_and_plot_top_pos : int, optional\n By default, this is 2, and both prints and plots the\n top 10 positions.\n If this is 0, it will only plot; if 1, it will only print.\n hide_positions : bool, optional\n If True, will not output any symbol names.\n Overrides show_and_plot_top_pos to 0 to suppress text output.\n return_fig : boolean, optional\n If True, returns the figure that was plotted on.\n sector_mappings : dict or pd.Series, optional\n Security identifier to sector mapping.\n Security ids as keys, sectors as values.\n transactions : pd.DataFrame, optional\n Prices and amounts of executed trades. 
One row per trade.\n - See full explanation in create_full_tear_sheet.\n estimate_intraday: boolean or str, optional\n Approximate returns for intraday strategies.\n See description in create_full_tear_sheet."} {"_id": "q_1187", "text": "Generate plots and tables for analyzing a strategy's performance.\n\n Parameters\n ----------\n returns : pd.Series\n Returns for each day in the date range.\n\n positions: pd.DataFrame\n Daily holdings (in dollars or percentages), indexed by date.\n Will be converted to percentages if positions are in dollars.\n Short positions show up as cash in the 'cash' column.\n\n factor_returns : pd.DataFrame\n Returns by factor, with date as index and factors as columns\n\n factor_loadings : pd.DataFrame\n Factor loadings for all days in the date range, with date\n and ticker as index, and factors as columns.\n\n transactions : pd.DataFrame, optional\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in create_full_tear_sheet.\n - Default is None.\n\n pos_in_dollars : boolean, optional\n Flag indicating whether `positions` are in dollars or percentages\n If True, positions are in dollars.\n\n return_fig : boolean, optional\n If True, returns the figure that was plotted on.\n\n factor_partitions : dict\n dict specifying how factors should be separated in factor returns\n and risk exposures plots\n - Example:\n {'style': ['momentum', 'size', 'value', ...],\n 'sector': ['technology', 'materials', ... ]}"} {"_id": "q_1188", "text": "Sums the absolute value of shares traded in each name on each day.\n Adds columns containing the closing price and total daily volume for\n each day-ticker combination.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Prices and amounts of executed trades. 
One row per trade.\n - See full explanation in tears.create_full_tear_sheet\n market_data : pd.Panel\n Contains \"volume\" and \"price\" DataFrames for the tickers\n in the passed positions DataFrames\n\n Returns\n -------\n txn_daily : pd.DataFrame\n Daily totals for transacted shares in each traded name.\n price and volume columns for close price and daily volume for\n the corresponding ticker, respectively."} {"_id": "q_1189", "text": "For each traded name, find the daily transaction total that consumed\n the greatest proportion of available daily bar volume.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in create_full_tear_sheet.\n market_data : pd.Panel\n Panel with items axis of 'price' and 'volume' DataFrames.\n The major and minor axes should match those of the\n passed positions DataFrame (same dates and symbols).\n last_n_days : integer\n Compute for only the last n days of the passed backtest data."} {"_id": "q_1190", "text": "Maps a single transaction row to a dictionary.\n\n Parameters\n ----------\n txn : pd.DataFrame\n A single transaction object to convert to a dictionary.\n\n Returns\n -------\n dict\n Mapped transaction."} {"_id": "q_1191", "text": "Extract daily transaction data from set of transaction objects.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Time series containing one row per symbol (and potentially\n duplicate datetime indices) and columns for amount and\n price.\n\n Returns\n -------\n pd.DataFrame\n Daily transaction volume and number of shares.\n - See full explanation in tears.create_full_tear_sheet."} {"_id": "q_1192", "text": "Apply a slippage penalty for every dollar traded.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in create_full_tear_sheet.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in 
create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in create_full_tear_sheet.\n slippage_bps: int/float\n Basis points of slippage to apply.\n\n Returns\n -------\n pd.Series\n Time series of daily returns, adjusted for slippage."} {"_id": "q_1193", "text": "Merge transactions of the same direction separated by less than\n max_delta time duration.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Prices and amounts of executed round_trips. One row per trade.\n - See full explanation in tears.create_full_tear_sheet\n\n max_delta : pandas.Timedelta (optional)\n Merge transactions in the same direction separated by less\n than max_delta time duration.\n\n\n Returns\n -------\n transactions : pd.DataFrame"} {"_id": "q_1194", "text": "Group transactions into \"round trips\". First, transactions are\n grouped by day and directionality. Then, long and short\n transactions are matched to create round-trip round_trips for which\n PnL, duration and returns are computed. Crossings where a position\n changes from long to short and vice-versa are handled correctly.\n\n Under the hood, we reconstruct the individual shares in a\n portfolio over time and match round_trips in a FIFO-order.\n\n For example, the following transactions would constitute one round trip:\n index amount price symbol\n 2004-01-09 12:18:01 10 50 'AAPL'\n 2004-01-09 15:12:53 10 100 'AAPL'\n 2004-01-13 14:41:23 -10 100 'AAPL'\n 2004-01-13 15:23:34 -10 200 'AAPL'\n\n First, the first two and last two round_trips will be merged into a two\n single transactions (computing the price via vwap). Then, during\n the portfolio reconstruction, the two resulting transactions will\n be merged and result in 1 round-trip trade with a PnL of\n (150 * 20) - (75 * 20) = 1500.\n\n Note, that round trips do not have to close out positions\n completely. 
For example, we could have removed the last\n transaction in the example above and still generated a round-trip\n over 10 shares with 10 shares left in the portfolio to be matched\n with a later transaction.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in tears.create_full_tear_sheet\n\n portfolio_value : pd.Series (optional)\n Portfolio value (all net assets including cash) over time.\n Note that portfolio_value needs to be the beginning-of-day value,\n so either use .shift() or positions.sum(axis='columns') / (1+returns).\n\n Returns\n -------\n round_trips : pd.DataFrame\n DataFrame with one row per round trip. The returns column\n contains returns with respect to the portfolio value, while\n rt_returns are the returns with respect to the capital invested\n in that particular round-trip."} {"_id": "q_1195", "text": "Generate various round-trip statistics.\n\n Parameters\n ----------\n round_trips : pd.DataFrame\n DataFrame with one row per round trip trade.\n - See full explanation in round_trips.extract_round_trips\n\n Returns\n -------\n stats : dict\n A dictionary where each value is a pandas DataFrame containing\n various round-trip statistics.\n\n See also\n --------\n round_trips.print_round_trip_stats"} {"_id": "q_1196", "text": "Print various round-trip statistics. Tries to pretty-print tables\n with HTML output if run inside IPython NB.\n\n Parameters\n ----------\n round_trips : pd.DataFrame\n DataFrame with one row per round trip trade.\n - See full explanation in round_trips.extract_round_trips\n\n See also\n --------\n round_trips.gen_round_trip_stats"} {"_id": "q_1197", "text": "Attributes the performance of a returns stream to a set of risk factors.\n\n Preprocesses inputs, and then calls empyrical.perf_attrib. 
See\n empyrical.perf_attrib for more info.\n\n Performance attribution determines how much each risk factor, e.g.,\n momentum, the technology sector, etc., contributed to total returns, as\n well as the daily exposure to each of the risk factors. The returns that\n can be attributed to one of the given risk factors are the\n `common_returns`, and the returns that _cannot_ be attributed to a risk\n factor are the `specific_returns`, or the alpha. The common_returns and\n specific_returns summed together will always equal the total returns.\n\n Parameters\n ----------\n returns : pd.Series\n Returns for each day in the date range.\n - Example:\n 2017-01-01 -0.017098\n 2017-01-02 0.002683\n 2017-01-03 -0.008669\n\n positions: pd.DataFrame\n Daily holdings (in dollars or percentages), indexed by date.\n Will be converted to percentages if positions are in dollars.\n Short positions show up as cash in the 'cash' column.\n - Examples:\n AAPL TLT XOM cash\n 2017-01-01 34 58 10 0\n 2017-01-02 22 77 18 0\n 2017-01-03 -15 27 30 15\n\n AAPL TLT XOM cash\n 2017-01-01 0.333333 0.568627 0.098039 0.0\n 2017-01-02 0.188034 0.658120 0.153846 0.0\n 2017-01-03 0.208333 0.375000 0.416667 0.0\n\n factor_returns : pd.DataFrame\n Returns by factor, with date as index and factors as columns\n - Example:\n momentum reversal\n 2017-01-01 0.002779 -0.005453\n 2017-01-02 0.001096 0.010290\n\n factor_loadings : pd.DataFrame\n Factor loadings for all days in the date range, with date and ticker as\n index, and factors as columns.\n - Example:\n momentum reversal\n dt ticker\n 2017-01-01 AAPL -1.592914 0.852830\n TLT 0.184864 0.895534\n XOM 0.993160 1.149353\n 2017-01-02 AAPL -0.140009 -0.524952\n TLT -1.066978 0.185435\n XOM -1.798401 0.761549\n\n\n transactions : pd.DataFrame, optional\n Executed trade volumes and fill prices. Used to check the turnover of\n the algorithm. 
Default is None, in which case the turnover check is\n skipped.\n\n - One row per trade.\n - Trades on different names that occur at the\n same time will have identical indices.\n - Example:\n index amount price symbol\n 2004-01-09 12:18:01 483 324.12 'AAPL'\n 2004-01-09 12:18:01 122 83.10 'MSFT'\n 2004-01-13 14:12:23 -75 340.43 'AAPL'\n\n pos_in_dollars : bool\n Flag indicating whether `positions` are in dollars or percentages.\n If True, positions are in dollars.\n\n Returns\n -------\n tuple of (risk_exposures_portfolio, perf_attribution)\n\n risk_exposures_portfolio : pd.DataFrame\n df indexed by datetime, with factors as columns\n - Example:\n momentum reversal\n dt\n 2017-01-01 -0.238655 0.077123\n 2017-01-02 0.821872 1.520515\n\n perf_attribution : pd.DataFrame\n df with factors, common returns, and specific returns as columns,\n and datetimes as index\n - Example:\n momentum reversal common_returns specific_returns\n dt\n 2017-01-01 0.249087 0.935925 1.185012 1.185012\n 2017-01-02 -0.003194 -0.400786 -0.403980 -0.403980"} {"_id": "q_1198", "text": "Calls `perf_attrib` using inputs, and displays outputs using\n `utils.print_table`."} {"_id": "q_1199", "text": "Plot total, specific, and common returns.\n\n Parameters\n ----------\n perf_attrib_data : pd.DataFrame\n df with factors, common returns, and specific returns as columns,\n and datetimes as index. Assumes the `total_returns` column is NOT\n cost adjusted.\n - Example:\n momentum reversal common_returns specific_returns\n dt\n 2017-01-01 0.249087 0.935925 1.185012 1.185012\n 2017-01-02 -0.003194 -0.400786 -0.403980 -0.403980\n\n cost : pd.Series, optional\n if present, gets subtracted from `perf_attrib_data['total_returns']`,\n and gets plotted separately\n\n ax : matplotlib.axes.Axes\n axes on which plots are made. 
if None, current axes will be used\n\n Returns\n -------\n ax : matplotlib.axes.Axes"} {"_id": "q_1200", "text": "Plot each factor's contribution to performance.\n\n Parameters\n ----------\n perf_attrib_data : pd.DataFrame\n df with factors, common returns, and specific returns as columns,\n and datetimes as index\n - Example:\n momentum reversal common_returns specific_returns\n dt\n 2017-01-01 0.249087 0.935925 1.185012 1.185012\n 2017-01-02 -0.003194 -0.400786 -0.403980 -0.403980\n\n ax : matplotlib.axes.Axes\n axes on which plots are made. if None, current axes will be used\n\n title : str, optional\n title of plot\n\n Returns\n -------\n ax : matplotlib.axes.Axes"} {"_id": "q_1201", "text": "Convert positions to percentages if necessary, and change them\n to long format.\n\n Parameters\n ----------\n positions: pd.DataFrame\n Daily holdings (in dollars or percentages), indexed by date.\n Will be converted to percentages if positions are in dollars.\n Short positions show up as cash in the 'cash' column.\n\n pos_in_dollars : bool\n Flag indicating whether `positions` are in dollars or percentages\n If True, positions are in dollars."} {"_id": "q_1202", "text": "Compute cumulative returns, less costs."} {"_id": "q_1203", "text": "If zipline asset objects are used, we want to print them out prettily\n within the tear sheet. This function should only be applied directly\n before displaying."} {"_id": "q_1204", "text": "Logic for checking if a strategy is intraday and processing it.\n\n Parameters\n ----------\n estimate: boolean or str, optional\n Approximate returns for intraday strategies.\n See description in tears.create_full_tear_sheet.\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in create_full_tear_sheet.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. 
One row per trade.\n - See full explanation in create_full_tear_sheet.\n\n Returns\n -------\n pd.DataFrame\n Daily net position values, adjusted for intraday movement."} {"_id": "q_1205", "text": "Intraday strategies will often not hold positions at the day end.\n This attempts to find the point in the day that best represents\n the activity of the strategy on that day, and effectively resamples\n the end-of-day positions with the positions at this point of day.\n The point of day is found by detecting when our exposure in the\n market is at its maximum point. Note that this is an estimate.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in create_full_tear_sheet.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in create_full_tear_sheet.\n\n Returns\n -------\n pd.DataFrame\n Daily net position values, resampled for intraday behavior."} {"_id": "q_1206", "text": "Drop entries from rets so that the start and end dates of rets match those\n of benchmark_rets.\n\n Parameters\n ----------\n rets : pd.Series\n Daily returns of the strategy, noncumulative.\n - See pf.tears.create_full_tear_sheet for more details\n\n benchmark_rets : pd.Series\n Daily returns of the benchmark, noncumulative.\n\n Returns\n -------\n clipped_rets : pd.Series\n Daily noncumulative returns with index clipped to match that of\n benchmark returns."} {"_id": "q_1207", "text": "Calls the currently registered 'returns_func'\n\n Parameters\n ----------\n symbol : object\n An identifier for the asset whose return\n series is desired.\n e.g. 
ticker symbol or database ID\n start : date, optional\n Earliest date to fetch data for.\n Defaults to earliest date available.\n end : date, optional\n Latest date to fetch data for.\n Defaults to latest date available.\n\n Returns\n -------\n pandas.Series\n Returned by the current 'returns_func'"} {"_id": "q_1208", "text": "Decorator to set plotting context and axes style during function call."} {"_id": "q_1209", "text": "Create pyfolio default axes style context.\n\n Under the hood, calls and returns seaborn.axes_style() with\n some custom settings. Usually you would use in a with-context.\n\n Parameters\n ----------\n style : str, optional\n Name of seaborn style.\n rc : dict, optional\n Config flags.\n\n Returns\n -------\n seaborn plotting context\n\n Example\n -------\n >>> with pyfolio.plotting.axes_style(style='whitegrid'):\n >>> pyfolio.create_full_tear_sheet(..., set_context=False)\n\n See also\n --------\n For more information, see seaborn.plotting_context()."} {"_id": "q_1210", "text": "Plots a heatmap of returns by month.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to seaborn plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1211", "text": "Plots a distribution of monthly returns.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1212", "text": "Plots total amount of stocks with an active position, breaking out\n short and long into transparent filled regions.\n\n Parameters\n ----------\n 
returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n positions : pd.DataFrame, optional\n Daily net position values.\n - See full explanation in tears.create_full_tear_sheet.\n legend_loc : matplotlib.loc, optional\n The location of the legend on the plot.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1213", "text": "Plots cumulative returns highlighting top drawdown periods.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n top : int, optional\n Number of top drawdown periods to plot (default 10).\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1214", "text": "Plots how far underwater returns are over time, or plots current\n drawdown vs. 
date.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1215", "text": "Plots cumulative rolling returns versus some benchmarks'.\n\n Backtest returns are in green, and out-of-sample (live trading)\n returns are in red.\n\n Additionally, a non-parametric cone plot may be added to the\n out-of-sample returns region.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n factor_returns : pd.Series, optional\n Daily noncumulative returns of the benchmark factor to which betas are\n computed. Usually a benchmark such as market returns.\n - This is in the same style as returns.\n live_start_date : datetime, optional\n The date when the strategy began live trading, after\n its backtest period. This date should be normalized.\n logy : bool, optional\n Whether to log-scale the y-axis.\n cone_std : float, or tuple, optional\n If float, The standard deviation to use for the cone plots.\n If tuple, Tuple of standard deviation values to use for the cone plots\n - See timeseries.forecast_cone_bounds for more details.\n legend_loc : matplotlib.loc, optional\n The location of the legend on the plot.\n volatility_match : bool, optional\n Whether to normalize the volatility of the returns to those of the\n benchmark returns. This helps compare strategies with different\n volatilities. 
Requires passing of benchmark_rets.\n cone_function : function, optional\n Function to use when generating forecast probability cone.\n The function signature must follow the form:\n def cone(in_sample_returns (pd.Series),\n days_to_project_forward (int),\n cone_std= (float, or tuple),\n starting_value= (int, or float))\n See timeseries.forecast_cone_bootstrap for an example.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1216", "text": "Plots the rolling 6-month and 12-month beta versus date.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n factor_returns : pd.Series\n Daily noncumulative returns of the benchmark factor to which betas are\n computed. Usually a benchmark such as market returns.\n - This is in the same style as returns.\n legend_loc : matplotlib.loc, optional\n The location of the legend on the plot.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1217", "text": "Plots the rolling Sharpe ratio versus date.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n factor_returns : pd.Series, optional\n Daily noncumulative returns of the benchmark factor for\n which the benchmark rolling Sharpe is computed. 
Usually\n a benchmark such as market returns.\n - This is in the same style as returns.\n rolling_window : int, optional\n The window, in days, over which to compute the Sharpe ratio.\n legend_loc : matplotlib.loc, optional\n The location of the legend on the plot.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1218", "text": "Plots the sector exposures of the portfolio over time.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n sector_alloc : pd.DataFrame\n Portfolio allocation of positions. See pos.get_sector_alloc.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1219", "text": "Plots equity curves at different per-dollar slippage assumptions.\n\n Parameters\n ----------\n returns : pd.Series\n Timeseries of portfolio returns to be adjusted for various\n degrees of slippage.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in tears.create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in tears.create_full_tear_sheet.\n slippage_params: tuple\n Slippage parameters to apply to the return time series (in\n basis points).\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to seaborn plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1220", "text": "Plots trading volume per day vs. 
date.\n\n Also displays all-time daily average.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in tears.create_full_tear_sheet.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1221", "text": "Plots a histogram of transaction times, binning the times into\n buckets of a given duration.\n\n Parameters\n ----------\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in tears.create_full_tear_sheet.\n bin_minutes : float, optional\n Sizes of the bins in minutes, defaults to 5 minutes.\n tz : str, optional\n Time zone to plot against. Note that if the specified\n zone does not observe daylight saving time, the distribution\n may be partially offset.\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n **kwargs, optional\n Passed to plotting function.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1222", "text": "Prints information about the worst drawdown periods.\n\n Prints peak dates, valley dates, recovery dates, and net\n drawdowns.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n top : int, optional\n Number of top drawdown periods to print (default 5)."} {"_id": "q_1223", "text": "Plots timespans and directions of a sample of round trip trades.\n\n Parameters\n ----------\n round_trips : pd.DataFrame\n DataFrame with one row per round trip trade.\n - See full explanation in round_trips.extract_round_trips\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n\n Returns\n 
-------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1224", "text": "Prints the share of total PnL contributed by each\n traded name.\n\n Parameters\n ----------\n round_trips : pd.DataFrame\n DataFrame with one row per round trip trade.\n - See full explanation in round_trips.extract_round_trips\n ax : matplotlib.Axes, optional\n Axes upon which to plot.\n\n Returns\n -------\n ax : matplotlib.Axes\n The axes that were plotted on."} {"_id": "q_1225", "text": "Determines the Sortino ratio of a strategy.\n\n Parameters\n ----------\n returns : pd.Series or pd.DataFrame\n Daily returns of the strategy, noncumulative.\n - See full explanation in :func:`~pyfolio.timeseries.cum_returns`.\n required_return: float / series\n minimum acceptable return\n period : str, optional\n Defines the periodicity of the 'returns' data for purposes of\n annualizing. Can be 'monthly', 'weekly', or 'daily'.\n - Defaults to 'daily'.\n\n Returns\n -------\n depends on input type\n series ==> float\n DataFrame ==> np.array\n\n Annualized Sortino ratio."} {"_id": "q_1226", "text": "Determines the Sharpe ratio of a strategy.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in :func:`~pyfolio.timeseries.cum_returns`.\n risk_free : int, float\n Constant risk-free return throughout the period.\n period : str, optional\n Defines the periodicity of the 'returns' data for purposes of\n annualizing. 
Can be 'monthly', 'weekly', or 'daily'.\n - Defaults to 'daily'.\n\n Returns\n -------\n float\n Sharpe ratio.\n np.nan\n If the returns are of insufficient length or if adjusted returns are 0.\n\n Note\n -----\n See https://en.wikipedia.org/wiki/Sharpe_ratio for more details."} {"_id": "q_1227", "text": "Determines the rolling beta of a strategy.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n factor_returns : pd.Series or pd.DataFrame\n Daily noncumulative returns of the benchmark factor to which betas are\n computed. Usually a benchmark such as market returns.\n - If DataFrame is passed, computes rolling beta for each column.\n - This is in the same style as returns.\n rolling_window : int, optional\n The size of the rolling window, in days, over which to compute\n beta (default 6 months).\n\n Returns\n -------\n pd.Series\n Rolling beta.\n\n Note\n -----\n See https://en.wikipedia.org/wiki/Beta_(finance) for more details."} {"_id": "q_1228", "text": "Calculates the gross leverage of a strategy.\n\n Parameters\n ----------\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in tears.create_full_tear_sheet.\n\n Returns\n -------\n pd.Series\n Gross leverage."} {"_id": "q_1229", "text": "Calculates various performance metrics of a strategy, for use in\n plotting.show_perf_stats.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n factor_returns : pd.Series, optional\n Daily noncumulative returns of the benchmark factor to which betas are\n computed. 
Usually a benchmark such as market returns.\n - This is in the same style as returns.\n - If None, do not compute alpha, beta, and information ratio.\n positions : pd.DataFrame\n Daily net position values.\n - See full explanation in tears.create_full_tear_sheet.\n transactions : pd.DataFrame\n Prices and amounts of executed trades. One row per trade.\n - See full explanation in tears.create_full_tear_sheet.\n turnover_denom : str\n Either AGB or portfolio_value, default AGB.\n - See full explanation in txn.get_turnover.\n\n Returns\n -------\n pd.Series\n Performance metrics."} {"_id": "q_1230", "text": "Calculate various summary statistics of data.\n\n Parameters\n ----------\n x : numpy.ndarray or pandas.Series\n Array to compute summary statistics for.\n\n Returns\n -------\n pandas.Series\n Series containing mean, median, std, as well as 5, 25, 75 and\n 95 percentiles of passed in values."} {"_id": "q_1231", "text": "Finds top drawdowns, sorted by drawdown amount.\n\n Parameters\n ----------\n returns : pd.Series\n Daily returns of the strategy, noncumulative.\n - See full explanation in tears.create_full_tear_sheet.\n top : int, optional\n The amount of top drawdowns to find (default 10).\n\n Returns\n -------\n drawdowns : list\n List of drawdown peaks, valleys, and recoveries. 
See get_max_drawdown."} {"_id": "q_1232", "text": "Generate alternate paths using available values from in-sample returns.\n\n Parameters\n ----------\n is_returns : pandas.core.frame.DataFrame\n Non-cumulative in-sample returns.\n num_days : int\n Number of days to project the probability cone forward.\n starting_value : int or float\n Starting value of the out-of-sample period.\n num_samples : int\n Number of samples to draw from the in-sample daily returns.\n Each sample will be an array with length num_days.\n A higher number of samples will generate a more accurate\n bootstrap cone.\n random_seed : int\n Seed for the pseudorandom number generator used by the pandas\n sample method.\n\n Returns\n -------\n samples : numpy.ndarray"} {"_id": "q_1233", "text": "Generate the upper and lower bounds of an n standard deviation\n cone of forecasted cumulative returns.\n\n Parameters\n ----------\n samples : numpy.ndarray\n Alternative paths, or series of possible outcomes.\n cone_std : list of int/float\n Number of standard deviations to use in the boundaries of\n the cone. 
If multiple values are passed, cone bounds will\n be generated for each value.\n\n Returns\n -------\n samples : pandas.core.frame.DataFrame"} {"_id": "q_1234", "text": "Generate plot for stochastic volatility model.\n\n Parameters\n ----------\n data : pandas.Series\n Returns to model.\n trace : pymc3.sampling.BaseTrace object, optional\n trace as returned by model_stoch_vol\n If not passed, sample from model.\n ax : matplotlib.axes object, optional\n Plot into axes object\n\n Returns\n -------\n ax object\n\n See Also\n --------\n model_stoch_vol : run stochastic volatility model"} {"_id": "q_1235", "text": "Compute 5, 25, 75 and 95 percentiles of cumulative returns, used\n for the Bayesian cone.\n\n Parameters\n ----------\n preds : numpy.array\n Multiple (simulated) cumulative returns.\n starting_value : int (optional)\n Have cumulative returns start around this value.\n Default = 1.\n\n Returns\n -------\n dict of percentiles over time\n Dictionary mapping percentiles (5, 25, 75, 95) to a\n timeseries."} {"_id": "q_1236", "text": "Compute Bayesian consistency score.\n\n Parameters\n ----------\n returns_test : pd.Series\n Observed cumulative returns.\n preds : numpy.array\n Multiple (simulated) cumulative returns.\n\n Returns\n -------\n Consistency score\n Score from 100 (returns_test perfectly on the median line of the\n Bayesian cone spanned by preds) to 0 (returns_test completely\n outside of Bayesian cone.)"} {"_id": "q_1237", "text": "Generate cumulative returns plot with Bayesian cone.\n\n Parameters\n ----------\n returns_train : pd.Series\n Timeseries of simple returns\n returns_test : pd.Series\n Out-of-sample returns. Datetimes in returns_test will be added to\n returns_train as missing values and predictions will be generated\n for them.\n ppc : np.array\n Posterior predictive samples of shape samples x\n len(returns_test).\n plot_train_len : int (optional)\n How many data points to plot of returns_train. 
Useful to zoom in on\n the prediction if there is a long backtest period.\n ax : matplotlib.Axis (optional)\n Axes upon which to plot.\n\n Returns\n -------\n score : float\n Consistency score (see compute_consistency_score)\n trace : pymc3.sampling.BaseTrace\n A PyMC3 trace object that contains samples for each parameter\n of the posterior."} {"_id": "q_1238", "text": "Defer the message.\n\n This message will remain in the queue but must be received\n specifically by its sequence number in order to be processed.\n\n :raises: ~azure.servicebus.common.errors.MessageAlreadySettled if the message has been settled.\n :raises: ~azure.servicebus.common.errors.MessageLockExpired if message lock has already expired.\n :raises: ~azure.servicebus.common.errors.SessionLockExpired if session lock has already expired.\n :raises: ~azure.servicebus.common.errors.MessageSettleFailed if message settle operation fails."} {"_id": "q_1239", "text": "Guess Python Autorest options based on the spec path.\n\n Expected path:\n specification/compute/resource-manager/readme.md"} {"_id": "q_1240", "text": "Deletes the managed application definition.\n\n :param application_definition_id: The fully qualified ID of the\n managed application definition, including the managed application name\n and the managed application definition resource type. 
Use the format,\n /subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.Solutions/applicationDefinitions/{applicationDefinition-name}\n :type application_definition_id: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises:\n :class:`ErrorResponseException`"} {"_id": "q_1241", "text": "Creates a new managed application definition.\n\n :param application_definition_id: The fully qualified ID of the\n managed application definition, including the managed application name\n and the managed application definition resource type. 
Use the format,\n /subscriptions/{guid}/resourceGroups/{resource-group-name}/Microsoft.Solutions/applicationDefinitions/{applicationDefinition-name}\n :type application_definition_id: str\n :param parameters: Parameters supplied to the create or update a\n managed application definition.\n :type parameters:\n ~azure.mgmt.resource.managedapplications.models.ApplicationDefinition\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns ApplicationDefinition\n or ClientRawResponse if raw==True\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[~azure.mgmt.resource.managedapplications.models.ApplicationDefinition]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[~azure.mgmt.resource.managedapplications.models.ApplicationDefinition]]\n :raises:\n :class:`ErrorResponseException`"} {"_id": "q_1242", "text": "Return the target uri for the request."} {"_id": "q_1243", "text": "Sends request to cloud service server and return the response."} {"_id": "q_1244", "text": "Executes script actions on the specified HDInsight cluster.\n\n :param resource_group_name: The name of the resource group.\n :type resource_group_name: str\n :param cluster_name: The name of the cluster.\n :type cluster_name: str\n :param persist_on_success: Gets or sets if the scripts needs to be\n persisted.\n :type persist_on_success: bool\n :param script_actions: The list of run time script actions.\n :type script_actions:\n list[~azure.mgmt.hdinsight.models.RuntimeScriptAction]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized 
response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises:\n :class:`ErrorResponseException`"} {"_id": "q_1245", "text": "Check the availability of a Front Door resource name.\n\n :param name: The resource name to validate.\n :type name: str\n :param type: The type of the resource whose name is to be validated.\n Possible values include: 'Microsoft.Network/frontDoors',\n 'Microsoft.Network/frontDoors/frontendEndpoints'\n :type type: str or ~azure.mgmt.frontdoor.models.ResourceType\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: CheckNameAvailabilityOutput or ClientRawResponse if raw=true\n :rtype: ~azure.mgmt.frontdoor.models.CheckNameAvailabilityOutput or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`ErrorResponseException`"} {"_id": "q_1246", "text": "Extracts the host authority from the given URI."} {"_id": "q_1247", "text": "Return Credentials and default SubscriptionID of current loaded profile of the CLI.\n\n Credentials will be the \"az login\" command:\n https://docs.microsoft.com/cli/azure/authenticate-azure-cli\n\n Default subscription ID is either the only one you have, or you can define it:\n https://docs.microsoft.com/cli/azure/manage-azure-subscriptions-azure-cli\n\n .. 
versionadded:: 1.1.6\n\n :param str resource: The alternative resource for credentials if not ARM (GraphRBac, etc.)\n :param bool with_tenant: If True, return a three-tuple with last as tenant ID\n :return: tuple of Credentials and SubscriptionID (and tenant ID if with_tenant)\n :rtype: tuple"} {"_id": "q_1248", "text": "Opens the request.\n\n method:\n the request VERB 'GET', 'POST', etc.\n url:\n the url to connect"} {"_id": "q_1249", "text": "Sets up the timeout for the request."} {"_id": "q_1250", "text": "Sets the request header."} {"_id": "q_1251", "text": "Sends the request body."} {"_id": "q_1252", "text": "Gets status text of response."} {"_id": "q_1253", "text": "Gets response body as a SAFEARRAY and converts the SAFEARRAY to str."} {"_id": "q_1254", "text": "Sets client certificate for the request."} {"_id": "q_1255", "text": "Connects to host and sends the request."} {"_id": "q_1256", "text": "Sends the headers of request."} {"_id": "q_1257", "text": "Sends request body."} {"_id": "q_1258", "text": "Simplifies an id to be more friendly for humans."} {"_id": "q_1259", "text": "Converts a Python name into a serializable name."} {"_id": "q_1260", "text": "Verify whether two faces belong to the same person. Compares a face Id\n with a Person Id.\n\n :param face_id: FaceId of the face, comes from Face - Detect\n :type face_id: str\n :param person_id: Specify a certain person in a person group or a\n large person group. personId is created in PersonGroup Person - Create\n or LargePersonGroup Person - Create.\n :type person_id: str\n :param person_group_id: Using existing personGroupId and personId for\n fast loading a specified person. personGroupId is created in\n PersonGroup - Create. Parameter personGroupId and largePersonGroupId\n should not be provided at the same time.\n :type person_group_id: str\n :param large_person_group_id: Using existing largePersonGroupId and\n personId for fast loading a specified person. 
largePersonGroupId is\n created in LargePersonGroup - Create. Parameter personGroupId and\n largePersonGroupId should not be provided at the same time.\n :type large_person_group_id: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: VerifyResult or ClientRawResponse if raw=true\n :rtype: ~azure.cognitiveservices.vision.face.models.VerifyResult or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`APIErrorException`"} {"_id": "q_1261", "text": "Adds a job to the specified account.\n\n The Batch service supports two ways to control the work done as part of\n a job. In the first approach, the user specifies a Job Manager task.\n The Batch service launches this task when it is ready to start the job.\n The Job Manager task controls all other tasks that run under this job,\n by using the Task APIs. In the second approach, the user directly\n controls the execution of tasks under an active job, by using the Task\n APIs. Also note: when naming jobs, avoid including sensitive\n information such as user names or secret project names. 
This\n information may appear in telemetry logs accessible to Microsoft\n Support engineers.\n\n :param job: The job to be added.\n :type job: ~azure.batch.models.JobAddParameter\n :param job_add_options: Additional parameters for the operation\n :type job_add_options: ~azure.batch.models.JobAddOptions\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: None or ClientRawResponse if raw=true\n :rtype: None or ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`BatchErrorException`"} {"_id": "q_1262", "text": "descends through a hierarchy of nodes returning the list of children\n at the innermost level. Only returns children who share a common parent,\n not cousins."} {"_id": "q_1263", "text": "Recursively searches from the parent to the child,\n gathering all the applicable namespaces along the way"} {"_id": "q_1264", "text": "Converts xml response to service bus namespace\n\n The xml format for namespace:\n\nuuid:00000000-0000-0000-0000-000000000000;id=0000000\nmyunittests\n2012-08-22T16:48:10Z\n\n \n myunittests\n West US\n 0000000000000000000000000000000000000000000=\n Active\n 2012-08-22T16:48:10.217Z\n https://myunittests-sb.accesscontrol.windows.net/\n https://myunittests.servicebus.windows.net/\n Endpoint=sb://myunittests.servicebus.windows.net/;SharedSecretIssuer=owner;SharedSecretValue=0000000000000000000000000000000000000000000=\n 00000000000000000000000000000000\n true\n \n\n"} {"_id": "q_1265", "text": "Converts xml response to service bus region\n\n The xml format for region:\n\nuuid:157c311f-081f-4b4a-a0ba-a8f990ffd2a3;id=1756759\n\n2013-04-10T18:25:29Z\n\n \n East Asia\n East Asia\n \n\n"} {"_id": "q_1266", "text": "Replaces the runbook draft content.\n\n :param resource_group_name: Name of an Azure Resource group.\n :type resource_group_name: str\n :param 
automation_account_name: The name of the automation account.\n :type automation_account_name: str\n :param runbook_name: The runbook name.\n :type runbook_name: str\n :param runbook_content: The runbook draft content.\n :type runbook_content: Generator\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns object or\n ClientRawResponse if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[Generator]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[Generator]]\n :raises:\n :class:`ErrorResponseException`"} {"_id": "q_1267", "text": "Asynchronous operation to modify a knowledgebase.\n\n :param kb_id: Knowledgebase id.\n :type kb_id: str\n :param update_kb: Post body of the request.\n :type update_kb:\n ~azure.cognitiveservices.knowledge.qnamaker.models.UpdateKbOperationDTO\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: Operation or ClientRawResponse if raw=true\n :rtype: ~azure.cognitiveservices.knowledge.qnamaker.models.Operation\n or ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`ErrorResponseException`"} {"_id": "q_1268", "text": "Gets a collection that contains the object IDs of the groups of which\n the user is a member.\n\n :param object_id: The object ID of the user for which to get group\n membership.\n :type object_id: str\n :param security_enabled_only: If true, only membership in\n security-enabled groups should be checked. 
Otherwise, membership in\n all groups should be checked.\n :type security_enabled_only: bool\n :param additional_properties: Unmatched properties from the message\n are deserialized to this collection\n :type additional_properties: dict[str, object]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: An iterator-like instance of str\n :rtype: ~azure.graphrbac.models.StrPaged[str]\n :raises:\n :class:`GraphErrorException`"} {"_id": "q_1269", "text": "Will clone the given PR branch and build the package with the given name."} {"_id": "q_1270", "text": "Import data into Redis cache.\n\n :param resource_group_name: The name of the resource group.\n :type resource_group_name: str\n :param name: The name of the Redis cache.\n :type name: str\n :param files: files to import.\n :type files: list[str]\n :param format: File format.\n :type format: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError`"} {"_id": "q_1271", "text": "Replace alterations data.\n\n :param word_alterations: Collection of word alterations.\n :type word_alterations:\n list[~azure.cognitiveservices.knowledge.qnamaker.models.AlterationsDTO]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized 
response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: None or ClientRawResponse if raw=true\n :rtype: None or ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`ErrorResponseException`"} {"_id": "q_1272", "text": "Returns system properties for the specified storage account.\n\n service_name:\n Name of the storage service account."} {"_id": "q_1273", "text": "Returns the primary and secondary access keys for the specified\n storage account.\n\n service_name:\n Name of the storage service account."} {"_id": "q_1274", "text": "Deletes the specified storage account from Windows Azure.\n\n service_name:\n Name of the storage service account."} {"_id": "q_1275", "text": "Checks to see if the specified storage account name is available, or\n if it has already been taken.\n\n service_name:\n Name of the storage service account."} {"_id": "q_1276", "text": "Retrieves system properties for the specified hosted service. These\n properties include the service name and service type; the name of the\n affinity group to which the service belongs, or its location if it is\n not part of an affinity group; and optionally, information on the\n service's deployments.\n\n service_name:\n Name of the hosted service.\n embed_detail:\n When True, the management service returns properties for all\n deployments of the service, as well as for the service itself."} {"_id": "q_1277", "text": "Creates a new hosted service in Windows Azure.\n\n service_name:\n A name for the hosted service that is unique within Windows Azure.\n This name is the DNS prefix name and can be used to access the\n hosted service.\n label:\n A name for the hosted service. The name can be up to 100 characters\n in length. The name can be used to identify the hosted service for\n your tracking purposes.\n description:\n A description for the hosted service. 
The description can be up to\n 1024 characters in length.\n location:\n The location where the hosted service will be created. You can\n specify either a location or affinity_group, but not both.\n affinity_group:\n The name of an existing affinity group associated with this\n subscription. This name is a GUID and can be retrieved by examining\n the name element of the response body returned by\n list_affinity_groups. You can specify either a location or\n affinity_group, but not both.\n extended_properties:\n Dictionary containing name/value pairs of storage account\n properties. You can have a maximum of 50 extended property\n name/value pairs. The maximum length of the Name element is 64\n characters, only alphanumeric characters and underscores are valid\n in the Name, and the name must start with a letter. The value has\n a maximum length of 255 characters."} {"_id": "q_1278", "text": "Deletes the specified hosted service from Windows Azure.\n\n service_name:\n Name of the hosted service.\n complete:\n True if all OS/data disks and the source blobs for the disks should\n also be deleted from storage."} {"_id": "q_1279", "text": "Uploads a new service package and creates a new deployment on staging\n or production.\n\n service_name:\n Name of the hosted service.\n deployment_slot:\n The environment to which the hosted service is deployed. Valid\n values are: staging, production\n name:\n The name for the deployment. The deployment name must be unique\n among other deployments for the hosted service.\n package_url:\n A URL that refers to the location of the service package in the\n Blob service. The service package can be located either in a\n storage account beneath the same subscription or a Shared Access\n Signature (SAS) URI from any storage account.\n label:\n A name for the hosted service. The name can be up to 100 characters\n in length. It is recommended that the label be unique within the\n subscription. 
The name can be used to identify the hosted service\n for your tracking purposes.\n configuration:\n The base-64 encoded service configuration file for the deployment.\n start_deployment:\n Indicates whether to start the deployment immediately after it is\n created. If false, the service model is still deployed to the\n virtual machines but the code is not run immediately. Instead, the\n service is Suspended until you call Update Deployment Status and\n set the status to Running, at which time the service will be\n started. A deployed service still incurs charges, even if it is\n suspended.\n treat_warnings_as_error:\n Indicates whether to treat package validation warnings as errors.\n If set to true, the Created Deployment operation fails if there\n are validation warnings on the service package.\n extended_properties:\n Dictionary containing name/value pairs of storage account\n properties. You can have a maximum of 50 extended property\n name/value pairs. The maximum length of the Name element is 64\n characters, only alphanumeric characters and underscores are valid\n in the Name, and the name must start with a letter. The value has\n a maximum length of 255 characters."} {"_id": "q_1280", "text": "Deletes the specified deployment.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name of the deployment."} {"_id": "q_1281", "text": "Initiates a virtual IP swap between the staging and production\n deployment environments for a service. If the service is currently\n running in the staging environment, it will be swapped to the\n production environment. 
If it is running in the production\n environment, it will be swapped to staging.\n\n service_name:\n Name of the hosted service.\n production:\n The name of the production deployment.\n source_deployment:\n The name of the source deployment."} {"_id": "q_1282", "text": "Initiates a change to the deployment configuration.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name of the deployment.\n configuration:\n The base-64 encoded service configuration file for the deployment.\n treat_warnings_as_error:\n Indicates whether to treat package validation warnings as errors.\n If set to true, the Created Deployment operation fails if there\n are validation warnings on the service package.\n mode:\n If set to Manual, WalkUpgradeDomain must be called to apply the\n update. If set to Auto, the Windows Azure platform will\n automatically apply the update to each upgrade domain for the\n service. Possible values are: Auto, Manual\n extended_properties:\n Dictionary containing name/value pairs of storage account\n properties. You can have a maximum of 50 extended property\n name/value pairs. The maximum length of the Name element is 64\n characters, only alphanumeric characters and underscores are valid\n in the Name, and the name must start with a letter. The value has\n a maximum length of 255 characters."} {"_id": "q_1283", "text": "Initiates a change in deployment status.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name of the deployment.\n status:\n The change to initiate to the deployment status. 
Possible values\n include:\n Running, Suspended"} {"_id": "q_1284", "text": "Specifies the next upgrade domain to be walked during manual in-place\n upgrade or configuration change.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name of the deployment.\n upgrade_domain:\n An integer value that identifies the upgrade domain to walk.\n Upgrade domains are identified with a zero-based index: the first\n upgrade domain has an ID of 0, the second has an ID of 1, and so on."} {"_id": "q_1285", "text": "Reinstalls the operating system on instances of web roles or worker\n roles and initializes the storage resources that are used by them. If\n you do not want to initialize storage resources, you can use\n reimage_role_instance.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name of the deployment.\n role_instance_names:\n List of role instance names."} {"_id": "q_1286", "text": "Deletes a service certificate from the certificate store of a hosted\n service.\n\n service_name:\n Name of the hosted service.\n thumbalgorithm:\n The algorithm for the certificate's thumbprint.\n thumbprint:\n The hexadecimal representation of the thumbprint."} {"_id": "q_1287", "text": "The Add Management Certificate operation adds a certificate to the\n list of management certificates. Management certificates, which are\n also known as subscription certificates, authenticate clients\n attempting to connect to resources associated with your Windows Azure\n subscription.\n\n public_key:\n A base64 representation of the management certificate public key.\n thumbprint:\n The thumb print that uniquely identifies the management\n certificate.\n data:\n The certificate's raw data in base-64 encoded .cer format."} {"_id": "q_1288", "text": "The Delete Management Certificate operation deletes a certificate from\n the list of management certificates. 
Management certificates, which\n are also known as subscription certificates, authenticate clients\n attempting to connect to resources associated with your Windows Azure\n subscription.\n\n thumbprint:\n The thumb print that uniquely identifies the management\n certificate."} {"_id": "q_1289", "text": "Returns the system properties associated with the specified affinity\n group.\n\n affinity_group_name:\n The name of the affinity group."} {"_id": "q_1290", "text": "Deletes an affinity group in the specified subscription.\n\n affinity_group_name:\n The name of the affinity group."} {"_id": "q_1291", "text": "List subscription operations.\n\n start_time: Required. An ISO8601 date.\n end_time: Required. An ISO8601 date.\n object_id_filter: Optional. Returns subscription operations only for the specified object type and object ID\n operation_result_filter: Optional. Returns subscription operations only for the specified result status, either Succeeded, Failed, or InProgress.\n continuation_token: Optional.\n More information at:\n https://msdn.microsoft.com/en-us/library/azure/gg715318.aspx"} {"_id": "q_1292", "text": "Deletes a reserved IP address from the specified subscription.\n\n name:\n Required. Name of the reserved IP address."} {"_id": "q_1293", "text": "Disassociate an existing reservedIP from the given deployment.\n\n name:\n Required. Name of the reserved IP address.\n\n service_name:\n Required. Name of the hosted service.\n\n deployment_name:\n Required. Name of the deployment.\n\n virtual_ip_name:\n Optional. Name of the VirtualIP in case of multi Vip tenant.\n If this value is not specified default virtualIP is used\n for this operation."} {"_id": "q_1294", "text": "Retrieves information about the specified reserved IP address.\n\n name:\n Required. 
Name of the reserved IP address."} {"_id": "q_1295", "text": "Retrieves the specified virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role."} {"_id": "q_1296", "text": "Provisions a virtual machine based on the supplied configuration.\n\n service_name:\n Name of the hosted service.\n deployment_name:\n The name for the deployment. The deployment name must be unique\n among other deployments for the hosted service.\n deployment_slot:\n The environment to which the hosted service is deployed. Valid\n values are: staging, production\n label:\n Specifies an identifier for the deployment. The label can be up to\n 100 characters long. The label can be used for tracking purposes.\n role_name:\n The name of the role.\n system_config:\n Contains the metadata required to provision a virtual machine from\n a Windows or Linux OS image. Use an instance of\n WindowsConfigurationSet or LinuxConfigurationSet.\n os_virtual_hard_disk:\n Contains the parameters Windows Azure uses to create the operating\n system disk for the virtual machine. If you are creating a Virtual\n Machine by using a VM Image, this parameter is not used.\n network_config:\n Encapsulates the metadata required to create the virtual network\n configuration for a virtual machine. If you do not include a\n network configuration set you will not be able to access the VM\n through VIPs over the internet. If your virtual machine belongs to\n a virtual network you can not specify which subnet address space\n it resides under. Use an instance of ConfigurationSet.\n availability_set_name:\n Specifies the name of an availability set to which to add the\n virtual machine. This value controls the virtual machine\n allocation in the Windows Azure environment. 
Virtual machines\n specified in the same availability set are allocated to different\n nodes to maximize availability.\n data_virtual_hard_disks:\n Contains the parameters Windows Azure uses to create a data disk\n for a virtual machine.\n role_size:\n The size of the virtual machine to allocate. The default value is\n Small. Possible values are: ExtraSmall,Small,Medium,Large,\n ExtraLarge,A5,A6,A7,A8,A9,Basic_A0,Basic_A1,Basic_A2,Basic_A3,\n Basic_A4,Standard_D1,Standard_D2,Standard_D3,Standard_D4,\n Standard_D11,Standard_D12,Standard_D13,Standard_D14,Standard_G1,\n Standard_G2,Standard_G3,Standard_G4,Standard_G5. The specified\n value must be compatible with the disk selected in the\n OSVirtualHardDisk values.\n role_type:\n The type of the role for the virtual machine. The only supported\n value is PersistentVMRole.\n virtual_network_name:\n Specifies the name of an existing virtual network to which the\n deployment will belong.\n resource_extension_references:\n Optional. Contains a collection of resource extensions that are to\n be installed on the Virtual Machine. This element is used if\n provision_guest_agent is set to True. Use an iterable of instances\n of ResourceExtensionReference.\n provision_guest_agent:\n Optional. Indicates whether the VM Agent is installed on the\n Virtual Machine. To run a resource extension in a Virtual Machine,\n this service must be installed.\n vm_image_name:\n Optional. Specifies the name of the VM Image that is to be used to\n create the Virtual Machine. If this is specified, the\n system_config and network_config parameters are not used.\n media_location:\n Optional. Required if the Virtual Machine is being created from a\n published VM Image. Specifies the location of the VHD file that is\n created when VMImageName specifies a published VM Image.\n dns_servers:\n Optional. List of DNS servers (use DnsServer class) to associate\n with the Virtual Machine.\n reserved_ip_name:\n Optional. 
Specifies the name of a reserved IP address that is to be\n assigned to the deployment. You must run create_reserved_ip_address\n before you can assign the address to the deployment using this\n element."} {"_id": "q_1297", "text": "Adds a virtual machine to an existing deployment.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n system_config:\n Contains the metadata required to provision a virtual machine from\n a Windows or Linux OS image. Use an instance of\n WindowsConfigurationSet or LinuxConfigurationSet.\n os_virtual_hard_disk:\n Contains the parameters Windows Azure uses to create the operating\n system disk for the virtual machine. If you are creating a Virtual\n Machine by using a VM Image, this parameter is not used.\n network_config:\n Encapsulates the metadata required to create the virtual network\n configuration for a virtual machine. If you do not include a\n network configuration set you will not be able to access the VM\n through VIPs over the internet. If your virtual machine belongs to\n a virtual network you can not specify which subnet address space\n it resides under.\n availability_set_name:\n Specifies the name of an availability set to which to add the\n virtual machine. This value controls the virtual machine allocation\n in the Windows Azure environment. Virtual machines specified in the\n same availability set are allocated to different nodes to maximize\n availability.\n data_virtual_hard_disks:\n Contains the parameters Windows Azure uses to create a data disk\n for a virtual machine.\n role_size:\n The size of the virtual machine to allocate. The default value is\n Small. Possible values are: ExtraSmall, Small, Medium, Large,\n ExtraLarge. The specified value must be compatible with the disk\n selected in the OSVirtualHardDisk values.\n role_type:\n The type of the role for the virtual machine. 
The only supported\n value is PersistentVMRole.\n resource_extension_references:\n Optional. Contains a collection of resource extensions that are to\n be installed on the Virtual Machine. This element is used if\n provision_guest_agent is set to True.\n provision_guest_agent:\n Optional. Indicates whether the VM Agent is installed on the\n Virtual Machine. To run a resource extension in a Virtual Machine,\n this service must be installed.\n vm_image_name:\n Optional. Specifies the name of the VM Image that is to be used to\n create the Virtual Machine. If this is specified, the\n system_config and network_config parameters are not used.\n media_location:\n Optional. Required if the Virtual Machine is being created from a\n published VM Image. Specifies the location of the VHD file that is\n created when VMImageName specifies a published VM Image."} {"_id": "q_1298", "text": "Deletes the specified virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n complete:\n True if all OS/data disks and the source blobs for the disks should\n also be deleted from storage."} {"_id": "q_1299", "text": "The Capture Role operation captures a virtual machine image to your\n image gallery. From the captured image, you can create additional\n customized virtual machines.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n post_capture_action:\n Specifies the action after capture operation completes. 
Possible\n values are: Delete, Reprovision.\n target_image_name:\n Specifies the image name of the captured virtual machine.\n target_image_label:\n Specifies the friendly name of the captured virtual machine.\n provisioning_configuration:\n Use an instance of WindowsConfigurationSet or LinuxConfigurationSet."} {"_id": "q_1300", "text": "Starts the specified virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role."} {"_id": "q_1301", "text": "Starts the specified virtual machines.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_names:\n The names of the roles, as an enumerable of strings."} {"_id": "q_1302", "text": "Shuts down the specified virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n post_shutdown_action:\n Specifies how the Virtual Machine should be shut down. Values are:\n Stopped\n Shuts down the Virtual Machine but retains the compute\n resources. You will continue to be billed for the resources\n that the stopped machine uses.\n StoppedDeallocated\n Shuts down the Virtual Machine and releases the compute\n resources. You are not billed for the compute resources that\n this Virtual Machine uses. If a static Virtual Network IP\n address is assigned to the Virtual Machine, it is reserved."} {"_id": "q_1303", "text": "Shuts down the specified virtual machines.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_names:\n The names of the roles, as an enumerable of strings.\n post_shutdown_action:\n Specifies how the Virtual Machine should be shut down. Values are:\n Stopped\n Shuts down the Virtual Machine but retains the compute\n resources. 
You will continue to be billed for the resources\n that the stopped machine uses.\n StoppedDeallocated\n Shuts down the Virtual Machine and releases the compute\n resources. You are not billed for the compute resources that\n this Virtual Machine uses. If a static Virtual Network IP\n address is assigned to the Virtual Machine, it is reserved."} {"_id": "q_1304", "text": "Adds a DNS server definition to an existing deployment.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n dns_server_name:\n Specifies the name of the DNS server.\n address:\n Specifies the IP address of the DNS server."} {"_id": "q_1305", "text": "Updates the IP address of a DNS server.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n dns_server_name:\n Specifies the name of the DNS server.\n address:\n Specifies the IP address of the DNS server."} {"_id": "q_1306", "text": "Deletes a DNS server from a deployment.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n dns_server_name:\n Name of the DNS server that you want to delete."} {"_id": "q_1307", "text": "Lists the versions of a resource extension that are available to add\n to a Virtual Machine.\n\n publisher_name:\n Name of the resource extension publisher.\n extension_name:\n Name of the resource extension."} {"_id": "q_1308", "text": "Unreplicate a VM image from all regions. This operation\n is only for publishers. You have to be registered as image publisher\n with Microsoft Azure to be able to call this.\n\n vm_image_name:\n Specifies the name of the VM Image that is to be used for\n unreplication. The VM Image Name should be the user VM Image,\n not the published name of the VM Image."} {"_id": "q_1309", "text": "Share an already replicated OS image. This operation is only for\n publishers. 
You have to be registered as image publisher with Windows\n Azure to be able to call this.\n\n vm_image_name:\n The name of the virtual machine image to share\n permission:\n The sharing permission: public, msdn, or private"} {"_id": "q_1310", "text": "Creates a VM Image in the image repository that is associated with the\n specified subscription using a specified set of virtual hard disks.\n\n vm_image:\n An instance of VMImage class.\n vm_image.name: Required. Specifies the name of the image.\n vm_image.label: Required. Specifies an identifier for the image.\n vm_image.description: Optional. Specifies the description of the image.\n vm_image.os_disk_configuration:\n Required. Specifies configuration information for the operating \n system disk that is associated with the image.\n vm_image.os_disk_configuration.host_caching:\n Optional. Specifies the caching behavior of the operating system disk.\n Possible values are: None, ReadOnly, ReadWrite \n vm_image.os_disk_configuration.os_state:\n Required. Specifies the state of the operating system in the image.\n Possible values are: Generalized, Specialized\n A Virtual Machine that is fully configured and running contains a\n Specialized operating system. A Virtual Machine on which the\n Sysprep command has been run with the generalize option contains a\n Generalized operating system.\n vm_image.os_disk_configuration.os:\n Required. Specifies the operating system type of the image.\n vm_image.os_disk_configuration.media_link:\n Required. Specifies the location of the blob in Windows Azure\n storage. The blob location belongs to a storage account in the\n subscription specified by the value in the\n operation call.\n vm_image.data_disk_configurations:\n Optional. Specifies configuration information for the data disks\n that are associated with the image. A VM Image might not have data\n disks associated with it.\n vm_image.data_disk_configurations[].host_caching:\n Optional. 
Specifies the caching behavior of the data disk.\n Possible values are: None, ReadOnly, ReadWrite \n vm_image.data_disk_configurations[].lun:\n Optional if the lun for the disk is 0. Specifies the Logical Unit\n Number (LUN) for the data disk.\n vm_image.data_disk_configurations[].media_link:\n Required. Specifies the location of the blob in Windows Azure\n storage. The blob location belongs to a storage account in the\n subscription specified by the value in the\n operation call.\n vm_image.data_disk_configurations[].logical_size_in_gb:\n Required. Specifies the size, in GB, of the data disk.\n vm_image.language: Optional. Specifies the language of the image.\n vm_image.image_family:\n Optional. Specifies a value that can be used to group VM Images.\n vm_image.recommended_vm_size:\n Optional. Specifies the size to use for the Virtual Machine that\n is created from the VM Image.\n vm_image.eula:\n Optional. Specifies the End User License Agreement that is\n associated with the image. The value for this element is a string,\n but it is recommended that the value be a URL that points to a EULA.\n vm_image.icon_uri:\n Optional. Specifies the URI to the icon that is displayed for the\n image in the Management Portal.\n vm_image.small_icon_uri:\n Optional. Specifies the URI to the small icon that is displayed for\n the image in the Management Portal.\n vm_image.privacy_uri:\n Optional. Specifies the URI that points to a document that contains\n the privacy policy related to the image.\n vm_image.published_date:\n Optional. Specifies the date when the image was added to the image\n repository.\n vm_image.show_in_gui:\n Optional. 
Indicates whether the VM Images should be listed in the\n portal."} {"_id": "q_1311", "text": "Deletes the specified VM Image from the image repository that is\n associated with the specified subscription.\n\n vm_image_name:\n The name of the image.\n delete_vhd:\n Deletes the underlying vhd blob in Azure storage."} {"_id": "q_1312", "text": "Adds an OS image that is currently stored in a storage account in your\n subscription to the image repository.\n\n label:\n Specifies the friendly name of the image.\n media_link:\n Specifies the location of the blob in Windows Azure blob store\n where the media for the image is located. The blob location must\n belong to a storage account in the subscription specified by the\n value in the operation call. Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n name:\n Specifies a name for the OS image that Windows Azure uses to\n identify the image when creating one or more virtual machines.\n os:\n The operating system type of the OS image. Possible values are:\n Linux, Windows"} {"_id": "q_1313", "text": "Updates an OS image that is in your image repository.\n\n image_name:\n The name of the image to update.\n label:\n Specifies the friendly name of the image to be updated. You cannot\n use this operation to update images provided by the Windows Azure\n platform.\n media_link:\n Specifies the location of the blob in Windows Azure blob store\n where the media for the image is located. The blob location must\n belong to a storage account in the subscription specified by the\n value in the operation call. Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n name:\n Specifies a name for the OS image that Windows Azure uses to\n identify the image when creating one or more VM Roles.\n os:\n The operating system type of the OS image. 
Possible values are:\n Linux, Windows"} {"_id": "q_1314", "text": "Updates metadata elements from a given OS image reference.\n\n image_name:\n The name of the image to update.\n os_image:\n An instance of OSImage class.\n os_image.label: Optional. Specifies an identifier for the image.\n os_image.description: Optional. Specifies the description of the image.\n os_image.language: Optional. Specifies the language of the image.\n os_image.image_family:\n Optional. Specifies a value that can be used to group VM Images.\n os_image.recommended_vm_size:\n Optional. Specifies the size to use for the Virtual Machine that\n is created from the VM Image.\n os_image.eula:\n Optional. Specifies the End User License Agreement that is\n associated with the image. The value for this element is a string,\n but it is recommended that the value be a URL that points to a EULA.\n os_image.icon_uri:\n Optional. Specifies the URI to the icon that is displayed for the\n image in the Management Portal.\n os_image.small_icon_uri:\n Optional. Specifies the URI to the small icon that is displayed for\n the image in the Management Portal.\n os_image.privacy_uri:\n Optional. Specifies the URI that points to a document that contains\n the privacy policy related to the image.\n os_image.published_date:\n Optional. Specifies the date when the image was added to the image\n repository.\n os_image.media_link:\n Required. Specifies the location of the blob in Windows Azure\n blob store where the media for the image is located. The blob\n location must belong to a storage account in the subscription\n specified by the value in the operation call.\n Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n os_image.name:\n Specifies a name for the OS image that Windows Azure uses to\n identify the image when creating one or more VM Roles.\n os_image.os:\n The operating system type of the OS image. 
Possible values are:\n Linux, Windows"} {"_id": "q_1315", "text": "Deletes the specified OS image from your image repository.\n\n image_name:\n The name of the image.\n delete_vhd:\n Deletes the underlying vhd blob in Azure storage."} {"_id": "q_1316", "text": "Retrieves the specified data disk from a virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n lun:\n The Logical Unit Number (LUN) for the disk."} {"_id": "q_1317", "text": "Adds a data disk to a virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n lun:\n Specifies the Logical Unit Number (LUN) for the disk. The LUN\n specifies the slot in which the data drive appears when mounted\n for usage by the virtual machine. Valid LUN values are 0 through 15.\n host_caching:\n Specifies the platform caching behavior of data disk blob for\n read/write efficiency. The default value is ReadOnly. Possible\n values are: None, ReadOnly, ReadWrite\n media_link:\n Specifies the location of the blob in Windows Azure blob store\n where the media for the disk is located. The blob location must\n belong to the storage account in the subscription specified by the\n value in the operation call. Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n disk_label:\n Specifies the description of the data disk. When you attach a disk,\n either by directly referencing a media using the MediaLink element\n or specifying the target disk size, you can use the DiskLabel\n element to customize the name property of the target data disk.\n disk_name:\n Specifies the name of the disk. Windows Azure uses the specified\n disk to create the data disk for the machine and populates this\n field with the disk name.\n logical_disk_size_in_gb:\n Specifies the size, in GB, of an empty disk to be attached to the\n role. 
The disk can be created as part of disk attach or create VM\n role call by specifying the value for this property. Windows Azure\n creates the empty disk based on size preference and attaches the\n newly created disk to the Role.\n source_media_link:\n Specifies the location of a blob in account storage which is\n mounted as a data disk when the virtual machine is created."} {"_id": "q_1318", "text": "Updates the specified data disk attached to the specified virtual\n machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n lun:\n Specifies the Logical Unit Number (LUN) for the disk. The LUN\n specifies the slot in which the data drive appears when mounted\n for usage by the virtual machine. Valid LUN values are 0 through\n 15.\n host_caching:\n Specifies the platform caching behavior of data disk blob for\n read/write efficiency. The default value is ReadOnly. Possible\n values are: None, ReadOnly, ReadWrite\n media_link:\n Specifies the location of the blob in Windows Azure blob store\n where the media for the disk is located. The blob location must\n belong to the storage account in the subscription specified by\n the value in the operation call. Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n updated_lun:\n Specifies the Logical Unit Number (LUN) for the disk. The LUN\n specifies the slot in which the data drive appears when mounted\n for usage by the virtual machine. Valid LUN values are 0 through 15.\n disk_label:\n Specifies the description of the data disk. When you attach a disk,\n either by directly referencing a media using the MediaLink element\n or specifying the target disk size, you can use the DiskLabel\n element to customize the name property of the target data disk.\n disk_name:\n Specifies the name of the disk. 
Windows Azure uses the specified\n disk to create the data disk for the machine and populates this\n field with the disk name.\n logical_disk_size_in_gb:\n Specifies the size, in GB, of an empty disk to be attached to the\n role. The disk can be created as part of disk attach or create VM\n role call by specifying the value for this property. Windows Azure\n creates the empty disk based on size preference and attaches the\n newly created disk to the Role."} {"_id": "q_1319", "text": "Removes the specified data disk from a virtual machine.\n\n service_name:\n The name of the service.\n deployment_name:\n The name of the deployment.\n role_name:\n The name of the role.\n lun:\n The Logical Unit Number (LUN) for the disk.\n delete_vhd:\n Deletes the underlying vhd blob in Azure storage."} {"_id": "q_1320", "text": "Adds a disk to the user image repository. The disk can be an OS disk\n or a data disk.\n\n has_operating_system:\n Deprecated.\n label:\n Specifies the description of the disk.\n media_link:\n Specifies the location of the blob in Windows Azure blob store\n where the media for the disk is located. The blob location must\n belong to the storage account in the current subscription specified\n by the value in the operation call. Example:\n http://example.blob.core.windows.net/disks/mydisk.vhd\n name:\n Specifies a name for the disk. Windows Azure uses the name to\n identify the disk when creating virtual machines from the disk.\n os:\n The OS type of the disk. 
Possible values are: Linux, Windows"} {"_id": "q_1321", "text": "Updates an existing disk in your image repository.\n\n disk_name:\n The name of the disk to update.\n has_operating_system:\n Deprecated.\n label:\n Specifies the description of the disk.\n media_link:\n Deprecated.\n name:\n Deprecated.\n os:\n Deprecated."} {"_id": "q_1322", "text": "Deletes the specified data or operating system disk from your image\n repository.\n\n disk_name:\n The name of the disk to delete.\n delete_vhd:\n Deletes the underlying vhd blob in Azure storage."} {"_id": "q_1323", "text": "This is a temporary patch pending a fix in uAMQP."} {"_id": "q_1324", "text": "Receive a batch of messages at once.\n\n This approach is optimal if you wish to process multiple messages simultaneously. Note that the\n number of messages retrieved in a single batch will be dependent on\n whether `prefetch` was set for the receiver. This call will prioritize returning\n quickly over meeting a specified batch size, and so will return as soon as at least\n one message is received and there is a gap in incoming messages regardless\n of the specified batch size.\n\n :param max_batch_size: Maximum number of messages in the batch. Actual number\n returned will depend on prefetch size and incoming stream rate.\n :type max_batch_size: int\n :param timeout: The time to wait in seconds for the first message to arrive.\n If no messages arrive, and no timeout is specified, this call will not return\n until the connection is closed. If specified, and no messages arrive within the\n timeout period, an empty list will be returned.\n :rtype: list[~azure.servicebus.common.message.Message]\n\n Example:\n .. 
literalinclude:: ../examples/test_examples.py\n :start-after: [START fetch_next_messages]\n :end-before: [END fetch_next_messages]\n :language: python\n :dedent: 4\n :caption: Get the messages in batch from the receiver"} {"_id": "q_1325", "text": "Renew the session lock.\n\n This operation must be performed periodically in order to retain a lock on the\n session to continue message processing.\n Once the lock is lost the connection will be closed. This operation can\n also be performed as a threaded background task by registering the session\n with an `azure.servicebus.AutoLockRenew` instance.\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START renew_lock]\n :end-before: [END renew_lock]\n :language: python\n :dedent: 4\n :caption: Renew the session lock before it expires"} {"_id": "q_1326", "text": "Converts the SinglePlacementGroup property to false for an existing virtual\n machine scale set.\n\n :param resource_group_name: The name of the resource group.\n :type resource_group_name: str\n :param vm_scale_set_name: The name of the virtual machine scale set to\n create or update.\n :type vm_scale_set_name: str\n :param active_placement_group_id: Id of the placement group in which\n you want future virtual machine instances to be placed. To query\n placement group Id, please use Virtual Machine Scale Set VMs - Get\n API. 
If not provided, the platform will choose one with maximum number\n of virtual machine instances.\n :type active_placement_group_id: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: None or ClientRawResponse if raw=true\n :rtype: None or ~msrest.pipeline.ClientRawResponse\n :raises: :class:`CloudError`"} {"_id": "q_1327", "text": "Imports an externally created key, stores it, and returns key\n parameters and attributes to the client.\n\n The import key operation may be used to import any key type into an\n Azure Key Vault. If the named key already exists, Azure Key Vault\n creates a new version of the key. This operation requires the\n keys/import permission.\n\n :param vault_base_url: The vault name, for example\n https://myvault.vault.azure.net.\n :type vault_base_url: str\n :param key_name: Name for the imported key.\n :type key_name: str\n :param key: The Json web key\n :type key: ~azure.keyvault.v2016_10_01.models.JsonWebKey\n :param hsm: Whether to import as a hardware key (HSM) or software key.\n :type hsm: bool\n :param key_attributes: The key management attributes.\n :type key_attributes: ~azure.keyvault.v2016_10_01.models.KeyAttributes\n :param tags: Application specific metadata in the form of key-value\n pairs.\n :type tags: dict[str, str]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: KeyBundle or ClientRawResponse if raw=true\n :rtype: ~azure.keyvault.v2016_10_01.models.KeyBundle or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`KeyVaultErrorException`"} {"_id": "q_1328", "text": "The update key operation changes specified attributes of a stored key\n and can be 
applied to any key type and key version stored in Azure Key\n Vault.\n\n In order to perform this operation, the key must already exist in the\n Key Vault. Note: The cryptographic material of a key itself cannot be\n changed. This operation requires the keys/update permission.\n\n :param vault_base_url: The vault name, for example\n https://myvault.vault.azure.net.\n :type vault_base_url: str\n :param key_name: The name of key to update.\n :type key_name: str\n :param key_version: The version of the key to update.\n :type key_version: str\n :param key_ops: Json web key operations. For more information on\n possible key operations, see JsonWebKeyOperation.\n :type key_ops: list[str or\n ~azure.keyvault.v2016_10_01.models.JsonWebKeyOperation]\n :param key_attributes:\n :type key_attributes: ~azure.keyvault.v2016_10_01.models.KeyAttributes\n :param tags: Application specific metadata in the form of key-value\n pairs.\n :type tags: dict[str, str]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: KeyBundle or ClientRawResponse if raw=true\n :rtype: ~azure.keyvault.v2016_10_01.models.KeyBundle or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`KeyVaultErrorException`"} {"_id": "q_1329", "text": "Sets a secret in a specified key vault.\n\n The SET operation adds a secret to the Azure Key Vault. If the named\n secret already exists, Azure Key Vault creates a new version of that\n secret. 
This operation requires the secrets/set permission.\n\n :param vault_base_url: The vault name, for example\n https://myvault.vault.azure.net.\n :type vault_base_url: str\n :param secret_name: The name of the secret.\n :type secret_name: str\n :param value: The value of the secret.\n :type value: str\n :param tags: Application specific metadata in the form of key-value\n pairs.\n :type tags: dict[str, str]\n :param content_type: Type of the secret value such as a password.\n :type content_type: str\n :param secret_attributes: The secret management attributes.\n :type secret_attributes:\n ~azure.keyvault.v2016_10_01.models.SecretAttributes\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: SecretBundle or ClientRawResponse if raw=true\n :rtype: ~azure.keyvault.v2016_10_01.models.SecretBundle or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`KeyVaultErrorException`"} {"_id": "q_1330", "text": "Create a Service Bus client from a connection string.\n\n :param conn_str: The connection string.\n :type conn_str: str\n\n Example:\n .. 
literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START create_async_servicebus_client_connstr]\n :end-before: [END create_async_servicebus_client_connstr]\n :language: python\n :dedent: 4\n :caption: Create a ServiceBusClient via a connection string."} {"_id": "q_1331", "text": "Get an async client for a subscription entity.\n\n :param topic_name: The name of the topic.\n :type topic_name: str\n :param subscription_name: The name of the subscription.\n :type subscription_name: str\n :rtype: ~azure.servicebus.aio.async_client.SubscriptionClient\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the subscription is not found.\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START get_async_subscription_client]\n :end-before: [END get_async_subscription_client]\n :language: python\n :dedent: 4\n :caption: Get a SubscriptionClient for the specified subscription."} {"_id": "q_1332", "text": "Get an async client for all subscription entities in the topic.\n\n :param topic_name: The topic to list subscriptions for.\n :type topic_name: str\n :rtype: list[~azure.servicebus.aio.async_client.SubscriptionClient]\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the topic is not found."} {"_id": "q_1333", "text": "Send one or more messages to the current entity.\n\n This operation will open a single-use connection, send the supplied messages, and close the\n connection. 
If the entity requires sessions, a session ID must be either\n provided here, or set on each outgoing message.\n\n :param messages: One or more messages to be sent.\n :type messages: ~azure.servicebus.aio.async_message.Message or\n list[~azure.servicebus.aio.async_message.Message]\n :param message_timeout: The period in seconds during which the Message must be\n sent. If the send is not completed in this time it will return a failure result.\n :type message_timeout: int\n :param session: An optional session ID. If supplied this session ID will be\n applied to every outgoing message sent with this Sender.\n If an individual message already has a session ID, that will be\n used instead. If no session ID is supplied here, nor set on an outgoing\n message, a ValueError will be raised if the entity is sessionful.\n :type session: str or ~uuid.Guid\n :raises: ~azure.servicebus.common.errors.MessageSendFailed\n :returns: A list of the send results of all the messages. Each\n send result is a tuple with two values. The first is a boolean, indicating `True`\n if the message sent, or `False` if it failed. The second is an error if the message\n failed, otherwise it will be `None`.\n :rtype: list[tuple[bool, ~azure.servicebus.common.errors.MessageSendFailed]]\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START queue_client_send]\n :end-before: [END queue_client_send]\n :language: python\n :dedent: 4\n :caption: Send a single message.\n\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START queue_client_send_multiple]\n :end-before: [END queue_client_send_multiple]\n :language: python\n :dedent: 4\n :caption: Send multiple messages."} {"_id": "q_1334", "text": "Get a Receiver for the Service Bus endpoint.\n\n A Receiver represents a single open connection with which multiple receive operations can be made.\n\n :param session: A specific session from which to receive. 
This must be specified for a\n sessionful entity, otherwise it must be None. In order to receive the next available\n session, set this to NEXT_AVAILABLE.\n :type session: str or ~azure.servicebus.common.constants.NEXT_AVAILABLE\n :param prefetch: The maximum number of messages to cache with each request to the service.\n The default value is 0, meaning messages will be received from the service and processed\n one at a time. Increasing this value will improve message throughput performance but increase\n the chance that messages will expire while they are cached if they're not processed fast enough.\n :type prefetch: int\n :param mode: The mode with which messages will be retrieved from the entity. The two options\n are PeekLock and ReceiveAndDelete. Messages received with PeekLock must be settled within a given\n lock period before they will be removed from the queue. Messages received with ReceiveAndDelete\n will be immediately removed from the queue, and cannot be subsequently rejected or re-received if\n the client fails to process the message. The default mode is PeekLock.\n :type mode: ~azure.servicebus.common.constants.ReceiveSettleMode\n :param idle_timeout: The timeout in seconds between received messages after which the receiver will\n automatically shutdown. The default value is 0, meaning no timeout.\n :type idle_timeout: int\n :returns: A Receiver instance with an unopened connection.\n :rtype: ~azure.servicebus.aio.async_receive_handler.Receiver\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START open_close_receiver_context]\n :end-before: [END open_close_receiver_context]\n :language: python\n :dedent: 4\n :caption: Receive messages with a Receiver."} {"_id": "q_1335", "text": "Extracts request id from response header."} {"_id": "q_1336", "text": "Returns the status of the specified operation. 
After calling an\n asynchronous operation, you can call Get Operation Status to determine\n whether the operation has succeeded, failed, or is still in progress.\n\n request_id:\n The request ID for the request you wish to track."} {"_id": "q_1337", "text": "Add additional headers for management."} {"_id": "q_1338", "text": "Assumed to be called on Travis, to prepare a package for deployment.\n\n This method prints on stdout for Travis.\n Returns an object to pass directly to sys.exit()"} {"_id": "q_1339", "text": "List certificates in a specified key vault.\n\n The GetCertificates operation returns the set of certificate resources\n in the specified key vault. This operation requires the\n certificates/list permission.\n\n :param vault_base_url: The vault name, for example\n https://myvault.vault.azure.net.\n :type vault_base_url: str\n :param maxresults: Maximum number of results to return in a page. If\n not specified the service will return up to 25 results.\n :type maxresults: int\n :param include_pending: Specifies whether to include certificates\n which are not completely provisioned.\n :type include_pending: bool\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: An iterator like instance of CertificateItem\n :rtype:\n ~azure.keyvault.v7_0.models.CertificateItemPaged[~azure.keyvault.v7_0.models.CertificateItem]\n :raises:\n :class:`KeyVaultErrorException`"} {"_id": "q_1340", "text": "Get list of available service bus regions."} {"_id": "q_1341", "text": "List the service bus namespaces defined on the account."} {"_id": "q_1342", "text": "Create a new service bus namespace.\n\n name:\n Name of the service bus namespace to create.\n region:\n Region to create the namespace in."} {"_id": "q_1343", "text": "Checks to see if the specified service bus namespace is available, or\n if it 
has already been taken.\n\n name:\n Name of the service bus namespace to validate."} {"_id": "q_1344", "text": "Retrieves the topics in the service namespace.\n\n name:\n Name of the service bus namespace."} {"_id": "q_1345", "text": "Retrieves the notification hubs in the service namespace.\n\n name:\n Name of the service bus namespace."} {"_id": "q_1346", "text": "Retrieves the relays in the service namespace.\n\n name:\n Name of the service bus namespace."} {"_id": "q_1347", "text": "This operation gets rollup data for Service Bus metrics queue.\n Rollup data includes the time granularity for the telemetry aggregation as well as\n the retention settings for each time granularity.\n\n name:\n Name of the service bus namespace.\n queue_name:\n Name of the service bus queue in this namespace.\n metric:\n name of a supported metric"} {"_id": "q_1348", "text": "This operation gets rollup data for Service Bus metrics topic.\n Rollup data includes the time granularity for the telemetry aggregation as well as\n the retention settings for each time granularity.\n\n name:\n Name of the service bus namespace.\n topic_name:\n Name of the service bus topic in this namespace.\n metric:\n name of a supported metric"} {"_id": "q_1349", "text": "This operation gets rollup data for Service Bus metrics relay.\n Rollup data includes the time granularity for the telemetry aggregation as well as\n the retention settings for each time granularity.\n\n name:\n Name of the service bus namespace.\n relay_name:\n Name of the service bus relay in this namespace.\n metric:\n name of a supported metric"} {"_id": "q_1350", "text": "Create a virtual environment in a directory."} {"_id": "q_1351", "text": "Create a new Azure SQL Database server.\n\n admin_login:\n The administrator login name for the new server.\n admin_password:\n The administrator login password for the new server.\n location:\n The region to deploy the new server."} {"_id": "q_1352", "text": "Reset the administrator password 
for a server.\n\n server_name:\n Name of the server to change the password.\n admin_password:\n The new administrator password for the server."} {"_id": "q_1353", "text": "Gets quotas for an Azure SQL Database Server.\n\n server_name:\n Name of the server."} {"_id": "q_1354", "text": "Gets the event logs for an Azure SQL Database Server.\n\n server_name:\n Name of the server to retrieve the event logs from.\n start_date:\n The starting date and time of the events to retrieve in UTC format,\n for example '2011-09-28 16:05:00'.\n interval_size_in_minutes:\n Size of the event logs to retrieve (in minutes).\n Valid values are: 5, 60, or 1440.\n event_types:\n The event type of the log entries you want to retrieve.\n Valid values are: \n - connection_successful\n - connection_failed\n - connection_terminated\n - deadlock\n - throttling\n - throttling_long_transaction\n To return all event types pass in an empty string."} {"_id": "q_1355", "text": "Updates existing database details.\n\n server_name:\n Name of the server to contain the new database.\n name:\n Required. The name for the new database. See Naming Requirements\n in Azure SQL Database General Guidelines and Limitations and\n Database Identifiers for more information.\n new_database_name:\n Optional. The new name for the new database.\n service_objective_id:\n Optional. The new service level to apply to the database. For more\n information about service levels, see Azure SQL Database Service\n Tiers and Performance Levels. Use List Service Level Objectives to\n get the correct ID for the desired service objective.\n edition:\n Optional. The new edition for the new database.\n max_size_bytes:\n Optional. The new size of the database in bytes. 
For information on\n available sizes for each edition, see Azure SQL Database Service\n Tiers (Editions)."} {"_id": "q_1356", "text": "Deletes an Azure SQL Database.\n\n server_name:\n Name of the server where the database is located.\n name:\n Name of the database to delete."} {"_id": "q_1357", "text": "List the SQL databases defined on the specified server name"} {"_id": "q_1358", "text": "Close down the handler connection.\n\n If the handler has already closed,\n this operation will do nothing. An optional exception can be passed in to\n indicate that the handler was shutdown due to error.\n It is recommended to open a handler within a context manager as\n opposed to calling the method directly.\n\n .. note:: This operation is not thread-safe.\n\n :param exception: An optional exception if the handler is closing\n due to an error.\n :type exception: Exception\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START open_close_sender_directly]\n :end-before: [END open_close_sender_directly]\n :language: python\n :dedent: 4\n :caption: Explicitly open and close a Sender."} {"_id": "q_1359", "text": "Close down the receiver connection.\n\n If the receiver has already closed, this operation will do nothing. An optional\n exception can be passed in to indicate that the handler was shutdown due to error.\n It is recommended to open a handler within a context manager as\n opposed to calling the method directly.\n The receiver will be implicitly closed on completion of the message iterator,\n however this method will need to be called explicitly if the message iterator is not run\n to completion.\n\n .. note:: This operation is not thread-safe.\n\n :param exception: An optional exception if the handler is closing\n due to an error.\n :type exception: Exception\n\n Example:\n .. 
literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START open_close_receiver_directly]\n :end-before: [END open_close_receiver_directly]\n :language: python\n :dedent: 4\n :caption: Iterate then explicitly close a Receiver."} {"_id": "q_1360", "text": "Get the session state.\n\n Returns None if no state has been set.\n\n :rtype: str\n\n Example:\n .. literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START set_session_state]\n :end-before: [END set_session_state]\n :language: python\n :dedent: 4\n :caption: Getting and setting the state of a session."} {"_id": "q_1361", "text": "Verifies that the challenge is a Bearer challenge and returns the key=value pairs."} {"_id": "q_1362", "text": "Purges data in a Log Analytics workspace by a set of user-defined\n filters.\n\n :param resource_group_name: The name of the resource group to get. The\n name is case insensitive.\n :type resource_group_name: str\n :param workspace_name: Log Analytics workspace name\n :type workspace_name: str\n :param table: Table from which to purge data.\n :type table: str\n :param filters: The set of columns and filters (queries) to run over\n them to purge the resulting data.\n :type filters:\n list[~azure.mgmt.loganalytics.models.WorkspacePurgeBodyFilters]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns object or\n ClientRawResponse if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[object] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[object]]\n :raises: :class:`CloudError`"} {"_id": "q_1363", "text": "Handle connection and service errors.\n\n Called 
internally when an event has failed to send so we\n can parse the error to determine whether we should attempt\n to retry sending the event again.\n Returns the action to take according to error type.\n\n :param error: The error received in the send attempt.\n :type error: Exception\n :rtype: ~uamqp.errors.ErrorAction"} {"_id": "q_1364", "text": "Deletes an existing queue. This operation will also remove all\n associated state including messages in the queue.\n\n queue_name:\n Name of the queue to delete.\n fail_not_exist:\n Specify whether to throw an exception if the queue doesn't exist."} {"_id": "q_1365", "text": "Retrieves an existing queue.\n\n queue_name:\n Name of the queue."} {"_id": "q_1366", "text": "Creates a new topic. Once created, this topic resource manifest is\n immutable.\n\n topic_name:\n Name of the topic to create.\n topic:\n Topic object to create.\n fail_on_exist:\n Specify whether to throw an exception when the topic exists."} {"_id": "q_1367", "text": "Creates a new rule. Once created, this rule's resource manifest is\n immutable.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription.\n rule_name:\n Name of the rule.\n fail_on_exist:\n Specify whether to throw an exception when the rule exists."} {"_id": "q_1368", "text": "Retrieves the description for the specified rule.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription.\n rule_name:\n Name of the rule."} {"_id": "q_1369", "text": "Retrieves the rules that exist under the specified subscription.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription."} {"_id": "q_1370", "text": "Creates a new subscription. 
Once created, this subscription resource\n manifest is immutable.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription.\n fail_on_exist:\n Specify whether to throw an exception when the subscription exists."} {"_id": "q_1371", "text": "Gets an existing subscription.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription."} {"_id": "q_1372", "text": "Enqueues a message into the specified topic. The limit to the number\n of messages which may be present in the topic is governed by the\n message size in MaxTopicSizeInBytes. If this message causes the topic\n to exceed its quota, a quota exceeded error is returned and the\n message will be rejected.\n\n topic_name:\n Name of the topic.\n message:\n Message object containing message body and properties."} {"_id": "q_1373", "text": "Unlocks a message for processing by other receivers on a given\n queue. This operation deletes the lock object, causing the\n message to be unlocked. A message must have first been locked by a\n receiver before this operation is called.\n\n queue_name:\n Name of the queue.\n sequence_number:\n The sequence number of the message to be unlocked as returned in\n BrokerProperties['SequenceNumber'] by the Peek Message operation.\n lock_token:\n The ID of the lock as returned by the Peek Message operation in\n BrokerProperties['LockToken']"} {"_id": "q_1374", "text": "Receive a message from a queue for processing.\n\n queue_name:\n Name of the queue.\n peek_lock:\n Optional. True to retrieve and lock the message. False to read and\n delete the message. Default is True (lock).\n timeout:\n Optional. The timeout parameter is expressed in seconds."} {"_id": "q_1375", "text": "Receive a message from a subscription for processing.\n\n topic_name:\n Name of the topic.\n subscription_name:\n Name of the subscription.\n peek_lock:\n Optional. True to retrieve and lock the message. False to read and\n delete the message. 
Default is True (lock).\n timeout:\n Optional. The timeout parameter is expressed in seconds."} {"_id": "q_1376", "text": "Creates a new Event Hub.\n\n hub_name:\n Name of event hub.\n hub:\n Optional. Event hub properties. Instance of EventHub class.\n hub.message_retention_in_days:\n Number of days to retain the events for this Event Hub.\n hub.status:\n Status of the Event Hub (enabled or disabled).\n hub.user_metadata:\n User metadata.\n hub.partition_count:\n Number of shards on the Event Hub.\n fail_on_exist:\n Specify whether to throw an exception when the event hub exists."} {"_id": "q_1377", "text": "Retrieves an existing event hub.\n\n hub_name:\n Name of the event hub."} {"_id": "q_1378", "text": "Add additional headers for Service Bus."} {"_id": "q_1379", "text": "Check whether the token has expired."} {"_id": "q_1380", "text": "Reset Service Principal Profile of a managed cluster.\n\n Update the service principal Profile for a managed cluster.\n\n :param resource_group_name: The name of the resource group.\n :type resource_group_name: str\n :param resource_name: The name of the managed cluster resource.\n :type resource_name: str\n :param client_id: The ID for the service principal.\n :type client_id: str\n :param secret: The secret password associated with the service\n principal in plain text.\n :type secret: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError`"} {"_id": "q_1381", "text": "Deletes itself if the queue name, 
or the topic name and subscription\n name, are found."} {"_id": "q_1382", "text": "Unlocks itself if the queue name, or the topic name and subscription\n name, are found."} {"_id": "q_1383", "text": "Renews the lock on itself if the queue name, or the topic name and subscription\n name, are found."} {"_id": "q_1384", "text": "Add additional headers to the request for a message request."} {"_id": "q_1385", "text": "Return the current message in the format expected by the batch body."} {"_id": "q_1386", "text": "Gets the health of a Service Fabric cluster.\n\n Use EventsHealthStateFilter to filter the collection of health events\n reported on the cluster based on the health state.\n Similarly, use NodesHealthStateFilter and ApplicationsHealthStateFilter\n to filter the collection of nodes and applications returned based on\n their aggregated health state.\n\n :param nodes_health_state_filter: Allows filtering of the node health\n state objects returned in the result of cluster health query\n based on their health state. The possible values for this parameter\n include integer value of one of the\n following health states. Only nodes that match the filter are\n returned. All nodes are used to evaluate the aggregated health state.\n If not specified, all entries are returned.\n The state values are flag-based enumeration, so the value could be a\n combination of these values obtained using bitwise 'OR' operator.\n For example, if the provided value is 6 then health state of nodes\n with HealthState value of OK (2) and Warning (4) are returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. 
The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type nodes_health_state_filter: int\n :param applications_health_state_filter: Allows filtering of the\n application health state objects returned in the result of cluster\n health\n query based on their health state.\n The possible values for this parameter include integer value obtained\n from members or bitwise operations\n on members of HealthStateFilter enumeration. Only applications that\n match the filter are returned.\n All applications are used to evaluate the aggregated health state. If\n not specified, all entries are returned.\n The state values are flag-based enumeration, so the value could be a\n combination of these values obtained using bitwise 'OR' operator.\n For example, if the provided value is 6 then health state of\n applications with HealthState value of OK (2) and Warning (4) are\n returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type applications_health_state_filter: int\n :param events_health_state_filter: Allows filtering the collection of\n HealthEvent objects returned based on health state.\n The possible values for this parameter include integer value of one of\n the following health states.\n Only events that match the filter are returned. All events are used to\n evaluate the aggregated health state.\n If not specified, all entries are returned. 
The state values are\n flag-based enumeration, so the value could be a combination of these\n values, obtained using the bitwise 'OR' operator. For example, If the\n provided value is 6 then all of the events with HealthState value of\n OK (2) and Warning (4) are returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type events_health_state_filter: int\n :param exclude_health_statistics: Indicates whether the health\n statistics should be returned as part of the query result. False by\n default.\n The statistics show the number of children entities in health state\n Ok, Warning, and Error.\n :type exclude_health_statistics: bool\n :param include_system_application_health_statistics: Indicates whether\n the health statistics should include the fabric:/System application\n health statistics. False by default.\n If IncludeSystemApplicationHealthStatistics is set to true, the health\n statistics include the entities that belong to the fabric:/System\n application.\n Otherwise, the query result includes health statistics only for user\n applications.\n The health statistics must be included in the query result for this\n parameter to be applied.\n :type include_system_application_health_statistics: bool\n :param timeout: The server timeout for performing the operation in\n seconds. This timeout specifies the time duration that the client is\n willing to wait for the requested operation to complete. 
The default\n value for this parameter is 60 seconds.\n :type timeout: long\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: ClusterHealth or ClientRawResponse if raw=true\n :rtype: ~azure.servicefabric.models.ClusterHealth or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`FabricErrorException`"} {"_id": "q_1387", "text": "Gets the health of a Service Fabric cluster using the specified policy.\n\n Use EventsHealthStateFilter to filter the collection of health events\n reported on the cluster based on the health state.\n Similarly, use NodesHealthStateFilter and ApplicationsHealthStateFilter\n to filter the collection of nodes and applications returned based on\n their aggregated health state.\n Use ClusterHealthPolicies to override the health policies used to\n evaluate the health.\n\n :param nodes_health_state_filter: Allows filtering of the node health\n state objects returned in the result of cluster health query\n based on their health state. The possible values for this parameter\n include integer value of one of the\n following health states. Only nodes that match the filter are\n returned. All nodes are used to evaluate the aggregated health state.\n If not specified, all entries are returned.\n The state values are flag-based enumeration, so the value could be a\n combination of these values obtained using bitwise 'OR' operator.\n For example, if the provided value is 6 then health state of nodes\n with HealthState value of OK (2) and Warning (4) are returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. 
The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type nodes_health_state_filter: int\n :param applications_health_state_filter: Allows filtering of the\n application health state objects returned in the result of cluster\n health\n query based on their health state.\n The possible values for this parameter include integer value obtained\n from members or bitwise operations\n on members of HealthStateFilter enumeration. Only applications that\n match the filter are returned.\n All applications are used to evaluate the aggregated health state. If\n not specified, all entries are returned.\n The state values are flag-based enumeration, so the value could be a\n combination of these values obtained using bitwise 'OR' operator.\n For example, if the provided value is 6 then health state of\n applications with HealthState value of OK (2) and Warning (4) are\n returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type applications_health_state_filter: int\n :param events_health_state_filter: Allows filtering the collection of\n HealthEvent objects returned based on health state.\n The possible values for this parameter include integer value of one of\n the following health states.\n Only events that match the filter are returned. 
All events are used to\n evaluate the aggregated health state.\n If not specified, all entries are returned. The state values are\n flag-based enumeration, so the value could be a combination of these\n values, obtained using the bitwise 'OR' operator. For example, If the\n provided value is 6 then all of the events with HealthState value of\n OK (2) and Warning (4) are returned.\n - Default - Default value. Matches any HealthState. The value is zero.\n - None - Filter that doesn't match any HealthState value. Used in\n order to return no results on a given collection of states. The value\n is 1.\n - Ok - Filter that matches input with HealthState value Ok. The value\n is 2.\n - Warning - Filter that matches input with HealthState value Warning.\n The value is 4.\n - Error - Filter that matches input with HealthState value Error. The\n value is 8.\n - All - Filter that matches input with any HealthState value. The\n value is 65535.\n :type events_health_state_filter: int\n :param exclude_health_statistics: Indicates whether the health\n statistics should be returned as part of the query result. False by\n default.\n The statistics show the number of children entities in health state\n Ok, Warning, and Error.\n :type exclude_health_statistics: bool\n :param include_system_application_health_statistics: Indicates whether\n the health statistics should include the fabric:/System application\n health statistics. False by default.\n If IncludeSystemApplicationHealthStatistics is set to true, the health\n statistics include the entities that belong to the fabric:/System\n application.\n Otherwise, the query result includes health statistics only for user\n applications.\n The health statistics must be included in the query result for this\n parameter to be applied.\n :type include_system_application_health_statistics: bool\n :param timeout: The server timeout for performing the operation in\n seconds. 
This timeout specifies the time duration that the client is\n willing to wait for the requested operation to complete. The default\n value for this parameter is 60 seconds.\n :type timeout: long\n :param application_health_policy_map: Defines a map that contains\n specific application health policies for different applications.\n Each entry specifies as key the application name and as value an\n ApplicationHealthPolicy used to evaluate the application health.\n If an application is not specified in the map, the application health\n evaluation uses the ApplicationHealthPolicy found in its application\n manifest or the default application health policy (if no health policy\n is defined in the manifest).\n The map is empty by default.\n :type application_health_policy_map:\n list[~azure.servicefabric.models.ApplicationHealthPolicyMapItem]\n :param cluster_health_policy: Defines a health policy used to evaluate\n the health of the cluster or of a cluster node.\n :type cluster_health_policy:\n ~azure.servicefabric.models.ClusterHealthPolicy\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: ClusterHealth or ClientRawResponse if raw=true\n :rtype: ~azure.servicefabric.models.ClusterHealth or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`FabricErrorException`"} {"_id": "q_1388", "text": "Removes or unregisters a Service Fabric application type from the\n cluster.\n\n This operation can only be performed if all application instances of\n the application type have been deleted. 
Once the application type is\n unregistered, no new application instances can be created for this\n particular application type.\n\n :param application_type_name: The name of the application type.\n :type application_type_name: str\n :param application_type_version: The version of the application type\n as defined in the application manifest.\n :type application_type_version: str\n :param timeout: The server timeout for performing the operation in\n seconds. This timeout specifies the time duration that the client is\n willing to wait for the requested operation to complete. The default\n value for this parameter is 60 seconds.\n :type timeout: long\n :param async_parameter: The flag indicating whether or not unprovision\n should occur asynchronously. When set to true, the unprovision\n operation returns when the request is accepted by the system, and the\n unprovision operation continues without any timeout limit. The default\n value is false. However, we recommend setting it to true for large\n application packages that were provisioned.\n :type async_parameter: bool\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: None or ClientRawResponse if raw=true\n :rtype: None or ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`FabricErrorException`"} {"_id": "q_1389", "text": "Submits a property batch.\n\n Submits a batch of property operations. Either all or none of the\n operations will be committed.\n\n :param name_id: The Service Fabric name, without the 'fabric:' URI\n scheme.\n :type name_id: str\n :param timeout: The server timeout for performing the operation in\n seconds. This timeout specifies the time duration that the client is\n willing to wait for the requested operation to complete. 
The default\n value for this parameter is 60 seconds.\n :type timeout: long\n :param operations: A list of the property batch operations to be\n executed.\n :type operations:\n list[~azure.servicefabric.models.PropertyBatchOperation]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: PropertyBatchInfo or ClientRawResponse if raw=true\n :rtype: ~azure.servicefabric.models.PropertyBatchInfo or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`FabricErrorException`"} {"_id": "q_1390", "text": "Start capturing network packets for the site.\n\n Start capturing network packets for the site.\n\n :param resource_group_name: Name of the resource group to which the\n resource belongs.\n :type resource_group_name: str\n :param name: The name of the web app.\n :type name: str\n :param duration_in_seconds: The duration to keep capturing in seconds.\n :type duration_in_seconds: int\n :param max_frame_length: The maximum frame length in bytes (Optional).\n :type max_frame_length: int\n :param sas_url: The Blob URL to store capture file.\n :type sas_url: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns list or\n ClientRawResponse if raw==True\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[list[~azure.mgmt.web.models.NetworkTrace]]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[list[~azure.mgmt.web.models.NetworkTrace]]]\n :raises:\n :class:`DefaultErrorResponseException`"} {"_id": "q_1391", "text": "Get the difference in 
configuration settings between two web app slots.\n\n Get the difference in configuration settings between two web app slots.\n\n :param resource_group_name: Name of the resource group to which the\n resource belongs.\n :type resource_group_name: str\n :param name: Name of the app.\n :type name: str\n :param slot: Name of the source slot. If a slot is not specified, the\n production slot is used as the source slot.\n :type slot: str\n :param target_slot: Destination deployment slot during swap operation.\n :type target_slot: str\n :param preserve_vnet: true to preserve Virtual Network to\n the slot during swap; otherwise, false.\n :type preserve_vnet: bool\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: An iterator like instance of SlotDifference\n :rtype:\n ~azure.mgmt.web.models.SlotDifferencePaged[~azure.mgmt.web.models.SlotDifference]\n :raises:\n :class:`DefaultErrorResponseException`"} {"_id": "q_1392", "text": "Swaps two deployment slots of an app.\n\n Swaps two deployment slots of an app.\n\n :param resource_group_name: Name of the resource group to which the\n resource belongs.\n :type resource_group_name: str\n :param name: Name of the app.\n :type name: str\n :param slot: Name of the source slot. 
If a slot is not specified, the\n production slot is used as the source slot.\n :type slot: str\n :param target_slot: Destination deployment slot during swap operation.\n :type target_slot: str\n :param preserve_vnet: true to preserve Virtual Network to\n the slot during swap; otherwise, false.\n :type preserve_vnet: bool\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError`"} {"_id": "q_1393", "text": "Execute OData query.\n\n Executes an OData query for events.\n\n :param app_id: ID of the application. This is Application ID from the\n API Access settings blade in the Azure portal.\n :type app_id: str\n :param event_type: The type of events to query; either a standard\n event type (`traces`, `customEvents`, `pageViews`, `requests`,\n `dependencies`, `exceptions`, `availabilityResults`) or `$all` to\n query across all event types. Possible values include: '$all',\n 'traces', 'customEvents', 'pageViews', 'browserTimings', 'requests',\n 'dependencies', 'exceptions', 'availabilityResults',\n 'performanceCounters', 'customMetrics'\n :type event_type: str or ~azure.applicationinsights.models.EventType\n :param timespan: Optional. The timespan over which to retrieve events.\n This is an ISO8601 time period value. 
This timespan is applied in\n addition to any that are specified in the OData expression.\n :type timespan: str\n :param filter: An expression used to filter the returned events\n :type filter: str\n :param search: A free-text search expression to match for whether a\n particular event should be returned\n :type search: str\n :param orderby: A comma-separated list of properties with \\\\"asc\\\\"\n (the default) or \\\\"desc\\\\" to control the order of returned events\n :type orderby: str\n :param select: Limits the properties to just those requested on each\n returned event\n :type select: str\n :param skip: The number of items to skip over before returning events\n :type skip: int\n :param top: The number of events to return\n :type top: int\n :param format: Format for the returned events\n :type format: str\n :param count: Request a count of matching items included with the\n returned events\n :type count: bool\n :param apply: An expression used for aggregation over returned events\n :type apply: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: EventsResults or ClientRawResponse if raw=true\n :rtype: ~azure.applicationinsights.models.EventsResults or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`ErrorResponseException`"} {"_id": "q_1394", "text": "Add a face to a large face list. The input face is specified as an\n image with a targetFace rectangle. 
It returns a persistedFaceId\n representing the added face, and persistedFaceId will not expire.\n\n :param large_face_list_id: Id referencing a particular large face\n list.\n :type large_face_list_id: str\n :param image: An image stream.\n :type image: Generator\n :param user_data: User-specified data about the face for any purpose.\n The maximum length is 1KB.\n :type user_data: str\n :param target_face: A face rectangle to specify the target face to be\n added to a person in the format of \"targetFace=left,top,width,height\".\n E.g. \"targetFace=10,10,100,100\". If there is more than one face in the\n image, targetFace is required to specify which face to add. No\n targetFace means there is only one face detected in the entire image.\n :type target_face: list[int]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param callback: When specified, will be called with each chunk of\n data that is streamed. The callback should take two arguments, the\n bytes of the current chunk of data and the response object. 
If the\n data is uploading, response will be None.\n :type callback: Callable[Bytes, response=None]\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: PersistedFace or ClientRawResponse if raw=true\n :rtype: ~azure.cognitiveservices.vision.face.models.PersistedFace or\n ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`APIErrorException`"} {"_id": "q_1395", "text": "Reset auth_attempted on redirects."} {"_id": "q_1396", "text": "Publishes a batch of events to an Azure Event Grid topic.\n\n :param topic_hostname: The host name of the topic, e.g.\n topic1.westus2-1.eventgrid.azure.net\n :type topic_hostname: str\n :param events: An array of events to be published to Event Grid.\n :type events: list[~azure.eventgrid.models.EventGridEvent]\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :param operation_config: :ref:`Operation configuration\n overrides`.\n :return: None or ClientRawResponse if raw=true\n :rtype: None or ~msrest.pipeline.ClientRawResponse\n :raises:\n :class:`HttpOperationError`"} {"_id": "q_1397", "text": "Moves resources from one resource group to another resource group.\n\n The resources to move must be in the same source resource group. The\n target resource group may be in a different subscription. When moving\n resources, both the source group and the target group are locked for\n the duration of the operation. Write and delete operations are blocked\n on the groups until the move completes. 
.\n\n :param source_resource_group_name: The name of the resource group\n containing the resources to move.\n :type source_resource_group_name: str\n :param resources: The IDs of the resources.\n :type resources: list[str]\n :param target_resource_group: The target resource group.\n :type target_resource_group: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError`"} {"_id": "q_1398", "text": "Create a queue entity.\n\n :param queue_name: The name of the new queue.\n :type queue_name: str\n :param lock_duration: The lock duration in seconds for each message in the queue.\n :type lock_duration: int\n :param max_size_in_megabytes: The max size to allow the queue to grow to.\n :type max_size_in_megabytes: int\n :param requires_duplicate_detection: Whether the queue will require every message within\n a specified time frame to have a unique ID. 
Non-unique messages will be discarded.\n Default value is False.\n :type requires_duplicate_detection: bool\n :param requires_session: Whether the queue will be sessionful, and therefore require all\n messages to have a Session ID and be received by a sessionful receiver.\n Default value is False.\n :type requires_session: bool\n :param default_message_time_to_live: The length of time a message will remain in the queue\n before it is either discarded or moved to the dead letter queue.\n :type default_message_time_to_live: ~datetime.timedelta\n :param dead_lettering_on_message_expiration: Whether to move expired messages to the\n dead letter queue. Default value is False.\n :type dead_lettering_on_message_expiration: bool\n :param duplicate_detection_history_time_window: The period within which all incoming messages\n must have a unique message ID.\n :type duplicate_detection_history_time_window: ~datetime.timedelta\n :param max_delivery_count: The maximum number of times a message will attempt to be delivered\n before it is moved to the dead letter queue.\n :type max_delivery_count: int\n :param enable_batched_operations:\n :type enable_batched_operations: bool\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.common.AzureConflictHttpError if a queue of the same name already exists."} {"_id": "q_1399", "text": "Delete a queue entity.\n\n :param queue_name: The name of the queue to delete.\n :type queue_name: str\n :param fail_not_exist: Whether to raise an exception if the named queue is not\n found. 
If set to True, a ServiceBusResourceNotFound will be raised.\n Default value is False.\n :type fail_not_exist: bool\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the queue is not found\n and `fail_not_exist` is set to True."} {"_id": "q_1400", "text": "Delete a topic entity.\n\n :param topic_name: The name of the topic to delete.\n :type topic_name: str\n :param fail_not_exist: Whether to raise an exception if the named topic is not\n found. If set to True, a ServiceBusResourceNotFound will be raised.\n Default value is False.\n :type fail_not_exist: bool\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the topic is not found\n and `fail_not_exist` is set to True."} {"_id": "q_1401", "text": "Create a subscription entity.\n\n :param topic_name: The name of the topic under which to create the subscription.\n :param subscription_name: The name of the new subscription.\n :type subscription_name: str\n :param lock_duration: The lock duration in seconds for each message in the subscription.\n :type lock_duration: int\n :param requires_session: Whether the subscription will be sessionful, and therefore require all\n messages to have a Session ID and be received by a sessionful receiver.\n Default value is False.\n :type requires_session: bool\n :param default_message_time_to_live: The length of time a message will remain in the subscription\n before it is either discarded or moved to the dead letter queue.\n :type default_message_time_to_live: ~datetime.timedelta\n :param dead_lettering_on_message_expiration: Whether to move expired messages to the\n dead letter queue.
Default value is False.\n :type dead_lettering_on_message_expiration: bool\n :param dead_lettering_on_filter_evaluation_exceptions: Whether to move messages that error on\n filtering into the dead letter queue. Default is False, and the messages will be discarded.\n :type dead_lettering_on_filter_evaluation_exceptions: bool\n :param max_delivery_count: The maximum number of times a message will attempt to be delivered\n before it is moved to the dead letter queue.\n :type max_delivery_count: int\n :param enable_batched_operations:\n :type enable_batched_operations: bool\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.common.AzureConflictHttpError if a subscription of the same name already exists."} {"_id": "q_1402", "text": "Perform an operation to update the properties of the entity.\n\n :returns: The properties of the entity as a dictionary.\n :rtype: dict[str, Any]\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the entity does not exist.\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the endpoint cannot be reached.\n :raises: ~azure.common.AzureHTTPError if the credentials are invalid."} {"_id": "q_1403", "text": "Whether the receiver's lock on a particular session has expired.\n\n :rtype: bool"} {"_id": "q_1404", "text": "Creates an Azure subscription.\n\n :param billing_account_name: The name of the commerce root billing\n account.\n :type billing_account_name: str\n :param invoice_section_name: The name of the invoice section.\n :type invoice_section_name: str\n :param body: The subscription creation parameters.\n :type body:\n ~azure.mgmt.subscription.models.SubscriptionCreationParameters\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n
polling object for personal polling strategy\n :return: An instance of LROPoller that returns\n SubscriptionCreationResult or\n ClientRawResponse if raw==True\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[~azure.mgmt.subscription.models.SubscriptionCreationResult]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[~azure.mgmt.subscription.models.SubscriptionCreationResult]]\n :raises:\n :class:`ErrorResponseException`"} {"_id": "q_1405", "text": "Export logs that show Api requests made by this subscription in the\n given time window to show throttling activities.\n\n :param parameters: Parameters supplied to the LogAnalytics\n getRequestRateByInterval Api.\n :type parameters:\n ~azure.mgmt.compute.v2018_04_01.models.RequestRateByIntervalInput\n :param location: The location upon which virtual-machine-sizes is\n queried.\n :type location: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns\n LogAnalyticsOperationResult or\n ClientRawResponse if raw==True\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[~azure.mgmt.compute.v2018_04_01.models.LogAnalyticsOperationResult]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[~azure.mgmt.compute.v2018_04_01.models.LogAnalyticsOperationResult]]\n :raises: :class:`CloudError`"} {"_id": "q_1406", "text": "Scan output for exceptions\n\n If there is an output from an add task collection call add it to the results.\n\n :param results_queue: Queue containing results of attempted add_collection's\n :type results_queue: collections.deque\n :return: list of TaskAddResults\n :rtype: list[~TaskAddResult]"} {"_id": "q_1407", 
"text": "Main method for worker to run\n\n Pops a chunk of tasks off the collection of pending tasks to be added and submits them to be added.\n\n :param collections.deque results_queue: Queue for worker to output results to"} {"_id": "q_1408", "text": "Will build the actual config for Jinja2, based on SDK config."} {"_id": "q_1409", "text": "Starts an environment by starting all resources inside the environment.\n This operation can take a while to complete.\n\n :param user_name: The name of the user.\n :type user_name: str\n :param environment_id: The resourceId of the environment\n :type environment_id: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError`"} {"_id": "q_1410", "text": "Create message from response.\n\n response:\n response from Service Bus cloud server.\n service_instance:\n the Service Bus client."} {"_id": "q_1411", "text": "Converts entry element to rule object.\n\n The format of xml for rule:\n\n\n\n \n MyProperty='XYZ'\n \n \n set MyProperty2 = 'ABC'\n \n\n\n"} {"_id": "q_1412", "text": "Converts entry element to queue object.\n\n The format of xml response for queue:\n\n 10000\n PT5M\n PT2M\n False\n False\n ...\n"} {"_id": "q_1413", "text": "Converts entry element to topic\n\n The xml format for topic:\n\n \n \n P10675199DT2H48M5.4775807S\n 1024\n false\n P7D\n true\n \n \n"} {"_id": "q_1414", "text": "Converts entry element to subscription\n\n The xml format for subscription:\n\n \n \n PT5M\n false\n 
P10675199DT2H48M5.4775807S\n false\n true\n \n \n"} {"_id": "q_1415", "text": "Creates a new certificate inside the specified account.\n\n :param resource_group_name: The name of the resource group that\n contains the Batch account.\n :type resource_group_name: str\n :param account_name: The name of the Batch account.\n :type account_name: str\n :param certificate_name: The identifier for the certificate. This must\n be made up of algorithm and thumbprint separated by a dash, and must\n match the certificate data in the request. For example SHA1-a3d1c5.\n :type certificate_name: str\n :param parameters: Additional parameters for certificate creation.\n :type parameters:\n ~azure.mgmt.batch.models.CertificateCreateOrUpdateParameters\n :param if_match: The entity state (ETag) version of the certificate to\n update. A value of \"*\" can be used to apply the operation only if the\n certificate already exists. If omitted, this operation will always be\n applied.\n :type if_match: str\n :param if_none_match: Set to '*' to allow a new certificate to be\n created, but to prevent updating an existing certificate. Other values\n will be ignored.\n :type if_none_match: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :return: An instance of AzureOperationPoller that returns Certificate\n or ClientRawResponse if raw=true\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[~azure.mgmt.batch.models.Certificate]\n or ~msrest.pipeline.ClientRawResponse\n :raises: :class:`CloudError`"} {"_id": "q_1416", "text": "Deletes the specified certificate.\n\n :param resource_group_name: The name of the resource group that\n contains the Batch account.\n :type resource_group_name: str\n :param account_name: The name of the Batch account.\n :type account_name: str\n :param certificate_name: The identifier for the certificate. 
This must\n be made up of algorithm and thumbprint separated by a dash, and must\n match the certificate data in the request. For example SHA1-a3d1c5.\n :type certificate_name: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: returns the direct response alongside the\n deserialized response\n :return: An instance of AzureOperationPoller that returns None or\n ClientRawResponse if raw=true\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrest.pipeline.ClientRawResponse\n :raises: :class:`CloudError`"} {"_id": "q_1417", "text": "Return an SDK client initialized with a JSON auth dict.\n\n The easiest way to obtain this content is to call the following CLI command:\n\n .. code:: bash\n\n az ad sp create-for-rbac --sdk-auth\n\n This method will automatically fill in the following client parameters:\n - credentials\n - subscription_id\n - base_url\n - tenant_id\n\n Parameters provided in kwargs will override parameters and be passed directly to the client.\n\n :Example:\n\n .. code:: python\n\n from azure.common.client_factory import get_client_from_json_dict\n from azure.mgmt.compute import ComputeManagementClient\n config_dict = {\n \"clientId\": \"ad735158-65ca-11e7-ba4d-ecb1d756380e\",\n \"clientSecret\": \"b70bb224-65ca-11e7-810c-ecb1d756380e\",\n \"subscriptionId\": \"bfc42d3a-65ca-11e7-95cf-ecb1d756380e\",\n \"tenantId\": \"c81da1d8-65ca-11e7-b1d1-ecb1d756380e\",\n \"activeDirectoryEndpointUrl\": \"https://login.microsoftonline.com\",\n \"resourceManagerEndpointUrl\": \"https://management.azure.com/\",\n \"activeDirectoryGraphResourceId\": \"https://graph.windows.net/\",\n \"sqlManagementEndpointUrl\": \"https://management.core.windows.net:8443/\",\n \"galleryEndpointUrl\": \"https://gallery.azure.com/\",\n \"managementEndpointUrl\": \"https://management.core.windows.net/\"\n }\n client = get_client_from_json_dict(ComputeManagementClient, config_dict)\n\n ..
versionadded:: 1.1.7\n\n :param client_class: An SDK client class\n :param dict config_dict: A config dict.\n :return: An instantiated client"} {"_id": "q_1418", "text": "resp_body is the XML we received\n resp_type is a string, such as Containers,\n return_type is the type we're constructing, such as ContainerEnumResults\n item_type is the type object of the item to be created, such as Container\n\n This function then returns a ContainerEnumResults object with the\n containers member populated with the results."} {"_id": "q_1419", "text": "Get properties from an ElementTree element."} {"_id": "q_1420", "text": "Get a client for a queue entity.\n\n :param queue_name: The name of the queue.\n :type queue_name: str\n :rtype: ~azure.servicebus.servicebus_client.QueueClient\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the queue is not found.\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START get_queue_client]\n :end-before: [END get_queue_client]\n :language: python\n :dedent: 8\n :caption: Get the specific queue client from Service Bus client"} {"_id": "q_1421", "text": "Get clients for all queue entities in the namespace.\n\n :rtype: list[~azure.servicebus.servicebus_client.QueueClient]\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n\n Example:\n ..
literalinclude:: ../examples/test_examples.py\n :start-after: [START list_queues]\n :end-before: [END list_queues]\n :language: python\n :dedent: 4\n :caption: List the queues from Service Bus client"} {"_id": "q_1422", "text": "Get a client for a topic entity.\n\n :param topic_name: The name of the topic.\n :type topic_name: str\n :rtype: ~azure.servicebus.servicebus_client.TopicClient\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n :raises: ~azure.servicebus.common.errors.ServiceBusResourceNotFound if the topic is not found.\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START get_topic_client]\n :end-before: [END get_topic_client]\n :language: python\n :dedent: 8\n :caption: Get the specific topic client from Service Bus client"} {"_id": "q_1423", "text": "Get a client for all topic entities in the namespace.\n\n :rtype: list[~azure.servicebus.servicebus_client.TopicClient]\n :raises: ~azure.servicebus.common.errors.ServiceBusConnectionError if the namespace is not found.\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START list_topics]\n :end-before: [END list_topics]\n :language: python\n :dedent: 4\n :caption: List the topics from Service Bus client"} {"_id": "q_1424", "text": "Receive messages by sequence number that have been previously deferred.\n\n When receiving deferred messages from a partitioned entity, all of the supplied\n sequence numbers must be messages from the same partition.\n\n :param sequence_numbers: A list of the sequence numbers of messages that have been\n deferred.\n :type sequence_numbers: list[int]\n :param mode: The mode with which messages will be retrieved from the entity. The two options\n are PeekLock and ReceiveAndDelete. Messages received with PeekLock must be settled within a given\n lock period before they will be removed from the queue. 
Messages received with ReceiveAndDelete\n will be immediately removed from the queue, and cannot be subsequently rejected or re-received if\n the client fails to process the message. The default mode is PeekLock.\n :type mode: ~azure.servicebus.common.constants.ReceiveSettleMode\n :rtype: list[~azure.servicebus.common.message.Message]\n\n Example:\n .. literalinclude:: ../examples/test_examples.py\n :start-after: [START receive_deferred_messages_service_bus]\n :end-before: [END receive_deferred_messages_service_bus]\n :language: python\n :dedent: 8\n :caption: Get the messages which were deferred using their sequence numbers"} {"_id": "q_1425", "text": "Settle messages that have been previously deferred.\n\n :param settlement: How the messages are to be settled. This must be a string\n of one of the following values: 'completed', 'suspended', 'abandoned'.\n :type settlement: str\n :param messages: A list of deferred messages to be settled.\n :type messages: list[~azure.servicebus.common.message.DeferredMessage]\n\n Example:\n .. 
literalinclude:: ../examples/test_examples.py\n :start-after: [START settle_deferred_messages_service_bus]\n :end-before: [END settle_deferred_messages_service_bus]\n :language: python\n :dedent: 8\n :caption: Settle deferred messages."} {"_id": "q_1426", "text": "Delete a website.\n\n webspace_name:\n The name of the webspace.\n website_name:\n The name of the website.\n delete_empty_server_farm:\n If the site being deleted is the last web site in a server farm,\n you can delete the server farm by setting this to True.\n delete_metrics:\n To also delete the metrics for the site that you are deleting, you\n can set this to True."} {"_id": "q_1427", "text": "Update a web site.\n\n webspace_name:\n The name of the webspace.\n website_name:\n The name of the website.\n state:\n The wanted state ('Running' or 'Stopped' accepted)"} {"_id": "q_1428", "text": "Restart a web site.\n\n webspace_name:\n The name of the webspace.\n website_name:\n The name of the website."} {"_id": "q_1429", "text": "Updates the policies for the specified container registry.\n\n :param resource_group_name: The name of the resource group to which\n the container registry belongs.\n :type resource_group_name: str\n :param registry_name: The name of the container registry.\n :type registry_name: str\n :param quarantine_policy: An object that represents quarantine policy\n for a container registry.\n :type quarantine_policy:\n ~azure.mgmt.containerregistry.v2018_02_01_preview.models.QuarantinePolicy\n :param trust_policy: An object that represents content trust policy\n for a container registry.\n :type trust_policy:\n ~azure.mgmt.containerregistry.v2018_02_01_preview.models.TrustPolicy\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n 
:return: An instance of LROPoller that returns RegistryPolicies or\n ClientRawResponse if raw==True\n :rtype:\n ~msrestazure.azure_operation.AzureOperationPoller[~azure.mgmt.containerregistry.v2018_02_01_preview.models.RegistryPolicies]\n or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[~azure.mgmt.containerregistry.v2018_02_01_preview.models.RegistryPolicies]]\n :raises: :class:`CloudError`"} {"_id": "q_1430", "text": "Completes the restore operation on a managed database.\n\n :param location_name: The name of the region where the resource is\n located.\n :type location_name: str\n :param operation_id: Management operation id that this request tries\n to complete.\n :type operation_id: str\n :param last_backup_name: The last backup name to apply\n :type last_backup_name: str\n :param dict custom_headers: headers that will be added to the request\n :param bool raw: The poller return type is ClientRawResponse, the\n direct response alongside the deserialized response\n :param polling: True for ARMPolling, False for no polling, or a\n polling object for personal polling strategy\n :return: An instance of LROPoller that returns None or\n ClientRawResponse if raw==True\n :rtype: ~msrestazure.azure_operation.AzureOperationPoller[None] or\n ~msrestazure.azure_operation.AzureOperationPoller[~msrest.pipeline.ClientRawResponse[None]]\n :raises: :class:`CloudError`"} {"_id": "q_1431", "text": "Cancel one or more messages that have previously been scheduled and are still pending.\n\n :param sequence_numbers: The sequence numbers of the scheduled messages.\n :type sequence_numbers: int\n\n Example:\n ..
literalinclude:: ../examples/async_examples/test_examples_async.py\n :start-after: [START cancel_schedule_messages]\n :end-before: [END cancel_schedule_messages]\n :language: python\n :dedent: 4\n :caption: Schedule messages."} {"_id": "q_1432", "text": "Reconnect the handler.\n\n If the handler was disconnected from the service with\n a retryable error - attempt to reconnect.\n This method will be called automatically for most retryable errors.\n Also attempts to re-queue any messages that were pending before the reconnect."} {"_id": "q_1433", "text": "Returns the width of the string it would be when displayed."} {"_id": "q_1434", "text": "Drops Characters by unicode not by bytes."} {"_id": "q_1435", "text": "Clears out the previous line and prints a new one."} {"_id": "q_1436", "text": "Creates a status line with appropriate size."} {"_id": "q_1437", "text": "Segments are yielded when they are available\n\n Segments appear on a time line, for dynamic content they are only available at a certain time\n and sometimes for a limited time. 
For static content they are all available at the same time.\n\n :param kwargs: extra args to pass to the segment template\n :return: yields Segments"} {"_id": "q_1438", "text": "Puts a value into a queue but aborts if this thread is closed."} {"_id": "q_1439", "text": "Wrapper around ElementTree.fromstring with some extras.\n\n Provides these extra features:\n - Handles incorrectly encoded XML\n - Allows stripping namespace information\n - Wraps errors in custom exception with a snippet of the data in the message"} {"_id": "q_1440", "text": "Search for a key in a nested dict, or list of nested dicts, and return the values.\n\n :param data: dict/list to search\n :param key: key to find\n :return: matches for key"} {"_id": "q_1441", "text": "Spawn the process defined in `cmd`\n\n parameters are converted to options using the short and long option prefixes;\n if a list is given as the value, the parameter is repeated with each\n value\n\n If timeout is set the spawn will block until the process returns or\n the timeout expires.\n\n :param parameters: optional parameters\n :param arguments: positional arguments\n :param stderr: where to redirect stderr to\n :param timeout: timeout for short lived process\n :param long_option_prefix: long option prefix, default --\n :param short_option_prefix: short option prefix, default -\n :return: spawned process"} {"_id": "q_1442", "text": "Brute force regex based HTML tag parser. This is a rough-and-ready searcher to find HTML tags when\n standards compliance is not required.
Will find tags that are commented out, or inside a script tag, etc.\n\n :param html: HTML page\n :param tag: tag name to find\n :return: generator with Tags"} {"_id": "q_1443", "text": "Attempt to parse a DASH manifest file and return its streams\n\n :param session: Streamlink session instance\n :param url_or_manifest: URL of the manifest file or an XML manifest string\n :return: a dict of name -> DASHStream instances"} {"_id": "q_1444", "text": "Determine which Unicode encoding the JSON text sample is encoded with\n\n RFC4627 (http://www.ietf.org/rfc/rfc4627.txt) suggests that the encoding of JSON text can be determined\n by checking the pattern of NULL bytes in the first 4 octets of the text.\n :param sample: a sample of at least 4 bytes of the JSON text\n :return: the most likely encoding of the JSON text"} {"_id": "q_1445", "text": "Parses JSON from a response."} {"_id": "q_1446", "text": "Parses XML from a response."} {"_id": "q_1447", "text": "Parses a semi-colon delimited list of query parameters.\n\n Example: foo=bar;baz=qux"} {"_id": "q_1448", "text": "Return the message for this LogRecord.\n\n Return the message for this LogRecord after merging any user-supplied\n arguments with the message."} {"_id": "q_1449", "text": "Attempt a login to LiveEdu.tv"} {"_id": "q_1450", "text": "Loads a plugin from the same directory as the calling plugin.\n\n The path used is extracted from the last call in module scope,\n therefore this must be called only from module level in the\n originating plugin or the correct plugin path will not be found."} {"_id": "q_1451", "text": "Update or remove keys from a query string in a URL\n\n :param url: URL to update\n :param qsd: dict of keys to update, a None value leaves it unchanged\n :param remove: list of keys to remove, or \"*\" to remove all\n note: updated keys are never removed, even if unchanged\n :return: updated URL"} {"_id": "q_1452", "text": "Find all the arguments required by name\n\n :param name: name of the argument to
find the dependencies\n\n :return: list of dependent arguments"} {"_id": "q_1453", "text": "Decides where to write the stream.\n\n Depending on arguments it can be one of these:\n - The stdout pipe\n - A subprocess' stdin pipe\n - A named pipe that the subprocess reads from\n - A regular file"} {"_id": "q_1454", "text": "Creates an HTTP server listening on a given host and port.\n\n If host is empty, listen on all available interfaces, and if port is 0,\n listen on a random high port."} {"_id": "q_1455", "text": "Repeatedly accept HTTP connections on a server.\n\n Forever if serving externally, or while a player is running if it is not\n empty."} {"_id": "q_1456", "text": "Continuously output the stream over HTTP."} {"_id": "q_1457", "text": "Prepares a filename to be passed to the player."} {"_id": "q_1458", "text": "Reads data from stream and then writes it to the output."} {"_id": "q_1459", "text": "Decides what to do with the selected stream.\n\n Depending on arguments it can be one of these:\n - Output internal command-line\n - Output JSON representation\n - Continuously output the stream over HTTP\n - Output stream data to selected output"} {"_id": "q_1460", "text": "Fetches streams using correct parameters."} {"_id": "q_1461", "text": "Returns the real stream name of a synonym."} {"_id": "q_1462", "text": "Formats a dict of streams.\n\n Filters out synonyms and displays them next to\n the stream they point to.\n\n Streams are sorted according to their quality\n (based on plugin.stream_weight)."} {"_id": "q_1463", "text": "The URL handler.\n\n Attempts to resolve the URL to a plugin and then attempts\n to fetch a list of available streams.\n\n Proceeds to handle stream if user specified a valid one,\n otherwise output list of valid streams."} {"_id": "q_1464", "text": "Opens a web browser to allow the user to grant Streamlink\n access to their Twitch account."} {"_id": "q_1465", "text": "Console setup."} {"_id": "q_1466", "text": "Sets the global HTTP
settings, such as proxy and headers."} {"_id": "q_1467", "text": "Loads any additional plugins."} {"_id": "q_1468", "text": "Sets Streamlink options."} {"_id": "q_1469", "text": "Fallback if no stream_id was found before"} {"_id": "q_1470", "text": "Returns current value of specified option.\n\n :param key: key of the option"} {"_id": "q_1471", "text": "Returns current value of plugin specific option.\n\n :param plugin: name of the plugin\n :param key: key of the option"} {"_id": "q_1472", "text": "Attempts to find a plugin that can use this URL.\n\n The default protocol (http) will be prefixed to the URL if\n not specified.\n\n Raises :exc:`NoPluginError` on failure.\n\n :param url: a URL to match against loaded plugins\n :param follow_redirect: follow redirects"} {"_id": "q_1473", "text": "Checks if the string value starts with another string."} {"_id": "q_1474", "text": "Checks if the string value contains another string."} {"_id": "q_1475", "text": "Filters out unwanted items using the specified function.\n\n Supports both dicts and sequences, key/value pairs are\n expanded when applied to a dict."} {"_id": "q_1476", "text": "Apply function to each value inside the sequence or dict.\n\n Supports both dicts and sequences, key/value pairs are\n expanded when applied to a dict."} {"_id": "q_1477", "text": "Parses an URL and validates its attributes."} {"_id": "q_1478", "text": "Find a list of XML elements via xpath."} {"_id": "q_1479", "text": "Attempts to parse a M3U8 playlist from a string of data.\n\n If specified, *base_uri* is the base URI that relative URIs will\n be joined together with, otherwise relative URIs will be as is.\n\n If specified, *parser* can be a M3U8Parser subclass to be used\n to parse the data."} {"_id": "q_1480", "text": "Logs in to Steam"} {"_id": "q_1481", "text": "Returns the stream_id contained in the HTML."} {"_id": "q_1482", "text": "Creates a key-function mapping.\n\n The return value from the function should be either\n - A tuple 
containing a name and stream\n - An iterator of tuples containing a name and stream\n\n Any extra arguments will be passed to the function."} {"_id": "q_1483", "text": "Makes a call against the API.\n\n :param entrypoint: API method to call.\n :param params: parameters to include in the request data.\n :param schema: schema to use to validate the data"} {"_id": "q_1484", "text": "Starts a session against Crunchyroll's server.\n It is recommended that you call this method before making any other calls\n to make sure you have a valid session against the server."} {"_id": "q_1485", "text": "Creates a new CrunchyrollAPI object, initiates its session and\n tries to authenticate it either by using saved credentials or the\n user's username and password."} {"_id": "q_1486", "text": "Log 'msg % args' at level 'level' only if condition is fulfilled."} {"_id": "q_1487", "text": "Creates a distributed session.\n\n It calls `MonitoredTrainingSession` to create a :class:`MonitoredSession` for distributed training.\n\n Parameters\n ----------\n task_spec : :class:`TaskSpecDef`.\n The task spec definition from create_task_spec_def()\n checkpoint_dir : str.\n Optional path to a directory where to restore variables.\n scaffold : ``Scaffold``\n A `Scaffold` used for gathering or building supportive ops.\n If not specified, a default one is created. It's used to finalize the graph.\n hooks : list of ``SessionRunHook`` objects.\n Optional\n chief_only_hooks : list of ``SessionRunHook`` objects.\n Activate these hooks if `is_chief==True`, ignore otherwise.\n save_checkpoint_secs : int\n The frequency, in seconds, that a checkpoint is saved\n using a default checkpoint saver. If `save_checkpoint_secs` is set to\n `None`, then the default checkpoint saver isn't used.\n save_summaries_steps : int\n The frequency, in number of global steps, that the\n summaries are written to disk using a default summary saver.
If both\n `save_summaries_steps` and `save_summaries_secs` are set to `None`, then\n the default summary saver isn't used. Default 100.\n save_summaries_secs : int\n The frequency, in secs, that the summaries are written\n to disk using a default summary saver. If both `save_summaries_steps` and\n `save_summaries_secs` are set to `None`, then the default summary saver\n isn't used. Default not enabled.\n config : ``tf.ConfigProto``\n an instance of `tf.ConfigProto` proto used to configure the session.\n It's the `config` argument of constructor of `tf.Session`.\n stop_grace_period_secs : int\n Number of seconds given to threads to stop after\n `close()` has been called.\n log_step_count_steps : int\n The frequency, in number of global steps, that the\n global step/sec is logged.\n\n Examples\n --------\n A simple example for distributed training where all the workers use the same dataset:\n\n >>> task_spec = TaskSpec()\n >>> with tf.device(task_spec.device_fn()):\n >>> tensors = create_graph()\n >>> with tl.DistributedSession(task_spec=task_spec,\n ... 
checkpoint_dir='/tmp/ckpt') as session:\n >>> while not session.should_stop():\n >>> session.run(tensors)\n\n An example where the dataset is shared among the workers\n (see https://www.tensorflow.org/programmers_guide/datasets):\n\n >>> task_spec = TaskSpec()\n >>> # dataset is a :class:`tf.data.Dataset` with the raw data\n >>> dataset = create_dataset()\n >>> if task_spec is not None:\n >>> dataset = dataset.shard(task_spec.num_workers, task_spec.shard_index)\n >>> # shuffle or apply a map function to the new sharded dataset, for example:\n >>> dataset = dataset.shuffle(buffer_size=10000)\n >>> dataset = dataset.batch(batch_size)\n >>> dataset = dataset.repeat(num_epochs)\n >>> # create the iterator for the dataset and the input tensor\n >>> iterator = dataset.make_one_shot_iterator()\n >>> next_element = iterator.get_next()\n >>> with tf.device(task_spec.device_fn()):\n >>> # next_element is the input for the graph\n >>> tensors = create_graph(next_element)\n >>> with tl.DistributedSession(task_spec=task_spec,\n ... 
checkpoint_dir='/tmp/ckpt') as session:\n >>> while not session.should_stop():\n >>> session.run(tensors)\n\n References\n ----------\n - `MonitoredTrainingSession `__"} {"_id": "q_1488", "text": "A helper function that shows how to train and validate a model at the same time.\n\n Parameters\n ----------\n validate_step_size : int\n Validate the training network every N steps."} {"_id": "q_1489", "text": "A generic function to load mnist-like dataset.\n\n Parameters:\n ----------\n shape : tuple\n The shape of digit images.\n path : str\n The path that the data is downloaded to.\n name : str\n The dataset name you want to use(the default is 'mnist').\n url : str\n The url of dataset(the default is 'http://yann.lecun.com/exdb/mnist/')."} {"_id": "q_1490", "text": "Load Matt Mahoney's dataset.\n\n Download a text file from Matt Mahoney's website\n if not present, and make sure it's the right size.\n Extract the first file enclosed in a zip file as a list of words.\n This dataset can be used for Word Embedding.\n\n Parameters\n ----------\n path : str\n The path that the data is downloaded to, defaults is ``data/mm_test8/``.\n\n Returns\n --------\n list of str\n The raw text data e.g. [.... 'their', 'families', 'who', 'were', 'expelled', 'from', 'jerusalem', ...]\n\n Examples\n --------\n >>> words = tl.files.load_matt_mahoney_text8_dataset()\n >>> print('Data size', len(words))"} {"_id": "q_1491", "text": "Load IMDB dataset.\n\n Parameters\n ----------\n path : str\n The path that the data is downloaded to, defaults is ``data/imdb/``.\n nb_words : int\n Number of words to get.\n skip_top : int\n Top most frequent words to ignore (they will appear as oov_char value in the sequence data).\n maxlen : int\n Maximum sequence length. Any longer sequence will be truncated.\n seed : int\n Seed for reproducible data shuffling.\n start_char : int\n The start of a sequence will be marked with this character. 
Set to 1 because 0 is usually the padding character.\n oov_char : int\n Words that were cut out because of the num_words or skip_top limit will be replaced with this character.\n index_from : int\n Index actual words with this index and higher.\n\n Examples\n --------\n >>> X_train, y_train, X_test, y_test = tl.files.load_imdb_dataset(\n ... nb_words=20000, test_split=0.2)\n >>> print('X_train.shape', X_train.shape)\n (20000,) [[1, 62, 74, ... 1033, 507, 27],[1, 60, 33, ... 13, 1053, 7]..]\n >>> print('y_train.shape', y_train.shape)\n (20000,) [1 0 0 ..., 1 0 1]\n\n References\n -----------\n - `Modified from keras. `__"} {"_id": "q_1492", "text": "Load Nietzsche dataset.\n\n Parameters\n ----------\n path : str\n The path that the data is downloaded to, defaults is ``data/nietzsche/``.\n\n Returns\n --------\n str\n The content.\n\n Examples\n --------\n >>> see tutorial_generate_text.py\n >>> words = tl.files.load_nietzsche_dataset()\n >>> words = basic_clean_str(words)\n >>> words = words.split()"} {"_id": "q_1493", "text": "Load WMT'15 English-to-French translation dataset.\n\n It will download the data from the WMT'15 Website (10^9-French-English corpus), and the 2013 news test from the same site as development set.\n Returns the directories of training data and test data.\n\n Parameters\n ----------\n path : str\n The path that the data is downloaded to, defaults is ``data/wmt_en_fr/``.\n\n References\n ----------\n - Code modified from /tensorflow/models/rnn/translation/data_utils.py\n\n Notes\n -----\n Usually, it will take a long time to download this dataset."} {"_id": "q_1494", "text": "Download file from Google Drive.\n\n See ``tl.files.load_celebA_dataset`` for example.\n\n Parameters\n --------------\n ID : str\n The driver ID.\n destination : str\n The destination for save file."} {"_id": "q_1495", "text": "Load CelebA dataset\n\n Return a list of image path.\n\n Parameters\n -----------\n path : str\n The path that the data is downloaded to, 
default is ``data/celebA/``."} {"_id": "q_1496", "text": "Assign the given parameters to the TensorLayer network.\n\n Parameters\n ----------\n sess : Session\n TensorFlow Session.\n params : list of array\n A list of parameters (array) in order.\n network : :class:`Layer`\n The network to be assigned.\n\n Returns\n --------\n list of operations\n A list of TensorFlow ops, in order, that assign params. Supports running sess.run(ops) manually.\n\n Examples\n --------\n - See ``tl.files.save_npz``\n\n References\n ----------\n - `Assign value to a TensorFlow variable `__"} {"_id": "q_1497", "text": "Load model from npz and assign to a network.\n\n Parameters\n -------------\n sess : Session\n TensorFlow Session.\n name : str\n The name of the `.npz` file.\n network : :class:`Layer`\n The network to be assigned.\n\n Returns\n --------\n False or network\n Returns False if the model does not exist.\n\n Examples\n --------\n - See ``tl.files.save_npz``"} {"_id": "q_1498", "text": "Given parameters and a file name, save the parameters as a dictionary into a `.npz` file.\n\n Use ``tl.files.load_and_assign_npz_dict()`` to restore.\n\n Parameters\n ----------\n save_list : list of parameters\n A list of parameters (tensor) to be saved.\n name : str\n The name of the `.npz` file.\n sess : Session\n TensorFlow Session."} {"_id": "q_1499", "text": "Load parameters from a `ckpt` file.\n\n Parameters\n ------------\n sess : Session\n TensorFlow Session.\n mode_name : str\n The name of the model, default is ``model.ckpt``.\n save_dir : str\n The path / file directory to the `ckpt`, default is ``checkpoint``.\n var_list : list of tensor\n The parameters / variables (tensor) to be saved. 
If empty, save all global variables (default).\n is_latest : boolean\n Whether to load the latest `ckpt`; if False, load the `ckpt` with the name ``mode_name``.\n printable : boolean\n Whether to print all parameter information.\n\n Examples\n ----------\n - Save all global parameters.\n\n >>> tl.files.save_ckpt(sess=sess, mode_name='model.ckpt', save_dir='model', printable=True)\n\n - Save specific parameters.\n\n >>> tl.files.save_ckpt(sess=sess, mode_name='model.ckpt', var_list=net.all_params, save_dir='model', printable=True)\n\n - Load latest ckpt.\n\n >>> tl.files.load_ckpt(sess=sess, var_list=net.all_params, save_dir='model', printable=True)\n\n - Load specific ckpt.\n\n >>> tl.files.load_ckpt(sess=sess, mode_name='model.ckpt', var_list=net.all_params, save_dir='model', is_latest=False, printable=True)"} {"_id": "q_1500", "text": "Load `.npy` file.\n\n Parameters\n ------------\n path : str\n Path to the file (optional).\n name : str\n File name.\n\n Examples\n ---------\n - see tl.files.save_any_to_npy()"} {"_id": "q_1501", "text": "Checks if a file exists in working_directory, otherwise tries to download the file,\n and optionally also tries to extract the file if its format is \".zip\" or \".tar\"\n\n Parameters\n -----------\n filename : str\n The name of the (to be) downloaded file.\n working_directory : str\n A folder path to search for the file in and download the file to\n url : str\n The URL to download the file from\n extract : boolean\n If True, tries to uncompress the downloaded file if it is a \".tar.gz/.tar.bz2\" or \".zip\" file, default is False.\n expected_bytes : int or None\n If set, tries to verify that the downloaded file is of the specified size, otherwise raises an Exception; default is None, which corresponds to no check being performed.\n\n Returns\n ----------\n str\n File path of the downloaded (uncompressed) file.\n\n Examples\n --------\n >>> down_file = tl.files.maybe_download_and_extract(filename='train-images-idx3-ubyte.gz',\n ... 
working_directory='data/',\n ... url_source='http://yann.lecun.com/exdb/mnist/')\n >>> tl.files.maybe_download_and_extract(filename='ADEChallengeData2016.zip',\n ... working_directory='data/',\n ... url_source='http://sceneparsing.csail.mit.edu/data/',\n ... extract=True)"} {"_id": "q_1502", "text": "Process a batch of data with a given function using threading.\n\n Usually used for data augmentation.\n\n Parameters\n -----------\n data : numpy.array or others\n The data to be processed.\n thread_count : int\n The number of threads to use.\n fn : function\n The function for data processing.\n more args : the args for `fn`\n See Examples below.\n\n Examples\n --------\n Process images.\n\n >>> images, _, _, _ = tl.files.load_cifar10_dataset(shape=(-1, 32, 32, 3))\n >>> images = tl.prepro.threading_data(images[0:32], tl.prepro.zoom, zoom_range=[0.5, 1])\n\n Customized image preprocessing function.\n\n >>> def distort_img(x):\n >>> x = tl.prepro.flip_axis(x, axis=0, is_random=True)\n >>> x = tl.prepro.flip_axis(x, axis=1, is_random=True)\n >>> x = tl.prepro.crop(x, 100, 100, is_random=True)\n >>> return x\n >>> images = tl.prepro.threading_data(images, distort_img)\n\n Process images and masks together (usually used for image segmentation).\n\n >>> X, Y --> [batch_size, row, col, 1]\n >>> data = tl.prepro.threading_data([_ for _ in zip(X, Y)], tl.prepro.zoom_multi, zoom_range=[0.5, 1], is_random=True)\n data --> [batch_size, 2, row, col, 1]\n >>> X_, Y_ = data.transpose((1,0,2,3,4))\n X_, Y_ --> [batch_size, row, col, 1]\n >>> tl.vis.save_image(X_, 'images.png')\n >>> tl.vis.save_image(Y_, 'masks.png')\n\n Process images and masks together by using ``thread_count``.\n\n >>> X, Y --> [batch_size, row, col, 1]\n >>> data = tl.prepro.threading_data(X, tl.prepro.zoom_multi, 8, zoom_range=[0.5, 1], is_random=True)\n data --> [batch_size, 2, row, col, 1]\n >>> X_, Y_ = data.transpose((1,0,2,3,4))\n X_, Y_ --> [batch_size, row, col, 1]\n >>> tl.vis.save_image(X_, 
'after.png')\n >>> tl.vis.save_image(Y_, 'before.png')\n\n Customized function for processing images and masks together.\n\n >>> def distort_img(data):\n >>> x, y = data\n >>> x, y = tl.prepro.flip_axis_multi([x, y], axis=0, is_random=True)\n >>> x, y = tl.prepro.flip_axis_multi([x, y], axis=1, is_random=True)\n >>> x, y = tl.prepro.crop_multi([x, y], 100, 100, is_random=True)\n >>> return x, y\n\n >>> X, Y --> [batch_size, row, col, channel]\n >>> data = tl.prepro.threading_data([_ for _ in zip(X, Y)], distort_img)\n >>> X_, Y_ = data.transpose((1,0,2,3,4))\n\n Returns\n -------\n list or numpyarray\n The processed results.\n\n References\n ----------\n - `python queue `__\n - `run with limited queue `__"} {"_id": "q_1503", "text": "Projective transform by given coordinates, usually 4 coordinates.\n\n see `scikit-image `__.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n src : list or numpy\n The original coordinates, usually 4 coordinates of (width, height).\n dst : list or numpy\n The coordinates after transformation, the number of coordinates is the same with src.\n map_args : dictionary or None\n Keyword arguments passed to inverse map.\n output_shape : tuple of 2 int\n Shape of the output image generated. By default the shape of the input image is preserved. Note that, even for multi-band images, only rows and columns need to be specified.\n order : int\n The order of interpolation. The order has to be in the range 0-5:\n - 0 Nearest-neighbor\n - 1 Bi-linear (default)\n - 2 Bi-quadratic\n - 3 Bi-cubic\n - 4 Bi-quartic\n - 5 Bi-quintic\n mode : str\n One of `constant` (default), `edge`, `symmetric`, `reflect` or `wrap`.\n Points outside the boundaries of the input are filled according to the given mode. 
Modes match the behaviour of numpy.pad.\n cval : float\n Used in conjunction with mode `constant`, the value outside the image boundaries.\n clip : boolean\n Whether to clip the output to the range of values of the input image. This is enabled by default, since higher order interpolation may produce values outside the given input range.\n preserve_range : boolean\n Whether to keep the original range of values. Otherwise, the input image is converted according to the conventions of img_as_float.\n\n Returns\n -------\n numpy.array\n A processed image.\n\n Examples\n --------\n Assume X is an image from CIFAR-10, i.e. shape == (32, 32, 3)\n\n >>> src = [[0,0],[0,32],[32,0],[32,32]] # [w, h]\n >>> dst = [[10,10],[0,32],[32,0],[32,32]]\n >>> x = tl.prepro.projective_transform_by_points(X, src, dst)\n\n References\n -----------\n - `scikit-image : geometric transformations `__\n - `scikit-image : examples `__"} {"_id": "q_1504", "text": "Rotate an image randomly or non-randomly.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n rg : int or float\n Degree to rotate, usually 0 ~ 180.\n is_random : boolean\n If True, randomly rotate. Default is False\n row_index col_index and channel_index : int\n Index of row, col and channel, default (0, 1, 2), for theano (1, 2, 0).\n fill_mode : str\n Method to fill missing pixel, default `nearest`, more options `constant`, `reflect` or `wrap`, see `scipy ndimage affine_transform `__\n cval : float\n Value used for points outside the boundaries of the input if mode=`constant`. Default is 0.0\n order : int\n The order of interpolation. The order has to be in the range 0-5. 
See ``tl.prepro.affine_transform`` and `scipy ndimage affine_transform `__\n\n Returns\n -------\n numpy.array\n A processed image.\n\n Examples\n ---------\n >>> x --> [row, col, 1]\n >>> x = tl.prepro.rotation(x, rg=40, is_random=False)\n >>> tl.vis.save_image(x, 'im.png')"} {"_id": "q_1505", "text": "Randomly or centrally crop an image.\n\n Parameters\n ----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n wrg : int\n Size of width.\n hrg : int\n Size of height.\n is_random : boolean\n If True, randomly crop, else central crop. Default is False.\n row_index: int\n Index of row.\n col_index: int\n Index of column.\n\n Returns\n -------\n numpy.array\n A processed image."} {"_id": "q_1506", "text": "Flip the axes of multiple images together, such as flipping left-right or up-down, randomly or non-randomly.\n\n Parameters\n -----------\n x : list of numpy.array\n List of images with dimension of [n_images, row, col, channel] (default).\n others : args\n See ``tl.prepro.flip_axis``.\n\n Returns\n -------\n numpy.array\n A list of processed images."} {"_id": "q_1507", "text": "Shift an image randomly or non-randomly.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n wrg : float\n Percentage of shift in axis x, usually -0.25 ~ 0.25.\n hrg : float\n Percentage of shift in axis y, usually -0.25 ~ 0.25.\n is_random : boolean\n If True, randomly shift. Default is False.\n row_index col_index and channel_index : int\n Index of row, col and channel, default (0, 1, 2), for theano (1, 2, 0).\n fill_mode : str\n Method to fill missing pixel, default `nearest`, more options `constant`, `reflect` or `wrap`, see `scipy ndimage affine_transform `__\n cval : float\n Value used for points outside the boundaries of the input if mode='constant'. Default is 0.0.\n order : int\n The order of interpolation. The order has to be in the range 0-5. 
See ``tl.prepro.affine_transform`` and `scipy ndimage affine_transform `__\n\n Returns\n -------\n numpy.array\n A processed image."} {"_id": "q_1508", "text": "Change the brightness of a single image, randomly or non-randomly.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n gamma : float\n Non-negative real number. Default value is 1.\n - Smaller than 1 means brighter.\n - If `is_random` is True, gamma is sampled from the range (1-gamma, 1+gamma).\n gain : float\n The constant multiplier. Default value is 1.\n is_random : boolean\n If True, randomly change brightness. Default is False.\n\n Returns\n -------\n numpy.array\n A processed image.\n\n References\n -----------\n - `skimage.exposure.adjust_gamma `__\n - `chinese blog `__"} {"_id": "q_1509", "text": "Perform illumination augmentation for a single image, randomly or non-randomly.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n gamma : float\n Change brightness (the same as ``tl.prepro.brightness``)\n - if is_random=False, one float number, smaller than one means brighter, greater than one means darker.\n - if is_random=True, tuple of two float numbers, (min, max).\n contrast : float\n Change contrast.\n - if is_random=False, one float number, smaller than one means blur.\n - if is_random=True, tuple of two float numbers, (min, max).\n saturation : float\n Change saturation.\n - if is_random=False, one float number, smaller than one means less saturation.\n - if is_random=True, tuple of two float numbers, (min, max).\n is_random : boolean\n If True, randomly change illumination. 
Default is False.\n\n Returns\n -------\n numpy.array\n A processed image.\n\n Examples\n ---------\n Random\n\n >>> x = tl.prepro.illumination(x, gamma=(0.5, 5.0), contrast=(0.3, 1.0), saturation=(0.7, 1.0), is_random=True)\n\n Non-random\n\n >>> x = tl.prepro.illumination(x, 0.5, 0.6, 0.8, is_random=False)"} {"_id": "q_1510", "text": "Adjust hue of an RGB image.\n\n This is a convenience method that converts an RGB image to float representation, converts it to HSV, add an offset to the hue channel, converts back to RGB and then back to the original data type.\n For TF, see `tf.image.adjust_hue `__.and `tf.image.random_hue `__.\n\n Parameters\n -----------\n im : numpy.array\n An image with values between 0 and 255.\n hout : float\n The scale value for adjusting hue.\n - If is_offset is False, set all hue values to this value. 0 is red; 0.33 is green; 0.66 is blue.\n - If is_offset is True, add this value as the offset to the hue channel.\n is_offset : boolean\n Whether `hout` is added on HSV as offset or not. Default is True.\n is_clip : boolean\n If HSV value smaller than 0, set to 0. Default is True.\n is_random : boolean\n If True, randomly change hue. 
Default is False.\n\n Returns\n -------\n numpy.array\n A processed image.\n\n Examples\n ---------\n Random, add a random value between -0.2 and 0.2 as the offset to every hue values.\n\n >>> im_hue = tl.prepro.adjust_hue(image, hout=0.2, is_offset=True, is_random=False)\n\n Non-random, make all hue to green.\n\n >>> im_green = tl.prepro.adjust_hue(image, hout=0.66, is_offset=False, is_random=False)\n\n References\n -----------\n - `tf.image.random_hue `__.\n - `tf.image.adjust_hue `__.\n - `StackOverflow: Changing image hue with python PIL `__."} {"_id": "q_1511", "text": "Resize an image by given output size and method.\n\n Warning, this function will rescale the value to [0, 255].\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n size : list of 2 int or None\n For height and width.\n interp : str\n Interpolation method for re-sizing (`nearest`, `lanczos`, `bilinear`, `bicubic` (default) or `cubic`).\n mode : str\n The PIL image mode (`P`, `L`, etc.) 
to convert image before resizing.\n\n Returns\n -------\n numpy.array\n A processed image.\n\n References\n ------------\n - `scipy.misc.imresize `__"} {"_id": "q_1512", "text": "Normalize every pixel by the same given mean and std, which are usually\n computed from all examples.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n mean : float\n Value for subtraction.\n std : float\n Value for division.\n epsilon : float\n A small positive value for dividing the standard deviation.\n\n Returns\n -------\n numpy.array\n A processed image."} {"_id": "q_1513", "text": "Apply ZCA whitening on an image by given principal components matrix.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] (default).\n principal_components : matrix\n Matrix from ``get_zca_whitening_principal_components_img``.\n\n Returns\n -------\n numpy.array\n A processed image."} {"_id": "q_1514", "text": "Randomly set some pixels to zero by a given keeping probability.\n\n Parameters\n -----------\n x : numpy.array\n An image with dimension of [row, col, channel] or [row, col].\n keep : float\n The keeping probability (0, 1); the lower it is, the more values will be set to zero.\n\n Returns\n -------\n numpy.array\n A processed image."} {"_id": "q_1515", "text": "Input a list of points, return a 2D image.\n\n Parameters\n --------------\n list_points : list of 2 int\n [[x, y], [x, y]..] for point coordinates.\n size : tuple of 2 int\n (w, h) for output size.\n val : float or int\n For the contour value.\n\n Returns\n -------\n numpy.array\n An image."} {"_id": "q_1516", "text": "Parse darknet annotation format into two lists for class and bounding box.\n\n Input a list of [[class, x, y, w, h], ...], return two lists of [class ...] and [[x, y, w, h], ...].\n\n Parameters\n ------------\n annotations : list of list\n A list of class and bounding boxes of images e.g. 
[[class, x, y, w, h], ...]\n\n Returns\n -------\n list of int\n List of class labels.\n\n list of list of 4 numbers\n List of bounding box."} {"_id": "q_1517", "text": "Resize an image, and compute the new bounding box coordinates.\n\n Parameters\n -------------\n im : numpy.array\n An image with dimension of [row, col, channel] (default).\n coords : list of list of 4 int/float or None\n Coordinates [[x, y, w, h], [x, y, w, h], ...]\n size interp and mode : args\n See ``tl.prepro.imresize``.\n is_rescale : boolean\n Set to True, if the input coordinates are rescaled to [0, 1], then return the original coordinates. Default is False.\n\n Returns\n -------\n numpy.array\n A processed image\n list of list of 4 numbers\n A list of new bounding boxes.\n\n Examples\n --------\n >>> im = np.zeros([80, 100, 3]) # as an image with shape width=100, height=80\n >>> _, coords = obj_box_imresize(im, coords=[[20, 40, 30, 30], [10, 20, 20, 20]], size=[160, 200], is_rescale=False)\n >>> print(coords)\n [[40, 80, 60, 60], [20, 40, 40, 40]]\n >>> _, coords = obj_box_imresize(im, coords=[[20, 40, 30, 30]], size=[40, 100], is_rescale=False)\n >>> print(coords)\n [[20, 20, 30, 15]]\n >>> _, coords = obj_box_imresize(im, coords=[[20, 40, 30, 30]], size=[60, 150], is_rescale=False)\n >>> print(coords)\n [[30, 30, 45, 22]]\n >>> im2, coords = obj_box_imresize(im, coords=[[0.2, 0.4, 0.3, 0.3]], size=[160, 200], is_rescale=True)\n >>> print(coords, im2.shape)\n [[0.2, 0.4, 0.3, 0.3]] (160, 200, 3)"} {"_id": "q_1518", "text": "Return mask for sequences.\n\n Parameters\n -----------\n sequences : list of list of int\n All sequences where each row is a sequence.\n pad_val : int\n The pad value.\n\n Returns\n ----------\n list of list of int\n The mask.\n\n Examples\n ---------\n >>> sentences_ids = [[4, 0, 5, 3, 0, 0],\n ... 
[5, 3, 9, 4, 9, 0]]\n >>> mask = sequences_get_mask(sentences_ids, pad_val=0)\n [[1 1 1 1 0 0]\n [1 1 1 1 1 0]]"} {"_id": "q_1519", "text": "Flip an image and corresponding keypoints.\n\n Parameters\n -----------\n image : 3 channel image\n The given image for augmentation.\n annos : list of list of floats\n The keypoints annotation of people.\n mask : single channel image or None\n The mask if available.\n prob : float, 0 to 1\n The probability to flip the image; if 1, always flip the image.\n flip_list : tuple of int\n Denotes how the keypoint indices are changed after flipping, which is required for pose estimation tasks.\n The left and right body parts should be maintained rather than switched.\n (Default: COCO format).\n Set to an empty tuple if you don't need to maintain left and right information.\n\n Returns\n ----------\n preprocessed image, annos, mask"} {"_id": "q_1520", "text": "Take a 1D float array of rewards and compute discounted rewards for an\n episode. When encountering a non-zero value, consider it as the end of an episode.\n\n Parameters\n ----------\n rewards : list\n List of rewards\n gamma : float\n Discount factor\n mode : int\n Mode for computing the discounted rewards.\n - If mode == 0, reset the discount process when encountering a non-zero reward (Ping-Pong game).\n - If mode == 1, do not reset the discount process.\n\n Returns\n --------\n list of float\n The discounted rewards.\n\n Examples\n ----------\n >>> rewards = np.asarray([0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1])\n >>> gamma = 0.9\n >>> discount_rewards = tl.rein.discount_episode_rewards(rewards, gamma)\n >>> print(discount_rewards)\n [ 0.72899997 0.81 0.89999998 1. 0.72899997 0.81\n 0.89999998 1. 0.72899997 0.81 0.89999998 1. ]\n >>> discount_rewards = tl.rein.discount_episode_rewards(rewards, gamma, mode=1)\n >>> print(discount_rewards)\n [ 1.52110755 1.69011939 1.87791049 2.08656716 1.20729685 1.34144104\n 1.49048996 1.65610003 0.72899997 0.81 0.89999998 1. 
]"} {"_id": "q_1521", "text": "Calculate the loss for Policy Gradient Network.\n\n Parameters\n ----------\n logits : tensor\n The network outputs without softmax. This function implements softmax inside.\n actions : tensor or placeholder\n The agent actions.\n rewards : tensor or placeholder\n The rewards.\n\n Returns\n --------\n Tensor\n The TensorFlow loss function.\n\n Examples\n ----------\n >>> states_batch_pl = tf.placeholder(tf.float32, shape=[None, D])\n >>> network = InputLayer(states_batch_pl, name='input')\n >>> network = DenseLayer(network, n_units=H, act=tf.nn.relu, name='relu1')\n >>> network = DenseLayer(network, n_units=3, name='out')\n >>> probs = network.outputs\n >>> sampling_prob = tf.nn.softmax(probs)\n >>> actions_batch_pl = tf.placeholder(tf.int32, shape=[None])\n >>> discount_rewards_batch_pl = tf.placeholder(tf.float32, shape=[None])\n >>> loss = tl.rein.cross_entropy_reward_loss(probs, actions_batch_pl, discount_rewards_batch_pl)\n >>> train_op = tf.train.RMSPropOptimizer(learning_rate, decay_rate).minimize(loss)"} {"_id": "q_1522", "text": "Log weight.\n\n Parameters\n -----------\n probs : tensor\n If it is a network output, usually we should scale it to [0, 1] via softmax.\n weights : tensor\n The weights.\n\n Returns\n --------\n Tensor\n The Tensor after appling the log weighted expression."} {"_id": "q_1523", "text": "Softmax cross-entropy operation, returns the TensorFlow expression of cross-entropy for two distributions,\n it implements softmax internally. 
See ``tf.nn.sparse_softmax_cross_entropy_with_logits``.\n\n Parameters\n ----------\n output : Tensor\n A batch of distribution with shape: [batch_size, num of classes].\n target : Tensor\n A batch of index with shape: [batch_size, ].\n name : string\n Name of this loss.\n\n Examples\n --------\n >>> ce = tl.cost.cross_entropy(y_logits, y_target_logits, 'my_loss')\n\n References\n -----------\n - About cross-entropy: ``__.\n - The code is borrowed from: ``__."} {"_id": "q_1524", "text": "Sigmoid cross-entropy operation, see ``tf.nn.sigmoid_cross_entropy_with_logits``.\n\n Parameters\n ----------\n output : Tensor\n A batch of distribution with shape: [batch_size, num of classes].\n target : Tensor\n A batch of index with shape: [batch_size, ].\n name : string\n Name of this loss."} {"_id": "q_1525", "text": "Binary cross entropy operation.\n\n Parameters\n ----------\n output : Tensor\n Tensor with type of `float32` or `float64`.\n target : Tensor\n The target distribution, format the same with `output`.\n epsilon : float\n A small value to avoid output to be zero.\n name : str\n An optional name to attach to this function.\n\n References\n -----------\n - `ericjang-DRAW `__"} {"_id": "q_1526", "text": "Return the TensorFlow expression of normalized mean-square-error of two distributions.\n\n Parameters\n ----------\n output : Tensor\n 2D, 3D or 4D tensor i.e. [batch_size, n_feature], [batch_size, height, width] or [batch_size, height, width, channel].\n target : Tensor\n The target distribution, format the same with `output`.\n name : str\n An optional name to attach to this function."} {"_id": "q_1527", "text": "Returns the expression of cross-entropy of two sequences, implement\n softmax internally. 
Normally be used for Dynamic RNN with Synced sequence input and output.\n\n Parameters\n -----------\n logits : Tensor\n 2D tensor with shape of [batch_size * ?, n_classes], `?` means dynamic IDs for each example.\n - Can be get from `DynamicRNNLayer` by setting ``return_seq_2d`` to `True`.\n target_seqs : Tensor\n int of tensor, like word ID. [batch_size, ?], `?` means dynamic IDs for each example.\n input_mask : Tensor\n The mask to compute loss, it has the same size with `target_seqs`, normally 0 or 1.\n return_details : boolean\n Whether to return detailed losses.\n - If False (default), only returns the loss.\n - If True, returns the loss, losses, weights and targets (see source code).\n\n Examples\n --------\n >>> batch_size = 64\n >>> vocab_size = 10000\n >>> embedding_size = 256\n >>> input_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name=\"input\")\n >>> target_seqs = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name=\"target\")\n >>> input_mask = tf.placeholder(dtype=tf.int64, shape=[batch_size, None], name=\"mask\")\n >>> net = tl.layers.EmbeddingInputlayer(\n ... inputs = input_seqs,\n ... vocabulary_size = vocab_size,\n ... embedding_size = embedding_size,\n ... name = 'seq_embedding')\n >>> net = tl.layers.DynamicRNNLayer(net,\n ... cell_fn = tf.contrib.rnn.BasicLSTMCell,\n ... n_hidden = embedding_size,\n ... dropout = (0.7 if is_train else None),\n ... sequence_length = tl.layers.retrieve_seq_length_op2(input_seqs),\n ... return_seq_2d = True,\n ... 
name = 'dynamicrnn')\n >>> print(net.outputs)\n (?, 256)\n >>> net = tl.layers.DenseLayer(net, n_units=vocab_size, name=\"output\")\n >>> print(net.outputs)\n (?, 10000)\n >>> loss = tl.cost.cross_entropy_seq_with_mask(net.outputs, target_seqs, input_mask)"} {"_id": "q_1528", "text": "Max-norm regularization returns a function that can be used to apply max-norm regularization to weights.\n\n More about max-norm, see `wiki-max norm `_.\n The implementation follows `TensorFlow contrib `__.\n\n Parameters\n ----------\n scale : float\n A scalar multiplier `Tensor`. 0.0 disables the regularizer.\n\n Returns\n ---------\n A function with signature `mn(weights, name=None)` that applies max-norm regularization.\n\n Raises\n --------\n ValueError : If scale is outside of the range [0.0, 1.0] or if scale is not a float."} {"_id": "q_1529", "text": "Ramp activation function.\n\n Parameters\n ----------\n x : Tensor\n input.\n v_min : float\n cap input to v_min as a lower bound.\n v_max : float\n cap input to v_max as an upper bound.\n name : str\n The function name (optional).\n\n Returns\n -------\n Tensor\n A ``Tensor`` in the same type as ``x``."} {"_id": "q_1530", "text": "Return the softmax outputs of images; every pixel has multiple labels, and the labels of a pixel sum to 1.\n\n Usually used for image segmentation.\n\n Parameters\n ----------\n x : Tensor\n input.\n - For 2d image, 4D tensor (batch_size, height, width, channel), where channel >= 2.\n - For 3d image, 5D tensor (batch_size, depth, height, width, channel), where channel >= 2.\n name : str\n function name (optional)\n\n Returns\n -------\n Tensor\n A ``Tensor`` in the same type as ``x``.\n\n Examples\n --------\n >>> outputs = pixel_wise_softmax(network.outputs)\n >>> dice_loss = 1 - dice_coe(outputs, y_, epsilon=1e-5)\n\n References\n ----------\n - `tf.reverse `__"} {"_id": "q_1531", "text": "TensorFlow version of np.repeat for 1D tensors."} {"_id": "q_1532", "text": "Batch version of tf_map_coordinates\n\n Only supports 2D 
feature maps\n\n Parameters\n ----------\n inputs : ``tf.Tensor``\n shape = (b*c, h, w)\n coords : ``tf.Tensor``\n shape = (b*c, h, w, n, 2)\n\n Returns\n -------\n ``tf.Tensor``\n A Tensor with the shape as (b*c, h, w, n)"} {"_id": "q_1533", "text": "Batch map offsets into input\n\n Parameters\n ------------\n inputs : ``tf.Tensor``\n shape = (b, h, w, c)\n offsets: ``tf.Tensor``\n shape = (b, h, w, 2*n)\n grid_offset: `tf.Tensor``\n Offset grids shape = (h, w, n, 2)\n\n Returns\n -------\n ``tf.Tensor``\n A Tensor with the shape as (b, h, w, c)"} {"_id": "q_1534", "text": "Generate a generator that input a group of example in numpy.array and\n their labels, return the examples and labels by the given batch size.\n\n Parameters\n ----------\n inputs : numpy.array\n The input features, every row is a example.\n targets : numpy.array\n The labels of inputs, every row is a example.\n batch_size : int\n The batch size.\n allow_dynamic_batch_size: boolean\n Allow the use of the last data batch in case the number of examples is not a multiple of batch_size, this may result in unexpected behaviour if other functions expect a fixed-sized batch-size.\n shuffle : boolean\n Indicating whether to use a shuffling queue, shuffle the dataset before return.\n\n Examples\n --------\n >>> X = np.asarray([['a','a'], ['b','b'], ['c','c'], ['d','d'], ['e','e'], ['f','f']])\n >>> y = np.asarray([0,1,2,3,4,5])\n >>> for batch in tl.iterate.minibatches(inputs=X, targets=y, batch_size=2, shuffle=False):\n >>> print(batch)\n (array([['a', 'a'], ['b', 'b']], dtype='>> db.save_model(net, accuracy=0.8, loss=2.3, name='second_model')\n\n Load one model with parameters from database (run this in other script)\n >>> net = db.find_top_model(sess=sess, accuracy=0.8, loss=2.3)\n\n Find and load the latest model.\n >>> net = db.find_top_model(sess=sess, sort=[(\"time\", pymongo.DESCENDING)])\n >>> net = db.find_top_model(sess=sess, sort=[(\"time\", -1)])\n\n Find and load the oldest model.\n >>> net 
= db.find_top_model(sess=sess, sort=[(\"time\", pymongo.ASCENDING)])\n >>> net = db.find_top_model(sess=sess, sort=[(\"time\", 1)])\n\n Get model information\n >>> net._accuracy\n ... 0.8\n\n Returns\n ---------\n boolean : True for success, False for fail."} {"_id": "q_1536", "text": "Saves one dataset into database, timestamp will be added automatically.\n\n Parameters\n ----------\n dataset : any type\n The dataset you want to store.\n dataset_name : str\n The name of dataset.\n kwargs : other events\n Other events, such as description, author, etc. (optional).\n\n Examples\n ----------\n Save dataset\n >>> db.save_dataset([X_train, y_train, X_test, y_test], 'mnist', description='this is a tutorial')\n\n Get dataset\n >>> dataset = db.find_top_dataset('mnist')\n\n Returns\n ---------\n boolean : Return True if saved successfully, otherwise return False."} {"_id": "q_1537", "text": "Finds and returns a dataset from the database which matches the requirement.\n\n Parameters\n ----------\n dataset_name : str\n The name of dataset.\n sort : List of tuple\n PyMongo sort argument, search \"PyMongo find one sorting\" and `collection level operations `__ for more details.\n kwargs : other events\n Other events, such as description, author, etc. (optional).\n\n Examples\n ---------\n Save dataset\n >>> db.save_dataset([X_train, y_train, X_test, y_test], 'mnist', description='this is a tutorial')\n\n Get dataset\n >>> dataset = db.find_top_dataset('mnist')\n >>> datasets = db.find_datasets('mnist')\n\n Returns\n --------\n dataset : the dataset or False\n Return False if nothing found."} {"_id": "q_1538", "text": "Finds and returns all datasets from the database which match the requirement.\n In some cases, the data in a dataset can be stored separately for better management.\n\n Parameters\n ----------\n dataset_name : str\n The name/key of dataset.\n kwargs : other events\n Other events, such as description, author, etc. (optional).\n\n Returns\n --------\n params : the
parameters, return False if nothing found."} {"_id": "q_1539", "text": "Delete datasets.\n\n Parameters\n -----------\n kwargs : logging information\n Find items to delete, leave it empty to delete all logs."} {"_id": "q_1540", "text": "Saves the validation log, timestamp will be added automatically.\n\n Parameters\n -----------\n kwargs : logging information\n Events, such as accuracy, loss, step number, etc.\n\n Examples\n ---------\n >>> db.save_validation_log(accuracy=0.33, loss=0.98)"} {"_id": "q_1541", "text": "Deletes training log.\n\n Parameters\n -----------\n kwargs : logging information\n Find items to delete, leave it empty to delete all logs.\n\n Examples\n ---------\n Save training log\n >>> db.save_training_log(accuracy=0.33)\n >>> db.save_training_log(accuracy=0.44)\n\n Delete logs that match the requirement\n >>> db.delete_training_log(accuracy=0.33)\n\n Delete all logs\n >>> db.delete_training_log()"} {"_id": "q_1542", "text": "Uploads a task to the database, timestamp will be added automatically.\n\n Parameters\n -----------\n task_name : str\n The task name.\n script : str\n File name of the python script.\n hyper_parameters : dictionary\n The hyper parameters passed into the script.\n saved_result_keys : list of str\n The keys of the task results to keep in the database when the task finishes.\n kwargs : other parameters\n User-customized parameters such as description, version number.\n\n Examples\n -----------\n Uploads a task\n >>> db.create_task(task_name='mnist', script='example/tutorial_mnist_simple.py', description='simple tutorial')\n\n Finds and runs the latest task\n >>> db.run_top_task(sess=sess, sort=[(\"time\", pymongo.DESCENDING)])\n >>> db.run_top_task(sess=sess, sort=[(\"time\", -1)])\n\n Finds and runs the oldest task\n >>> db.run_top_task(sess=sess, sort=[(\"time\", pymongo.ASCENDING)])\n >>> db.run_top_task(sess=sess, sort=[(\"time\", 1)])"} {"_id": "q_1543", "text": "Finds and runs the pending task that is first in the
sorting list.\n\n Parameters\n -----------\n task_name : str\n The task name.\n sort : List of tuple\n PyMongo sort argument, search \"PyMongo find one sorting\" and `collection level operations `__ for more details.\n kwargs : other parameters\n User-customized parameters such as description, version number.\n\n Examples\n ---------\n Monitors the database and pulls tasks to run\n >>> while True:\n >>> print(\"waiting task from distributor\")\n >>> db.run_top_task(task_name='mnist', sort=[(\"time\", -1)])\n >>> time.sleep(1)\n\n Returns\n --------\n boolean : True for success, False for fail."} {"_id": "q_1544", "text": "Delete tasks.\n\n Parameters\n -----------\n kwargs : logging information\n Find items to delete, leave it empty to delete all logs.\n\n Examples\n ---------\n >>> db.delete_tasks()"} {"_id": "q_1545", "text": "Finds and runs a pending task.\n\n Parameters\n -----------\n task_name : str\n The task name.\n kwargs : other parameters\n User-customized parameters such as description, version number.\n\n Examples\n ---------\n Wait until all tasks finish in user's local console\n\n >>> while not db.check_unfinished_task():\n >>> time.sleep(1)\n >>> print(\"all tasks finished\")\n >>> sess = tf.InteractiveSession()\n >>> net = db.find_top_model(sess=sess, sort=[(\"test_accuracy\", -1)])\n >>> print(\"the best accuracy {} is from model {}\".format(net._test_accuracy, net._name))\n\n Returns\n --------\n boolean : True for success, False for fail."} {"_id": "q_1546", "text": "Augment unigram features with hashed n-gram features."} {"_id": "q_1547", "text": "Load IMDb data and augment with hashed n-gram features."} {"_id": "q_1548", "text": "Read one image.\n\n Parameters\n -----------\n image : str\n The image file name.\n path : str\n The image folder path.\n\n Returns\n -------\n numpy.array\n The image."} {"_id": "q_1549", "text": "Returns all images as a list, given the folder path and the name of each image file.\n\n Parameters\n -------------\n img_list : list of
str\n The image file names.\n path : str\n The image folder path.\n n_threads : int\n The number of threads to read images.\n printable : boolean\n Whether to print information when reading images.\n\n Returns\n -------\n list of numpy.array\n The images."} {"_id": "q_1550", "text": "Save an image.\n\n Parameters\n -----------\n image : numpy array\n [w, h, c]\n image_path : str\n path"} {"_id": "q_1551", "text": "Save multiple images into one single image.\n\n Parameters\n -----------\n images : numpy array\n (batch, w, h, c)\n size : list of 2 ints\n row and column number.\n number of images should be equal to or less than size[0] * size[1]\n image_path : str\n save path\n\n Examples\n ---------\n >>> import numpy as np\n >>> import tensorlayer as tl\n >>> images = np.random.rand(64, 100, 100, 3)\n >>> tl.visualize.save_images(images, [8, 8], 'temp.png')"} {"_id": "q_1552", "text": "Draw bboxes and class labels on image. Return or save the image with bboxes, example in the docs of ``tl.prepro``.\n\n Parameters\n -----------\n image : numpy.array\n The RGB image [height, width, channel].\n classes : list of int\n A list of class IDs (int).\n coords : list of int\n A list of lists of coordinates.\n - Should be [x, y, x2, y2] (up-left and bottom-right format)\n - If [x_center, y_center, w, h] (set is_center to True).\n scores : list of float\n A list of scores (float). (Optional)\n classes_list : list of str\n for converting ID to string on image.\n is_center : boolean\n Whether the coordinates are [x_center, y_center, w, h]\n - If coordinates are [x_center, y_center, w, h], set it to True for converting it to [x, y, x2, y2] (up-left and bottom-right) internally.\n - If coordinates are [x1, x2, y1, y2], set it to False.\n is_rescale : boolean\n Whether to rescale the coordinates from pixel-unit format to ratio format.\n - If True, the input coordinates are fractions of the width and height, and this API will scale the coordinates to pixel units internally.\n - If False, feed the coordinates in pixel-unit format.\n save_name : None or str\n The name of image file (e.g. image.png), if None, the image is not saved.\n\n Returns\n -------\n numpy.array\n The saved image.\n\n References\n -----------\n - OpenCV rectangle and putText.\n - `scikit-image `__."} {"_id": "q_1553", "text": "Display a group of RGB or Greyscale CNN masks.\n\n Parameters\n ----------\n CNN : numpy.array\n The image.
e.g: 64 5x5 RGB images can be (5, 5, 3, 64).\n second : int\n The display second(s) for the image(s), if saveable is False.\n saveable : boolean\n Save or plot the figure.\n name : str\n A name to save the image, if saveable is True.\n fig_idx : int\n The matplotlib figure index.\n\n Examples\n --------\n >>> tl.visualize.CNN2d(network.all_params[0].eval(), second=10, saveable=True, name='cnn1_mnist', fig_idx=2012)"} {"_id": "q_1554", "text": "Visualize the embeddings by using t-SNE.\n\n Parameters\n ----------\n embeddings : numpy.array\n The embedding matrix.\n reverse_dictionary : dictionary\n id_to_word, mapping id to unique word.\n plot_only : int\n The number of examples to plot, choosing the most common words.\n second : int\n The display second(s) for the image(s), if saveable is False.\n saveable : boolean\n Save or plot the figure.\n name : str\n A name to save the image, if saveable is True.\n fig_idx : int\n matplotlib figure index.\n\n Examples\n --------\n >>> see 'tutorial_word2vec_basic.py'\n >>> final_embeddings = normalized_embeddings.eval()\n >>> tl.visualize.tsne_embedding(final_embeddings, labels, reverse_dictionary,\n ...
plot_only=500, second=5, saveable=False, name='tsne')"} {"_id": "q_1555", "text": "Visualize every column of the weight matrix as a group of greyscale images.\n\n Parameters\n ----------\n W : numpy.array\n The weight matrix\n second : int\n The display second(s) for the image(s), if saveable is False.\n saveable : boolean\n Save or plot the figure.\n shape : a list with 2 int or None\n The shape of feature image, MNIST is [28, 80].\n name : a string\n A name to save the image, if saveable is True.\n fig_idx : int\n matplotlib figure index.\n\n Examples\n --------\n >>> tl.visualize.draw_weights(network.all_params[0].eval(), second=10, saveable=True, name='weight_of_1st_layer', fig_idx=2012)"} {"_id": "q_1556", "text": "Save data into TFRecord."} {"_id": "q_1557", "text": "Return tensor to read from TFRecord."} {"_id": "q_1558", "text": "Print all info of parameters in the network"} {"_id": "q_1559", "text": "Print all info of layers in the network."} {"_id": "q_1560", "text": "Return the parameters in a list of array."} {"_id": "q_1561", "text": "Get all arguments of current layer for saving the graph."} {"_id": "q_1562", "text": "Prefetches string values from disk into an input queue.\n\n In training the capacity of the queue is important because a larger queue\n means better mixing of training examples between shards. The minimum number of\n values kept in the queue is values_per_shard * input_queue_capacity_factor,\n where input_queue_capacity_factor should be chosen to trade off better mixing\n with memory usage.\n\n Args:\n reader: Instance of tf.ReaderBase.\n file_pattern: Comma-separated list of file patterns (e.g.\n /tmp/train_data-?????-of-00100).\n is_training: Boolean; whether prefetching for training or eval.\n batch_size: Model batch size used to determine queue capacity.\n values_per_shard: Approximate number of values per shard.\n input_queue_capacity_factor: Minimum number of values to keep in the queue\n in multiples of values_per_shard.
See comments above.\n num_reader_threads: Number of reader threads to fill the queue.\n shard_queue_name: Name for the shards filename queue.\n value_queue_name: Name for the values input queue.\n\n Returns:\n A Queue containing prefetched string values."} {"_id": "q_1563", "text": "Batches input images and captions.\n\n This function splits the caption into an input sequence and a target sequence,\n where the target sequence is the input sequence right-shifted by 1. Input and\n target sequences are batched and padded up to the maximum length of sequences\n in the batch. A mask is created to distinguish real words from padding words.\n\n Example:\n Actual captions in the batch ('-' denotes padded character):\n [\n [ 1 2 5 4 5 ],\n [ 1 2 3 4 - ],\n [ 1 2 3 - - ],\n ]\n\n input_seqs:\n [\n [ 1 2 3 4 ],\n [ 1 2 3 - ],\n [ 1 2 - - ],\n ]\n\n target_seqs:\n [\n [ 2 3 4 5 ],\n [ 2 3 4 - ],\n [ 2 3 - - ],\n ]\n\n mask:\n [\n [ 1 1 1 1 ],\n [ 1 1 1 0 ],\n [ 1 1 0 0 ],\n ]\n\n Args:\n images_and_captions: A list of pairs [image, caption], where image is a\n Tensor of shape [height, width, channels] and caption is a 1-D Tensor of\n any length. 
Each pair will be processed and added to the queue in a\n separate thread.\n batch_size: Batch size.\n queue_capacity: Queue capacity.\n add_summaries: If true, add caption length summaries.\n\n Returns:\n images: A Tensor of shape [batch_size, height, width, channels].\n input_seqs: An int32 Tensor of shape [batch_size, padded_length].\n target_seqs: An int32 Tensor of shape [batch_size, padded_length].\n mask: An int32 0/1 Tensor of shape [batch_size, padded_length]."} {"_id": "q_1564", "text": "Data Format aware version of tf.nn.batch_normalization."} {"_id": "q_1565", "text": "Reshapes a high-dimension vector input.\n\n [batch_size, mask_row, mask_col, n_mask] ---> [batch_size, mask_row x mask_col x n_mask]\n\n Parameters\n ----------\n variable : TensorFlow variable or tensor\n The variable or tensor to be flattened.\n name : str\n A unique layer name.\n\n Returns\n -------\n Tensor\n Flattened Tensor\n\n Examples\n --------\n >>> import tensorflow as tf\n >>> import tensorlayer as tl\n >>> x = tf.placeholder(tf.float32, [None, 128, 128, 3])\n >>> # Convolution Layer with 32 filters and a kernel size of 5\n >>> network = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)\n >>> # Max Pooling (down-sampling) with strides of 2 and kernel size of 2\n >>> network = tf.layers.max_pooling2d(network, 2, 2)\n >>> print(network.get_shape()[:].as_list())\n >>> [None, 62, 62, 32]\n >>> network = tl.layers.flatten_reshape(network)\n >>> print(network.get_shape()[:].as_list())\n >>> [None, 123008]"} {"_id": "q_1566", "text": "Get a list of layers' output in a network by a given name scope.\n\n Parameters\n -----------\n net : :class:`Layer`\n The last layer of the network.\n name : str\n Get the layers' output that contain this name.\n verbose : boolean\n If True, print information of all the layers' output\n\n Returns\n --------\n list of Tensor\n A list of layers' output (TensorFlow tensor)\n\n Examples\n ---------\n >>> import tensorlayer as tl\n >>> layers =
tl.layers.get_layers_with_name(net, \"CNN\", True)"} {"_id": "q_1567", "text": "Returns the initialized RNN state.\n The inputs are `LSTMStateTuple` or `State` of `RNNCells`, and an optional `feed_dict`.\n\n Parameters\n ----------\n state : RNN state.\n The TensorFlow's RNN state.\n feed_dict : dictionary\n Initial RNN state; if None, returns zero state.\n\n Returns\n -------\n RNN state\n The TensorFlow's RNN state."} {"_id": "q_1568", "text": "Remove the repeated items in a list, and return the processed list.\n You may need it to create merged layers like Concat, Elementwise, etc.\n\n Parameters\n ----------\n x : list\n Input\n\n Returns\n -------\n list\n The list with its repeated items removed\n\n Examples\n -------\n >>> l = [2, 3, 4, 2, 3]\n >>> l = list_remove_repeat(l)\n [2, 3, 4]"} {"_id": "q_1569", "text": "Ternary operation using a threshold computed with weights."} {"_id": "q_1570", "text": "Adds a deprecation notice to a docstring."} {"_id": "q_1571", "text": "Creates a tensor with all elements set to `alpha_value`.\n This operation returns a tensor of type `dtype` with shape `shape` and all\n elements set to alpha.\n\n Parameters\n ----------\n shape: A list of integers, a tuple of integers, or a 1-D `Tensor` of type `int32`.\n The shape of the desired tensor\n alpha_value: `float32`, `float64`, `int8`, `uint8`, `int16`, `uint16`, `int32`, `int64`\n The value used to fill the resulting `Tensor`.\n name: str\n A name for the operation (optional).\n\n Returns\n -------\n A `Tensor` with all elements set to alpha.\n\n Examples\n --------\n >>> tl.alphas([2, 3], tf.int32) # [[alpha, alpha, alpha], [alpha, alpha, alpha]]"} {"_id": "q_1572", "text": "Return the prediction results of a given non-time-series network.\n\n Parameters\n ----------\n sess : Session\n TensorFlow Session.\n network : TensorLayer layer\n The network.\n X : numpy.array\n The inputs.\n x : placeholder\n For inputs.\n y_op : placeholder\n The argmax expression of softmax outputs.\n
batch_size : int or None\n The batch size for prediction; when the dataset is large, we should use minibatches for prediction;\n if the dataset is small, we can set it to None.\n\n Examples\n --------\n See `tutorial_mnist_simple.py `_\n\n >>> y = network.outputs\n >>> y_op = tf.argmax(tf.nn.softmax(y), 1)\n >>> print(tl.utils.predict(sess, network, X_test, x, y_op))"} {"_id": "q_1573", "text": "Input the predicted results, target results and\n the number of classes, return the confusion matrix, F1-score of each class,\n accuracy and macro F1-score.\n\n Parameters\n ----------\n y_test : list\n The target results\n y_predict : list\n The predicted results\n n_classes : int\n The number of classes\n\n Examples\n --------\n >>> c_mat, f1, acc, f1_macro = tl.utils.evaluation(y_test, y_predict, n_classes)"} {"_id": "q_1574", "text": "Return a list of random integers from the given range and quantity.\n\n Parameters\n -----------\n min_v : number\n The minimum value.\n max_v : number\n The maximum value.\n number : int\n Number of values.\n seed : int or None\n The seed for random.\n\n Examples\n ---------\n >>> r = get_random_int(min_v=0, max_v=10, number=5)\n [10, 2, 3, 3, 7]"} {"_id": "q_1575", "text": "Clears all the placeholder variables of keep prob,\n including keeping probabilities of all dropout, denoising, dropconnect, etc.\n\n Parameters\n ----------\n printable : boolean\n If True, print all deleted variables."} {"_id": "q_1576", "text": "Sample an index from a probability array.\n\n Parameters\n ----------\n a : list of float\n List of probabilities.\n temperature : float or None\n The higher the more uniform.
When a = [0.1, 0.2, 0.7],\n - temperature = 0.7, the distribution will be sharpened [0.05048273, 0.13588945, 0.81362782]\n - temperature = 1.0, the distribution will be the same [0.1, 0.2, 0.7]\n - temperature = 1.5, the distribution will be flattened [0.16008435, 0.25411807, 0.58579758]\n - If None, it will be ``np.argmax(a)``\n\n Notes\n ------\n - No matter what the temperature and input list are, the sum of all probabilities will be one. Even if input list = [1, 100, 200], the sum of all probabilities will still be one.\n - For a large vocabulary size, choose a higher temperature or ``tl.nlp.sample_top`` to avoid error."} {"_id": "q_1577", "text": "Sample from ``top_k`` probabilities.\n\n Parameters\n ----------\n a : list of float\n List of probabilities.\n top_k : int\n Number of candidates to be considered."} {"_id": "q_1578", "text": "Creates the vocabulary of word to word_id.\n\n See ``tutorial_tfrecord3.py``.\n\n The vocabulary is saved to disk in a text file of word counts. The id of each\n word in the file is its corresponding 0-based line number.\n\n Parameters\n ------------\n sentences : list of list of str\n All sentences for creating the vocabulary.\n word_counts_output_file : str\n The file name.\n min_word_count : int\n Minimum number of occurrences for a word.\n\n Returns\n --------\n :class:`SimpleVocabulary`\n The simple vocabulary object, see :class:`Vocabulary` for more.\n\n Examples\n --------\n Pre-process sentences\n\n >>> captions = [\"one two , three\", \"four five five\"]\n >>> processed_capts = []\n >>> for c in captions:\n >>> c = tl.nlp.process_sentence(c, start_word=\"\", end_word=\"\")\n >>> processed_capts.append(c)\n >>> print(processed_capts)\n ...[['', 'one', 'two', ',', 'three', ''], ['', 'four', 'five', 'five', '']]\n\n Create vocabulary\n\n >>> tl.nlp.create_vocab(processed_capts, word_counts_output_file='vocab.txt', min_word_count=1)\n Creating vocabulary.\n Total words: 8\n Words in vocabulary: 8\n Wrote vocabulary file:
vocab.txt\n\n Get vocabulary object\n\n >>> vocab = tl.nlp.Vocabulary('vocab.txt', start_word=\"\", end_word=\"\", unk_word=\"\")\n INFO:tensorflow:Initializing vocabulary from file: vocab.txt\n [TL] Vocabulary from vocab.txt : \n vocabulary with 10 words (includes start_word, end_word, unk_word)\n start_id: 2\n end_id: 3\n unk_id: 9\n pad_id: 0"} {"_id": "q_1579", "text": "Reads through an analogy question file, returning its id format.\n\n Parameters\n ----------\n eval_file : str\n The file name.\n word2id : dictionary\n a dictionary that maps word to ID.\n\n Returns\n --------\n numpy.array\n A ``[n_examples, 4]`` numpy array containing the analogy question's word IDs.\n\n Examples\n ---------\n The file should be in this format\n\n >>> : capital-common-countries\n >>> Athens Greece Baghdad Iraq\n >>> Athens Greece Bangkok Thailand\n >>> Athens Greece Beijing China\n >>> Athens Greece Berlin Germany\n >>> Athens Greece Bern Switzerland\n >>> Athens Greece Cairo Egypt\n >>> Athens Greece Canberra Australia\n >>> Athens Greece Hanoi Vietnam\n >>> Athens Greece Havana Cuba\n\n Get the tokenized analogy question data\n\n >>> words = tl.files.load_matt_mahoney_text8_dataset()\n >>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size, True)\n >>> analogy_questions = tl.nlp.read_analogies_file(eval_file='questions-words.txt', word2id=dictionary)\n >>> print(analogy_questions)\n [[ 3068 1248 7161 1581]\n [ 3068 1248 28683 5642]\n [ 3068 1248 3878 486]\n ...,\n [ 1216 4309 19982 25506]\n [ 1216 4309 3194 8650]\n [ 1216 4309 140 312]]"} {"_id": "q_1580", "text": "Given a dictionary that maps a word to an integer id,\n returns a reverse dictionary that maps an id to a word.\n\n Parameters\n ----------\n word_to_id : dictionary\n that maps word to ID.\n\n Returns\n --------\n dictionary\n A dictionary that maps IDs to words."} {"_id": "q_1581", "text": "Build the words dictionary and replace rare words with 'UNK' token.\n The most common
word has the smallest integer id.\n\n Parameters\n ----------\n words : list of str or byte\n The context in list format. You may need to do preprocessing on the words, such as lower casing and removing marks.\n vocabulary_size : int\n The maximum vocabulary size, limiting the vocabulary size. Then the script replaces rare words with 'UNK' token.\n printable : boolean\n Whether to print the read vocabulary size of the given words.\n unk_key : str\n Represents the unknown words.\n\n Returns\n --------\n data : list of int\n The context in a list of ID.\n count : list of tuple and list\n Pair words and IDs.\n - count[0] is a list : the number of rare words\n - count[1:] are tuples : the number of occurrences of each word\n - e.g. [['UNK', 418391], (b'the', 1061396), (b'of', 593677), (b'and', 416629), (b'one', 411764)]\n dictionary : dictionary\n It is `word_to_id` that maps word to ID.\n reverse_dictionary : a dictionary\n It is `id_to_word` that maps ID to word.\n\n Examples\n --------\n >>> words = tl.files.load_matt_mahoney_text8_dataset()\n >>> vocabulary_size = 50000\n >>> data, count, dictionary, reverse_dictionary = tl.nlp.build_words_dataset(words, vocabulary_size)\n\n References\n -----------------\n - `tensorflow/examples/tutorials/word2vec/word2vec_basic.py `__"} {"_id": "q_1582", "text": "Convert a string to a list of integers representing token-ids.\n\n For example, a sentence \"I have a dog\" may become tokenized into\n [\"I\", \"have\", \"a\", \"dog\"] and with vocabulary {\"I\": 1, \"have\": 2,\n \"a\": 4, \"dog\": 7} this function will return [1, 2, 4, 7].\n\n Parameters\n -----------\n sentence : tensorflow.python.platform.gfile.GFile Object\n The sentence in bytes format to convert to token-ids, see ``basic_tokenizer()`` and ``data_to_token_ids()``.\n vocabulary : dictionary\n Mapping tokens to integers.\n tokenizer : function\n A function to use to tokenize each sentence.
If None, ``basic_tokenizer`` will be used.\n normalize_digits : boolean\n If true, all digits are replaced by 0.\n\n Returns\n --------\n list of int\n The token-ids for the sentence."} {"_id": "q_1583", "text": "Calculate the bleu score for hypotheses and references\n using the MOSES multi-bleu.perl script.\n\n Parameters\n ------------\n hypotheses : numpy.array.string\n A numpy array of strings where each string is a single example.\n references : numpy.array.string\n A numpy array of strings where each string is a single example.\n lowercase : boolean\n If True, pass the \"-lc\" flag to the multi-bleu script\n\n Examples\n ---------\n >>> hypotheses = [\"a bird is flying on the sky\"]\n >>> references = [\"two birds are flying on the sky\", \"a bird is on the top of the tree\", \"an airplane is on the sky\",]\n >>> score = tl.nlp.moses_multi_bleu(hypotheses, references)\n\n Returns\n --------\n float\n The BLEU score\n\n References\n ----------\n - `Google/seq2seq/metric/bleu `__"} {"_id": "q_1584", "text": "Returns the integer id of a word string."} {"_id": "q_1585", "text": "Returns the integer word id of a word string."} {"_id": "q_1586", "text": "Returns the word string of an integer word id."} {"_id": "q_1587", "text": "Enable the diagnostic feature for debugging unexpected concurrency in\n acquiring ConnectionWrapper instances.\n\n NOTE: This MUST be done early in your application's execution, BEFORE any\n accesses to ConnectionFactory or connection policies from your application\n (including imports and sub-imports of your app).\n\n Parameters:\n ----------------------------------------------------------------\n maxConcurrency: A non-negative integer that represents the maximum expected\n number of outstanding connections.
When this value is\n exceeded, useful information will be logged and, depending\n on the value of the raiseException arg,\n ConcurrencyExceededError may be raised.\n raiseException: If true, ConcurrencyExceededError will be raised when\n maxConcurrency is exceeded."} {"_id": "q_1588", "text": "Check for concurrency violation and add self to\n _clsOutstandingInstances.\n\n ASSUMPTION: Called from constructor BEFORE _clsNumOutstanding is\n incremented"} {"_id": "q_1589", "text": "Close the policy instance and its database connection pool."} {"_id": "q_1590", "text": "Get a connection from the pool.\n\n Parameters:\n ----------------------------------------------------------------\n retval: A ConnectionWrapper instance. NOTE: Caller\n is responsible for calling the ConnectionWrapper\n instance's release() method or use it in a context manager\n expression (with ... as:) to release resources."} {"_id": "q_1591", "text": "Create a Connection instance.\n\n Parameters:\n ----------------------------------------------------------------\n retval: A ConnectionWrapper instance. NOTE: Caller\n is responsible for calling the ConnectionWrapper\n instance's release() method or use it in a context manager\n expression (with ... as:) to release resources."} {"_id": "q_1592", "text": "Release database connection and cursor; passed as a callback to\n ConnectionWrapper"} {"_id": "q_1593", "text": "Reclassifies given state."} {"_id": "q_1594", "text": "Removes the given records from the classifier.\n\n parameters\n ------------\n recordsToDelete - list of records to delete from the classifier"} {"_id": "q_1595", "text": "Removes any stored records within the range from start to\n end.
Noninclusive of end.\n\n parameters\n ------------\n start - integer representing the ROWID of the start of the deletion range,\n end - integer representing the ROWID of the end of the deletion range,\n if None, it will default to end."} {"_id": "q_1596", "text": "Since the KNN Classifier stores categories as numbers, we must store each\n label as a number. This method converts from a label to a unique number.\n Each label is assigned a unique bit so multiple labels may be assigned to\n a single record."} {"_id": "q_1597", "text": "Converts a category number into a list of labels"} {"_id": "q_1598", "text": "Returns a state's anomaly vector, converting it from sparse to dense"} {"_id": "q_1599", "text": "Get the labels on classified points within range start to end. Not inclusive\n of end.\n\n :returns: (dict) with format:\n\n ::\n\n {\n 'isProcessing': boolean,\n 'recordLabels': list of results\n }\n\n ``isProcessing`` - currently always false as recalculation blocks; used if\n reprocessing of records is still being performed;\n\n Each item in ``recordLabels`` is of format:\n \n ::\n \n {\n 'ROWID': id of the row,\n 'labels': list of strings\n }"} {"_id": "q_1600", "text": "Remove labels from each record with record ROWID in range from\n ``start`` to ``end``, noninclusive of end. Removes all records if \n ``labelFilter`` is None, otherwise only removes the labels equal to \n ``labelFilter``.\n\n This will recalculate all points from end to the last record stored in the\n internal cache of this classifier.\n \n :param start: (int) start index \n :param end: (int) end index (noninclusive)\n :param labelFilter: (string) label filter"} {"_id": "q_1601", "text": "Returns True if the record matches any of the provided filters"} {"_id": "q_1602", "text": "Removes the set of columns that have never been active from the set of\n active columns selected in the inhibition round.
Such columns cannot\n represent learned patterns and are therefore meaningless if only inference\n is required. This should not be done when using a random, unlearned SP\n since you would end up with no active columns.\n\n :param activeArray: An array whose size is equal to the number of columns.\n Any columns marked as active with an activeDutyCycle of 0 have\n never been activated before and therefore are not active due to\n learning. Any of these (unlearned) columns will be disabled (set to 0)."} {"_id": "q_1603", "text": "Updates the minimum duty cycles defining normal activity for a column. A\n column with activity duty cycle below this minimum threshold is boosted."} {"_id": "q_1604", "text": "Updates the minimum duty cycles in a global fashion. Sets the minimum duty\n cycles for the overlap of all columns to be a percent of the maximum in the\n region, specified by minPctOverlapDutyCycle. Functionally it is equivalent\n to _updateMinDutyCyclesLocal, but this function exploits the globality of\n the computation to perform it in a straightforward, and efficient manner."} {"_id": "q_1605", "text": "Updates the minimum duty cycles. The minimum duty cycles are determined\n locally. Each column's minimum duty cycles are set to be a percent of the\n maximum duty cycles in the column's neighborhood. Unlike\n _updateMinDutyCyclesGlobal, here the values can be quite different for\n different columns."} {"_id": "q_1606", "text": "Updates the duty cycles for each column. The OVERLAP duty cycle is a moving\n average of the number of inputs which overlapped with each column.
The\n ACTIVITY duty cycle is a moving average of the frequency of activation for\n each column.\n\n Parameters:\n ----------------------------\n :param overlaps:\n An array containing the overlap score for each column.\n The overlap score for a column is defined as the number\n of synapses in a \"connected state\" (connected synapses)\n that are connected to input bits which are turned on.\n :param activeColumns:\n An array containing the indices of the active columns,\n the sparse set of columns which survived inhibition"} {"_id": "q_1607", "text": "The average number of columns per input, taking into account the topology\n of the inputs and columns. This value is used to calculate the inhibition\n radius. This function supports an arbitrary number of dimensions. If the\n number of column dimensions does not match the number of input dimensions,\n we treat the missing, or phantom dimensions as 'ones'."} {"_id": "q_1608", "text": "The range of connected synapses for a column. This is used to\n calculate the inhibition radius. This variation of the function only\n supports a 1 dimensional column topology.\n\n Parameters:\n ----------------------------\n :param columnIndex: The index identifying a column in the permanence,\n potential and connectivity matrices"} {"_id": "q_1609", "text": "The range of connectedSynapses per column, averaged for each dimension.\n This value is used to calculate the inhibition radius. This variation of\n the function only supports a 2 dimensional column topology.\n\n Parameters:\n ----------------------------\n :param columnIndex: The index identifying a column in the permanence,\n potential and connectivity matrices"} {"_id": "q_1610", "text": "This method ensures that each column has enough connections to input bits\n to allow it to become active.
Since a column must have at least\n 'self._stimulusThreshold' overlaps in order to be considered during the\n inhibition phase, columns without such a minimal number of connections, even\n if all the input bits they are connected to turn on, have no chance of\n obtaining the minimum threshold. For such columns, the permanence values\n are increased until the minimum number of connections is formed.\n\n\n Parameters:\n ----------------------------\n :param perm: An array of permanence values for a column. The array is\n \"dense\", i.e. it contains an entry for each input bit, even\n if the permanence value is 0.\n :param mask: the indices of the columns whose permanences need to be\n raised."} {"_id": "q_1611", "text": "Returns a randomly generated permanence value for a synapse that is to be\n initialized in a non-connected state."} {"_id": "q_1612", "text": "Initializes the permanences of a column. The method\n returns a 1-D array the size of the input, where each entry in the\n array represents the initial permanence value between the input bit\n at the particular index in the array, and the column represented by\n the 'index' parameter.\n\n Parameters:\n ----------------------------\n :param potential: A numpy array specifying the potential pool of the column.\n Permanence values will only be generated for input bits\n corresponding to indices for which the mask value is 1.\n :param connectedPct: A value between 0 and 1 governing the chance, for each\n permanence, that the initial permanence value will\n be a value that is considered connected."} {"_id": "q_1613", "text": "Update boost factors when global inhibition is used"} {"_id": "q_1614", "text": "Performs inhibition. 
This method calculates the necessary values needed to\n actually perform inhibition and then delegates the task of picking the\n active columns to helper functions.\n\n Parameters:\n ----------------------------\n :param overlaps: an array containing the overlap score for each column.\n The overlap score for a column is defined as the number\n of synapses in a \"connected state\" (connected synapses)\n that are connected to input bits which are turned on."} {"_id": "q_1615", "text": "Gets a neighborhood of columns.\n\n Simply calls topology.neighborhood or topology.wrappingNeighborhood\n\n A subclass can insert different topology behavior by overriding this method.\n\n :param centerColumn (int)\n The center of the neighborhood.\n\n @returns (1D numpy array of integers)\n The columns in the neighborhood."} {"_id": "q_1616", "text": "Gets a neighborhood of inputs.\n\n Simply calls topology.wrappingNeighborhood or topology.neighborhood.\n\n A subclass can insert different topology behavior by overriding this method.\n\n :param centerInput (int)\n The center of the neighborhood.\n\n @returns (1D numpy array of integers)\n The inputs in the neighborhood."} {"_id": "q_1617", "text": "Factory function that creates typed Array or ArrayRef objects\n\n dtype - the data type of the array (as string).\n Supported types are: Byte, Int16, UInt16, Int32, UInt32, Int64, UInt64, Real32, Real64\n\n size - the size of the array. Must be a positive integer."} {"_id": "q_1618", "text": "Get parameter value"} {"_id": "q_1619", "text": "Set parameter value"} {"_id": "q_1620", "text": "Get the collection of regions in a network\n\n This is a tricky one. The collection of regions returned from\n the internal network is a collection of internal regions.\n The desired collection is a collection of net.Region objects\n that also point to this network (net.network) and not to\n the internal network. 
To achieve that, a CollectionWrapper\n class is used with a custom makeRegion() function (see below)\n as a value wrapper. The CollectionWrapper class wraps each value in the\n original collection with the result of the valueWrapper."} {"_id": "q_1621", "text": "Write state to proto object.\n\n :param proto: SDRClassifierRegionProto capnproto object"} {"_id": "q_1622", "text": "Read state from proto object.\n\n :param proto: SDRClassifierRegionProto capnproto object"} {"_id": "q_1623", "text": "Runs the OPF Model\n\n Parameters:\n -------------------------------------------------------------------------\n retval: (completionReason, completionMsg)\n where completionReason is one of the ClientJobsDAO.CMPL_REASON_XXX\n equates."} {"_id": "q_1624", "text": "Run final activities after a model has run. These include recording and\n logging the final score"} {"_id": "q_1625", "text": "Create a checkpoint from the current model, and store it in a dir named\n after the checkpoint GUID, and finally store the GUID in the Models DB"} {"_id": "q_1626", "text": "Delete the stored checkpoint for the specified modelID. This function is\n called if the current model is now the best model, making the old model's\n checkpoint obsolete\n\n Parameters:\n -----------------------------------------------------------------------\n modelID: The modelID for the checkpoint to delete. This is NOT the\n unique checkpointID"} {"_id": "q_1627", "text": "Writes the results of one iteration of a model. The results are written to\n this ModelRunner's in-memory cache unless this model is the \"best model\" for\n the job. 
If this model is the \"best model\", the predictions are written out\n to a permanent store via a prediction output stream instance\n\n\n Parameters:\n -----------------------------------------------------------------------\n result: An opf_utils.ModelResult object, which contains the input and\n output for this iteration"} {"_id": "q_1628", "text": "Deletes the output cache associated with the given modelID. This actually\n clears up the resources associated with the cache, rather than deleting all\n the records in the cache\n\n Parameters:\n -----------------------------------------------------------------------\n modelID: The id of the model whose output cache is being deleted"} {"_id": "q_1629", "text": "Creates and returns a PeriodicActivityMgr instance initialized with\n our periodic activities\n\n Parameters:\n -------------------------------------------------------------------------\n retval: a PeriodicActivityMgr instance"} {"_id": "q_1630", "text": "Check if the cancelation flag has been set for this model\n in the Model DB"} {"_id": "q_1631", "text": "Save the current metric value and see if the model's performance has\n 'leveled off.' 
We do this by looking at some number of previous\n recordings"} {"_id": "q_1632", "text": "Set our state to that obtained from the engWorkerState field of the\n job record.\n\n\n Parameters:\n ---------------------------------------------------------------------\n stateJSON: JSON encoded state from job record"} {"_id": "q_1633", "text": "Return the list of all swarms in the given sprint.\n\n Parameters:\n ---------------------------------------------------------------------\n retval: list of active swarm Ids in the given sprint"} {"_id": "q_1634", "text": "Return the list of all completing swarms.\n\n Parameters:\n ---------------------------------------------------------------------\n retval: list of active swarm Ids"} {"_id": "q_1635", "text": "Return True if the given sprint has completed."} {"_id": "q_1636", "text": "Convert the information of the node spec to a plain dict of basic types\n\n The description and singleNodeOnly attributes are placed directly in\n the result dicts. The inputs, outputs, parameters and commands dicts\n contain Spec item objects (InputSpec, OutputSpec, etc). 
Each such object\n is also converted to a plain dict using the internal items2dict() function\n (see below)."} {"_id": "q_1637", "text": "Create the encoder instance for our test and return it."} {"_id": "q_1638", "text": "Validates control dictionary for the experiment context"} {"_id": "q_1639", "text": "Extract all items from the 'allKeys' list whose key matches one of the regular\n expressions passed in 'reportKeys'.\n\n Parameters:\n ----------------------------------------------------------------------------\n reportKeyREs: List of regular expressions\n allReportKeys: List of all keys\n\n retval: list of keys from allReportKeys that match the regular expressions\n in 'reportKeyREs'\n If an invalid regular expression was included in 'reportKeys',\n then BadKeyError() is raised"} {"_id": "q_1640", "text": "Get a specific item by name out of the results dict.\n\n The format of itemName is a string of dictionary keys separated by colons,\n each key being one level deeper into the results dict. 
For example,\n 'key1:key2' would fetch results['key1']['key2'].\n\n If itemName is not found in results, then None is returned"} {"_id": "q_1641", "text": "Perform standard handling of an exception that occurs while running\n a model.\n\n Parameters:\n -------------------------------------------------------------------------\n jobID: ID for this hypersearch job in the jobs table\n modelID: model ID\n jobsDAO: ClientJobsDAO instance\n experimentDir: directory containing the experiment\n logger: the logger to use\n e: the exception that occurred\n retval: (completionReason, completionMsg)"} {"_id": "q_1642", "text": "This creates an experiment directory with a base.py description file\n created from 'baseDescription' and a description.py generated from the\n given params dict and then runs the experiment.\n\n Parameters:\n -------------------------------------------------------------------------\n modelID: ID for this model in the models table\n jobID: ID for this hypersearch job in the jobs table\n baseDescription: Contents of a description.py with the base experiment\n description\n params: Dictionary of specific parameters to override within\n the baseDescriptionFile.\n predictedField: Name of the input field for which this model is being\n optimized\n reportKeys: Which metrics of the experiment to store into the\n results dict of the model's database entry\n optimizeKey: Which metric we are optimizing for\n jobsDAO: Jobs data access object - the interface to the\n jobs database which has the model's table.\n modelCheckpointGUID: A persistent, globally-unique identifier for\n constructing the model checkpoint key\n logLevel: override logging level to this value, if not None\n\n retval: (completionReason, completionMsg)"} {"_id": "q_1643", "text": "Recursively copies a dict and returns the result.\n\n Args:\n d: The dict to copy.\n f: A function to apply to values when copying that takes the value and the\n list of keys from the root of the dict to the value and 
returns a value\n for the new dict.\n discardNoneKeys: If True, discard key-value pairs when f returns None for\n the value.\n deepCopy: If True, all values in returned dict are true copies (not the\n same object).\n Returns:\n A new dict with keys and values from d replaced with the result of f."} {"_id": "q_1644", "text": "Recursively applies f to the values in dict d.\n\n Args:\n d: The dict to recurse over.\n f: A function to apply to values in d that takes the value and a list of\n keys from the root of the dict to the value."} {"_id": "q_1645", "text": "Return a clipped version of obj suitable for printing. This\n is useful when generating log messages by printing data structures, but you\n don't want the message to be too long.\n\n If passed in a dict, list, or namedtuple, each element of the structure's\n string representation will be limited to 'maxElementSize' characters. This\n will return a new object where the string representation of each element\n has been truncated to fit within maxElementSize."} {"_id": "q_1646", "text": "Loads a json value from a file and converts it to the corresponding python\n object.\n\n inputFilePath:\n Path of the json file;\n\n Returns:\n python value that represents the loaded json value"} {"_id": "q_1647", "text": "Recursively updates the values in original with the values from updates."} {"_id": "q_1648", "text": "Compares two python dictionaries at the top level and reports differences,\n if any, to stdout\n\n da: first dictionary\n db: second dictionary\n\n Returns: The same value as returned by dictDiff() for the given args"} {"_id": "q_1649", "text": "Given model params, figure out the correct resolution for the\n RandomDistributed encoder. Modifies params in place."} {"_id": "q_1650", "text": "Remove labels from each record with record ROWID in range from\n start to end, noninclusive of end. 
Removes all records if labelFilter is\n None, otherwise only removes the labels equal to labelFilter.\n\n This will recalculate all points from end to the last record stored in the\n internal cache of this classifier."} {"_id": "q_1651", "text": "This method will remove the given records from the classifier.\n\n parameters\n ------------\n recordsToDelete - list of records to delete from the classifier"} {"_id": "q_1652", "text": "Construct a _HTMClassificationRecord based on the current state of the\n htm_prediction_model of this classifier.\n\n ***This will look into the internals of the model and may depend on the\n SP, TM, and KNNClassifier***"} {"_id": "q_1653", "text": "Sets the autoDetectWaitRecords."} {"_id": "q_1654", "text": "Run one iteration, profiling it if requested.\n\n :param inputs: (dict) mapping region input names to numpy.array values\n :param outputs: (dict) mapping region output names to numpy.arrays that \n should be populated with output values by this method"} {"_id": "q_1655", "text": "Run one iteration of SPRegion's compute"} {"_id": "q_1656", "text": "Figure out whether reset, sequenceId,\n both or neither are present in the data.\n Compute once instead of every time.\n\n Taken from filesource.py"} {"_id": "q_1657", "text": "Get the default arguments from the function and assign as instance vars.\n\n Return a list of 3-tuples with (name, description, defaultValue) for each\n argument to the function.\n\n Assigns all arguments to the function as instance variables of TMRegion.\n If the argument was not provided, uses the default value.\n\n Pops any values from kwargs that go to the function."} {"_id": "q_1658", "text": "Run one iteration of TMRegion's compute"} {"_id": "q_1659", "text": "Perform an internal optimization step that speeds up inference if we know\n learning will not be performed anymore. 
This call may, for example, remove\n all potential inputs to each column."} {"_id": "q_1660", "text": "Computes the raw anomaly score.\n\n The raw anomaly score is the fraction of active columns not predicted.\n\n :param activeColumns: array of active column indices\n :param prevPredictedColumns: array of column indices predicted in prev step\n :returns: anomaly score 0..1 (float)"} {"_id": "q_1661", "text": "Compute the anomaly score as the percent of active columns not predicted.\n\n :param activeColumns: array of active column indices\n :param predictedColumns: array of column indices predicted in this step\n (used for anomaly in step T+1)\n :param inputValue: (optional) value of current input to encoders\n (eg \"cat\" for category encoder)\n (used in anomaly-likelihood)\n :param timestamp: (optional) date timestamp when the sample occurred\n (used in anomaly-likelihood)\n :returns: the computed anomaly score; float 0..1"} {"_id": "q_1662", "text": "Adds an image to the plot's figure.\n\n @param data a 2D array. See matplotlib.Axes.imshow documentation.\n @param position A 3-digit number. The first two digits define a 2D grid\n where subplots may be added. The final digit specifies the nth grid\n location for the added subplot\n @param xlabel text to be displayed on the x-axis\n @param ylabel text to be displayed on the y-axis\n @param cmap color map used in the rendering\n @param aspect how aspect ratio is handled during resize\n @param interpolation interpolation method"} {"_id": "q_1663", "text": "Adds a subplot to the plot's figure at specified position.\n\n @param position A 3-digit number. The first two digits define a 2D grid\n where subplots may be added. 
The final digit specifies the nth grid\n location for the added subplot\n @param xlabel text to be displayed on the x-axis\n @param ylabel text to be displayed on the y-axis\n @returns (matplotlib.Axes) Axes instance"} {"_id": "q_1664", "text": "Get version from local file."} {"_id": "q_1665", "text": "Make an attempt to determine if a pre-release version of nupic.bindings is\n installed already.\n\n @return: boolean"} {"_id": "q_1666", "text": "Read the requirements.txt file and parse into requirements for setup's\n install_requirements option."} {"_id": "q_1667", "text": "Generates the string representation of a MetricSpec object, and returns\n the metric key associated with the metric.\n\n\n Parameters:\n -----------------------------------------------------------------------\n inferenceElement:\n An InferenceElement value that indicates which part of the inference this\n metric is computed on\n\n metric:\n The type of the metric being computed (e.g. aae, avg_error)\n\n params:\n A dictionary of parameters for the metric. The keys are the parameter names\n and the values should be the parameter values (e.g. window=200)\n\n field:\n The name of the field for which this metric is being computed\n\n returnLabel:\n If True, returns the label of the MetricSpec that was generated"} {"_id": "q_1668", "text": "Generates a file by applying token replacements to the given template\n file\n\n templateFileName:\n A list of template file names; these files are assumed to be in\n the same directory as the running experiment_generator.py script.\n ExpGenerator will perform the substitution and concatenate\n the files in the order they are specified\n\n outputFilePath: Absolute path of the output file\n\n replacementDict:\n A dictionary of token/replacement pairs"} {"_id": "q_1669", "text": "Returns the experiment description schema. 
This implementation loads it in\n from file experimentDescriptionSchema.json.\n\n Parameters:\n --------------------------------------------------------------------------\n Returns: returns a dict representing the experiment description schema."} {"_id": "q_1670", "text": "Generates the non-default metrics specified by the expGenerator params"} {"_id": "q_1671", "text": "Generates the token substitutions related to the predicted field\n and the supplemental arguments for prediction"} {"_id": "q_1672", "text": "Parses, validates, and executes command-line options;\n\n On success: Performs requested operation and exits program normally\n\n On Error: Dumps exception/error info in JSON format to stdout and exits the\n program with non-zero status."} {"_id": "q_1673", "text": "Parses a textual datetime format and returns a Python datetime object.\n\n The supported format is: ``yyyy-mm-dd h:m:s.ms``\n\n The time component is optional.\n\n - hours are 00..23 (no AM/PM)\n - minutes are 00..59\n - seconds are 00..59\n - micro-seconds are 000000..999999\n\n :param s: (string) input time text\n :return: (datetime.datetime)"} {"_id": "q_1674", "text": "Translate an index into coordinates, using the given coordinate system.\n\n Similar to ``numpy.unravel_index``.\n\n :param index: (int) The index of the point. The coordinates are expressed as a \n single index by using the dimensions as a mixed radix definition. For \n example, in dimensions 42x10, the point [1, 4] is index \n 1*10 + 4 = 14.\n\n :param dimensions: (list of ints) The coordinate system.\n\n :returns: (list) of coordinates of length ``len(dimensions)``."} {"_id": "q_1675", "text": "Translate coordinates into an index, using the given coordinate system.\n\n Similar to ``numpy.ravel_multi_index``.\n\n :param coordinates: (list of ints) A list of coordinates of length \n ``dimensions.size()``.\n\n :param dimensions: (list of ints) The coordinate system.\n\n :returns: (int) The index of the point. 
The coordinates are expressed as a \n single index by using the dimensions as a mixed radix definition. \n For example, in dimensions 42x10, the point [1, 4] is index \n 1*10 + 4 = 14."} {"_id": "q_1676", "text": "Get the points in the neighborhood of a point.\n\n A point's neighborhood is the n-dimensional hypercube with sides ranging\n [center - radius, center + radius], inclusive. For example, if there are two\n dimensions and the radius is 3, the neighborhood is 7x7. Neighborhoods are\n truncated when they are near an edge.\n\n This is designed to be fast. In C++ it's fastest to iterate through neighbors\n one by one, calculating them on-demand rather than creating a list of them.\n But in Python it's faster to build up the whole list in batch via a few calls\n to C code rather than calculating them on-demand with lots of calls to Python\n code.\n\n :param centerIndex: (int) The index of the point. The coordinates are \n expressed as a single index by using the dimensions as a mixed radix \n definition. 
For example, in dimensions 42x10, the point [1, 4] is index \n 1*10 + 4 = 14.\n\n :param radius: (int) The radius of this neighborhood about the \n ``centerIndex``.\n\n :param dimensions: (indexable sequence) The dimensions of the world outside \n this neighborhood.\n\n :returns: (numpy array) The points in the neighborhood, including \n ``centerIndex``."} {"_id": "q_1677", "text": "Returns coordinates around given coordinate, within given radius.\n Includes given coordinate.\n\n @param coordinate (numpy.array) N-dimensional integer coordinate\n @param radius (int) Radius around `coordinate`\n\n @return (numpy.array) List of coordinates"} {"_id": "q_1678", "text": "Returns the top W coordinates by order.\n\n @param coordinates (numpy.array) A 2D numpy array, where each element\n is a coordinate\n @param w (int) Number of top coordinates to return\n @return (numpy.array) A subset of `coordinates`, containing only the\n top ones by order"} {"_id": "q_1679", "text": "Hash a coordinate to a 64 bit integer."} {"_id": "q_1680", "text": "Maps the coordinate to a bit in the SDR.\n\n @param coordinate (numpy.array) Coordinate\n @param n (int) The number of available bits in the SDR\n @return (int) The index to a bit in the SDR"} {"_id": "q_1681", "text": "Function for running binary search on a sorted list.\n\n :param arr: (list) a sorted list of integers to search\n :param val: (int) an integer to search for in the sorted array\n :returns: (int) the index of the element if it is found and -1 otherwise."} {"_id": "q_1682", "text": "Adds a new segment on a cell.\n\n :param cell: (int) Cell index\n :returns: (int) New segment index"} {"_id": "q_1683", "text": "Destroys a segment.\n\n :param segment: (:class:`Segment`) representing the segment to be destroyed."} {"_id": "q_1684", "text": "Creates a new synapse on a segment.\n\n :param segment: (:class:`Segment`) Segment object for synapse to be synapsed \n to.\n :param presynapticCell: (int) Source cell index.\n :param 
permanence: (float) Initial permanence of synapse.\n :returns: (:class:`Synapse`) created synapse"} {"_id": "q_1685", "text": "Destroys a synapse.\n\n :param synapse: (:class:`Synapse`) synapse to destroy"} {"_id": "q_1686", "text": "Compute each segment's number of active synapses for a given input.\n In the returned lists, a segment's active synapse count is stored at index\n ``segment.flatIdx``.\n\n :param activePresynapticCells: (iter) Active cells.\n :param connectedPermanence: (float) Permanence threshold for a synapse to be \n considered connected\n\n :returns: (tuple) (``numActiveConnectedSynapsesForSegment`` [list],\n ``numActivePotentialSynapsesForSegment`` [list])"} {"_id": "q_1687", "text": "Returns the number of segments.\n\n :param cell: (int) Optional parameter to get the number of segments on a \n cell.\n :returns: (int) Number of segments on all cells if cell is not specified, or \n on the specified cell"} {"_id": "q_1688", "text": "Reads deserialized data from proto object\n\n :param proto: (DynamicStructBuilder) Proto object\n\n :returns: (:class:`Connections`) instance"} {"_id": "q_1689", "text": "Retrieve the requested property as a string. If property does not exist,\n then KeyError will be raised.\n\n :param prop: (string) name of the property\n :raises: KeyError\n :returns: (string) property value"} {"_id": "q_1690", "text": "Retrieve the requested property and return it as a bool. If property\n does not exist, then KeyError will be raised. 
If the property value is\n neither 0 nor 1, then ValueError will be raised\n\n :param prop: (string) name of the property\n :raises: KeyError, ValueError\n :returns: (bool) property value"} {"_id": "q_1691", "text": "Set the value of the given configuration property.\n\n :param prop: (string) name of the property\n :param value: (object) value to set"} {"_id": "q_1692", "text": "Return a dict containing all of the configuration properties\n\n :returns: (dict) containing all configuration properties."} {"_id": "q_1693", "text": "Parse the given XML file and store all properties it describes.\n\n :param filename: (string) name of XML file to parse (no path)\n :param path: (string) path of the XML file. If None, then use the standard\n configuration search path."} {"_id": "q_1694", "text": "Return the list of paths to search for configuration files.\n\n :returns: (list) of paths"} {"_id": "q_1695", "text": "Generate a list of random sparse distributed vectors. This is used to generate\n training vectors for the spatial or temporal learner and to compare the predicted\n output against.\n\n It generates a list of 'numVectors' elements; each element has length 'length'\n and has a total of 'activity' bits on.\n\n Parameters:\n -----------------------------------------------\n numVectors: the number of vectors to generate\n length: the length of each row\n activity: the number of ones to put into each row."} {"_id": "q_1696", "text": "Generate a set of simple sequences. The elements of the sequences will be\n integers from 0 to 'nCoinc'-1. The length of each sequence will be\n randomly chosen from the 'seqLength' list.\n\n Parameters:\n -----------------------------------------------\n nCoinc: the number of elements available to use in the sequences\n seqLength: a list of possible sequence lengths. The length of each\n sequence will be randomly chosen from here.\n nSeq: The number of sequences to generate\n\n retval: a list of sequences. 
Each sequence is itself a list\n containing the coincidence indices for that sequence."} {"_id": "q_1697", "text": "Generate a set of hub sequences. These are sequences which contain a hub\n element in the middle. The elements of the sequences will be integers\n from 0 to 'nCoinc'-1. The hub elements will only appear in the middle of\n each sequence. The length of each sequence will be randomly chosen from the\n 'seqLength' list.\n\n Parameters:\n -----------------------------------------------\n nCoinc: the number of elements available to use in the sequences\n hubs: which of the elements will be used as hubs.\n seqLength: a list of possible sequence lengths. The length of each\n sequence will be randomly chosen from here.\n nSeq: The number of sequences to generate\n\n retval: a list of sequences. Each sequence is itself a list\n containing the coincidence indices for that sequence."} {"_id": "q_1698", "text": "Generate a non-overlapping coincidence matrix. This is used to generate random\n inputs to the temporal learner and to compare the predicted output against.\n\n It generates a matrix of nCoinc rows; each row has length 'length' and has\n a total of 'activity' bits on.\n\n Parameters:\n -----------------------------------------------\n nCoinc: the number of rows to generate\n length: the length of each row\n activity: the number of ones to put into each row."} {"_id": "q_1699", "text": "Function that compares two spatial pooler instances. 
Compares the\n static variables between the two poolers to make sure that they are equivalent.\n\n Parameters\n -----------------------------------------\n SP1 first spatial pooler to be compared\n\n SP2 second spatial pooler to be compared\n\n To establish equality, this function does the following:\n\n 1. Compares the connected synapse matrices for each coincidence\n\n 2. Compares the potential synapse matrices for each coincidence\n\n 3. Compares the permanence matrices for each coincidence\n\n 4. Compares the firing boosts between the two poolers.\n\n 5. Compares the duty cycles before and after inhibition for both poolers"} {"_id": "q_1700", "text": "Accumulate a list of values 'values' into the frequency counts 'freqCounts',\n and return the updated frequency counts\n\n For example, if values contained the following: [1,1,3,5,1,3,5], and the initial\n freqCounts was None, then the return value would be:\n [0,3,0,2,0,2]\n which corresponds to how many of each value we saw in the input, i.e. there\n were 0 0's, 3 1's, 0 2's, 2 3's, 0 4's, and 2 5's.\n\n If freqCounts is not None, the values will be added to the existing counts and\n the length of the frequency counts will be automatically extended as necessary\n\n Parameters:\n -----------------------------------------------\n values: The values to accumulate into the frequency counts\n freqCounts: Accumulated frequency counts so far, or None"} {"_id": "q_1701", "text": "Helper function used by averageOnTimePerTimestep. 'durations' is a vector\n which must be the same length as 'vector'. 
For each \"on\" in vector, it fills in\n the corresponding element of duration with the duration of that \"on\" signal\n up until that time\n\n Parameters:\n -----------------------------------------------\n vector: vector of output values over time\n durations: vector same length as 'vector', initialized to 0's.\n This is filled in with the durations of each \"on\" signal.\n\n Example:\n vector: 11100000001100000000011111100000\n durations: 12300000001200000000012345600000"} {"_id": "q_1702", "text": "Computes the average on-time of the outputs that are on at each time step, and\n then averages this over all time steps.\n\n This metric is resilient to the number of outputs that are on at each time\n step. That is, if time step 0 has many more outputs on than time step 100, it\n won't skew the results. This is particularly useful when measuring the\n average on-time of things like the temporal memory output where you might\n have many columns bursting at the start of a sequence - you don't want those\n start of sequence bursts to over-influence the calculated average on-time.\n\n Parameters:\n -----------------------------------------------\n vectors: the vectors for which the onTime is calculated. Row 0\n contains the outputs from time step 0, row 1 from time step\n 1, etc.\n numSamples: the number of elements for which on-time is calculated.\n If not specified, then all elements are looked at.\n\n Returns: (scalar average on-time over all time steps,\n list containing frequency counts of each encountered on-time)"} {"_id": "q_1703", "text": "Returns the average on-time, averaged over all on-time runs.\n\n Parameters:\n -----------------------------------------------\n vectors: the vectors for which the onTime is calculated. 
Row 0\n contains the outputs from time step 0, row 1 from time step\n 1, etc.\n numSamples: the number of elements for which on-time is calculated.\n If not specified, then all elements are looked at.\n\n Returns: (scalar average on-time of all outputs,\n list containing frequency counts of each encountered on-time)"} {"_id": "q_1704", "text": "This is usually used to display a histogram of the on-times encountered\n in a particular output.\n\n The freqCounts is a vector containing the frequency counts of each on-time\n (starting at an on-time of 0 and going to an on-time = len(freqCounts)-1)\n\n The freqCounts are typically generated from the averageOnTimePerTimestep\n or averageOnTime methods of this module.\n\n Parameters:\n -----------------------------------------------\n freqCounts: The frequency counts to plot\n title: Title of the plot"} {"_id": "q_1705", "text": "Returns the percent of the outputs that remain completely stable over\n N time steps.\n\n Parameters:\n -----------------------------------------------\n vectors: the vectors for which the stability is calculated\n numSamples: the number of time steps where stability is counted\n\n For each window of numSamples, count how many outputs are active during\n the entire window."} {"_id": "q_1706", "text": "Compares the actual input with the predicted input and returns results\n\n Parameters:\n -----------------------------------------------\n input: The actual input\n prediction: the predicted input\n verbosity: If > 0, print debugging messages\n sparse: If true, they are in sparse form (list of\n active indices)\n\n retval (foundInInput, totalActiveInInput, missingFromInput,\n totalActiveInPrediction)\n foundInInput: The number of predicted active elements that were\n found in the actual input\n totalActiveInInput: The total number of active elements in the input.\n missingFromInput: The number of predicted active elements that were not\n found in the actual input\n totalActiveInPrediction: The total 
number of active elements in the prediction"} {"_id": "q_1707", "text": "Generates centre offsets and spread offsets for block-mode based training\n regimes - star, cross, block.\n\n Parameters:\n -----------------------------------------------\n spaceShape: The (height, width) of the 2-D space to explore. This\n sets the number of center-points.\n spreadShape: The shape (height, width) of the area around each center-point\n to explore.\n stepSize: The step size. How big each step is, in pixels. This controls\n *both* the spacing of the center-points within the block and the\n points we explore around each center-point\n retval: (centreOffsets, spreadOffsets)"} {"_id": "q_1708", "text": "Make a two-dimensional clone map mapping columns to clone master.\n\n This makes a map that is (numColumnsHigh, numColumnsWide) big that can\n be used to figure out which clone master to use for each column. Here are\n a few sample calls\n\n >>> makeCloneMap(columnsShape=(10, 6), outputCloningWidth=4)\n (array([[ 0, 1, 2, 3, 0, 1],\n [ 4, 5, 6, 7, 4, 5],\n [ 8, 9, 10, 11, 8, 9],\n [12, 13, 14, 15, 12, 13],\n [ 0, 1, 2, 3, 0, 1],\n [ 4, 5, 6, 7, 4, 5],\n [ 8, 9, 10, 11, 8, 9],\n [12, 13, 14, 15, 12, 13],\n [ 0, 1, 2, 3, 0, 1],\n [ 4, 5, 6, 7, 4, 5]], dtype=uint32), 16)\n\n >>> makeCloneMap(columnsShape=(7, 8), outputCloningWidth=3)\n (array([[0, 1, 2, 0, 1, 2, 0, 1],\n [3, 4, 5, 3, 4, 5, 3, 4],\n [6, 7, 8, 6, 7, 8, 6, 7],\n [0, 1, 2, 0, 1, 2, 0, 1],\n [3, 4, 5, 3, 4, 5, 3, 4],\n [6, 7, 8, 6, 7, 8, 6, 7],\n [0, 1, 2, 0, 1, 2, 0, 1]], dtype=uint32), 9)\n\n >>> makeCloneMap(columnsShape=(7, 11), outputCloningWidth=5)\n (array([[ 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0],\n [ 5, 6, 7, 8, 9, 5, 6, 7, 8, 9, 5],\n [10, 11, 12, 13, 14, 10, 11, 12, 13, 14, 10],\n [15, 16, 17, 18, 19, 15, 16, 17, 18, 19, 15],\n [20, 21, 22, 23, 24, 20, 21, 22, 23, 24, 20],\n [ 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0],\n [ 5, 6, 7, 8, 9, 5, 6, 7, 8, 9, 5]], dtype=uint32), 25)\n\n >>> makeCloneMap(columnsShape=(7, 8), 
outputCloningWidth=3, outputCloningHeight=4)\n (array([[ 0, 1, 2, 0, 1, 2, 0, 1],\n [ 3, 4, 5, 3, 4, 5, 3, 4],\n [ 6, 7, 8, 6, 7, 8, 6, 7],\n [ 9, 10, 11, 9, 10, 11, 9, 10],\n [ 0, 1, 2, 0, 1, 2, 0, 1],\n [ 3, 4, 5, 3, 4, 5, 3, 4],\n [ 6, 7, 8, 6, 7, 8, 6, 7]], dtype=uint32), 12)\n\n The basic idea with this map is that, if you imagine things stretching off\n to infinity, every instance of a given clone master is seeing the exact\n same thing in all directions. That includes:\n - All neighbors must be the same\n - The \"meaning\" of the input to each of the instances of the same clone\n master must be the same. If input is pixels and we have translation\n invariance--this is easy. At higher levels where input is the output\n of lower levels, this can be much harder.\n - The \"meaning\" of the inputs to neighbors of a clone master must be the\n same for each instance of the same clone master.\n\n\n The best way to think of this might be in terms of 'inputCloningWidth' and\n 'outputCloningWidth'.\n - The 'outputCloningWidth' is the number of columns you'd have to move\n horizontally (or vertically) before you get back to the same\n clone that you started with. MUST BE INTEGRAL!\n - The 'inputCloningWidth' is the 'outputCloningWidth' of the node below us.\n If we're getting input from a sensor where every element just represents\n a shift of every other element, this is 1.\n At a conceptual level, it means that if two different inputs are shown\n to the node and the only difference between them is that one is shifted\n horizontally (or vertically) by this many pixels, it means we are looking\n at the exact same real world input, but shifted by some number of pixels\n (doesn't have to be 1). 
MUST BE INTEGRAL!\n\n At level 1, I think you could have this:\n * inputCloningWidth = 1\n * sqrt(coincToInputRatio^2) = 2.5\n * outputCloningWidth = 5\n ...in this case, you'd end up with 25 masters.\n\n\n Let's think about this case:\n input: - - - 0 1 2 3 4 5 - - - - -\n columns: 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4\n\n ...in other words, input 0 is fed to both column 0 and column 1. Input 1\n is fed to columns 2, 3, and 4, etc. Hopefully, you can see that you'll\n get the exact same output (except shifted) with:\n input: - - - - - 0 1 2 3 4 5 - - -\n columns: 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4\n\n ...in other words, we've shifted the input 2 spaces and the output shifted\n 5 spaces.\n\n\n *** The outputCloningWidth MUST ALWAYS be an integral multiple of the ***\n *** inputCloningWidth in order for all of our rules to apply. ***\n *** NOTE: inputCloningWidth isn't passed here, so it's the caller's ***\n *** responsibility to ensure that this is true. ***\n\n *** The outputCloningWidth MUST ALWAYS be an integral multiple of ***\n *** sqrt(coincToInputRatio^2), too. ***\n\n @param columnsShape The shape (height, width) of the columns.\n @param outputCloningWidth See docstring above.\n @param outputCloningHeight If non-negative, can be used to make\n rectangular (instead of square) cloning fields.\n @return cloneMap An array (numColumnsHigh, numColumnsWide) that\n contains the clone index to use for each\n column.\n @return numDistinctClones The number of distinct clones in the map. This\n is just outputCloningWidth*outputCloningHeight."} {"_id": "q_1709", "text": "Pretty print a numpy matrix using the given format string for each\n value. Return the string representation\n\n Parameters:\n ------------------------------------------------------------\n array: The numpy array to print. 
This can be either a 1D vector or 2D matrix\n format: The format string to use for each value\n includeIndices: If true, include [row,col] label for each value\n includeZeros: Can only be set to False if includeIndices is on.\n If True, include 0 values in the print-out\n If False, exclude 0 values from the print-out."} {"_id": "q_1710", "text": "Generates a random sample from the discrete probability distribution\n and returns its value and the log of the probability of sampling that value."} {"_id": "q_1711", "text": "Link sensor region to other region so that it can pass it data."} {"_id": "q_1712", "text": "Get prediction results for all prediction steps."} {"_id": "q_1713", "text": "Loads all the parameters for this dummy model. For any parameters\n specified as lists, read the appropriate value for this model using the model\n index"} {"_id": "q_1714", "text": "Protected function that can be overridden by subclasses. Its main purpose\n is to allow the OPFDummyModelRunner to override this with deterministic\n values\n\n Returns: All the metrics being computed for this model"} {"_id": "q_1715", "text": "Returns a description of the dataset"} {"_id": "q_1716", "text": "Returns the sdr for jth value at column i"} {"_id": "q_1717", "text": "Returns the nth encoding with the predictedField zeroed out"} {"_id": "q_1718", "text": "Returns the cumulative n for all the fields in the dataset"} {"_id": "q_1719", "text": "Returns the cumulative w for all the fields in the dataset"} {"_id": "q_1720", "text": "Returns the nth encoding"} {"_id": "q_1721", "text": "Deletes all the values in the dataset"} {"_id": "q_1722", "text": "Value is encoded as an sdr using the encoding parameters of the Field"} {"_id": "q_1723", "text": "Set up the dataTypes and initialize encoders"} {"_id": "q_1724", "text": "Initialize the encoders"} {"_id": "q_1725", "text": "Loads the experiment description file from the path.\n\n :param path: (string) The path to a directory containing a 
description.py file\n or the file itself.\n :returns: (config, control)"} {"_id": "q_1726", "text": "Loads the experiment description python script from the given experiment\n directory.\n\n :param experimentDir: (string) experiment directory path\n\n :returns: module of the loaded experiment description scripts"} {"_id": "q_1727", "text": "Return the model ID of the model with the best result so far and\n its score on the optimize metric. If swarm is None, then it returns\n the global best, otherwise it returns the best for the given swarm\n for all generations up to and including genIdx.\n\n Parameters:\n ---------------------------------------------------------------------\n swarmId: A string representation of the sorted list of encoders in this\n swarm. For example '__address_encoder.__gym_encoder'\n genIdx: consider the best in all generations up to and including this\n generation if not None.\n retval: (modelID, result)"} {"_id": "q_1728", "text": "Return a list of particleStates for all particles we know about in\n the given swarm, their model Ids, and metric results.\n\n Parameters:\n ---------------------------------------------------------------------\n swarmId: A string representation of the sorted list of encoders in this\n swarm. For example '__address_encoder.__gym_encoder'\n\n genIdx: If not None, only return particles at this specific generation\n index.\n\n completed: If not None, only return particles of the given state (either\n completed if 'completed' is True, or running if 'completed'\n is false)\n\n matured: If not None, only return particles of the given state (either\n matured if 'matured' is True, or not matured if 'matured'\n is false). 
Note that any model which has completed is also\n considered matured.\n\n lastDescendent: If True, only return particles that are the last descendent,\n that is, the highest generation index for a given particle Id\n\n retval: (particleStates, modelIds, errScores, completed, matured)\n particleStates: list of particleStates\n modelIds: list of modelIds\n errScores: list of errScores, numpy.inf is plugged in\n if we don't have a result yet\n completed: list of completed booleans\n matured: list of matured booleans"} {"_id": "q_1729", "text": "Return a list of particleStates for all particles in the given\n swarm generation that have been orphaned.\n\n Parameters:\n ---------------------------------------------------------------------\n swarmId: A string representation of the sorted list of encoders in this\n swarm. For example '__address_encoder.__gym_encoder'\n\n genIdx: If not None, only return particles at this specific generation\n index.\n\n retval: (particleStates, modelIds, errScores, completed, matured)\n particleStates: list of particleStates\n modelIds: list of modelIds\n errScores: list of errScores, numpy.inf is plugged in\n if we don't have a result yet\n completed: list of completed booleans\n matured: list of matured booleans"} {"_id": "q_1730", "text": "Return a dict of the errors obtained on models that were run with\n each value from a PermuteChoice variable.\n\n For example, if a PermuteChoice variable has the following choices:\n ['a', 'b', 'c']\n\n The dict will have 3 elements. 
The keys are the stringified choiceVars,\n and each value is a tuple containing (choiceVar, errors) where choiceVar is\n the original form of the choiceVar (before stringification) and errors is\n the list of errors received from models that used the specific choice:\n retval:\n ['a':('a', [0.1, 0.2, 0.3]), 'b':('b', [0.5, 0.1, 0.6]), 'c':('c', [])]\n\n\n Parameters:\n ---------------------------------------------------------------------\n swarmId: swarm Id of the swarm to retrieve info from\n maxGenIdx: max generation index to consider from other models, ignored\n if None\n varName: which variable to retrieve\n\n retval: list of the errors obtained from each choice."} {"_id": "q_1731", "text": "Generate stream definition based on"} {"_id": "q_1732", "text": "Test if it's OK to exit this worker. This is only called when we run\n out of prospective new models to evaluate. This method sees if all models\n have matured yet. If not, it will sleep for a bit and return False. This\n will indicate to the hypersearch worker that we should keep running, and\n check again later. This gives this worker a chance to pick up and adopt any\n model which may become orphaned by another worker before it matures.\n\n If all models have matured, this method will send a STOP message to all\n matured, running models (presumably, there will be just one - the model\n which thinks it's the best) before returning True."} {"_id": "q_1733", "text": "Record or update the results for a model. This is called by the\n HSW whenever it gets results info for another model, or updated results\n on a model that is still running.\n\n The first time this is called for a given modelID, the modelParams will\n contain the params dict for that model and the modelParamsHash will contain\n the hash of the params. 
Subsequent updates of the same modelID will\n have params and paramsHash values of None (in order to save overhead).\n\n The Hypersearch object should save these results into its own working\n memory into some table, which it then uses to determine what kind of\n new models to create next time createModels() is called.\n\n Parameters:\n ----------------------------------------------------------------------\n modelID: ID of this model in models table\n modelParams: params dict for this model, or None if this is just an update\n of a model that it already previously reported on.\n\n See the comments for the createModels() method for a\n description of this dict.\n\n modelParamsHash: hash of the modelParams dict, generated by the worker\n that put it into the model database.\n results: tuple containing (allMetrics, optimizeMetric). Each is a\n dict containing metricName:result pairs.\n May be None if we have no results yet.\n completed: True if the model has completed evaluation, False if it\n is still running (and these are online results)\n completionReason: One of the ClientJobsDAO.CMPL_REASON_XXX equates\n matured: True if this model has matured. In most cases, once a\n model matures, it will complete as well. The only time a\n model matures and does not complete is if it's currently\n the best model and we choose to keep it running to generate\n predictions.\n numRecords: Number of records that have been processed so far by this\n model."} {"_id": "q_1734", "text": "Run the given model.\n\n This runs the model described by 'modelParams'. 
Periodically, it updates\n the results seen on the model to the model database using the databaseAO\n (database Access Object) methods.\n\n Parameters:\n -------------------------------------------------------------------------\n modelID: ID of this model in models table\n\n jobID: ID for this hypersearch job in the jobs table\n\n modelParams: parameters of this specific model\n modelParams is a dictionary containing the name/value\n pairs of each variable we are permuting over. Note that\n variables within an encoder spec have their name\n structure as:\n .\n\n modelParamsHash: hash of modelParamValues\n\n jobsDAO: jobs data access object - the interface to the jobs\n database where model information is stored\n\n modelCheckpointGUID: A persistent, globally-unique identifier for\n constructing the model checkpoint key"} {"_id": "q_1735", "text": "Return true if the engine services are running"} {"_id": "q_1736", "text": "Starts a swarm, given a path to a permutations.py script.\n\n This function is meant to be used with a CLI wrapper that passes command line\n arguments in through the options parameter.\n\n @param permutationsFilePath {string} Path to permutations.py.\n @param options {dict} CLI options.\n @param outputLabel {string} Label for output.\n @param permWorkDir {string} Location of working directory.\n\n @returns {object} Model parameters."} {"_id": "q_1737", "text": "Back up a file\n\n Parameters:\n ----------------------------------------------------------------------\n retval: Filepath of the back-up"} {"_id": "q_1738", "text": "Launch worker processes to execute the given command line\n\n Parameters:\n -----------------------------------------------\n cmdLine: The command line for each worker\n numWorkers: number of workers to launch"} {"_id": "q_1739", "text": "Starts HyperSearch as a worker or runs it inline for the \"dryRun\" action\n\n Parameters:\n ----------------------------------------------------------------------\n retval: the new 
_HyperSearchJob instance representing the\n HyperSearch job"} {"_id": "q_1740", "text": "Instantiates a _HyperSearchJob instance from info saved in file\n\n Parameters:\n ----------------------------------------------------------------------\n permWorkDir: Directory path for saved jobID file\n outputLabel: Label string for incorporating into file name for saved jobID\n retval: _HyperSearchJob instance; raises exception if not found"} {"_id": "q_1741", "text": "Loads a saved jobID from file\n\n Parameters:\n ----------------------------------------------------------------------\n permWorkDir: Directory path for saved jobID file\n outputLabel: Label string for incorporating into file name for saved jobID\n retval: HyperSearch jobID; raises exception if not found."} {"_id": "q_1742", "text": "Emit model info to csv file\n\n Parameters:\n ----------------------------------------------------------------------\n modelInfo: _NupicModelInfo instance\n retval: nothing"} {"_id": "q_1743", "text": "Queries DB for model IDs of all currently instantiated models\n associated with this HyperSearch job.\n\n See also: _iterModels()\n\n Parameters:\n ----------------------------------------------------------------------\n retval: A sequence of Nupic modelIDs"} {"_id": "q_1744", "text": "Retrieves the optimization key name and optimization function.\n\n Parameters:\n ---------------------------------------------------------\n searchJobParams:\n Parameter for passing as the searchParams arg to\n Hypersearch constructor.\n retval: (optimizationMetricKey, maximize)\n optimizationMetricKey: which report key to optimize for\n maximize: True if we should try and maximize the optimizeKey\n metric. 
False if we should minimize it."} {"_id": "q_1745", "text": "Retrieves a dictionary of metrics that combines all report and\n optimization metrics\n\n Parameters:\n ----------------------------------------------------------------------\n retval: a dictionary of optimization metrics that were collected\n for the model; an empty dictionary if there aren't any."} {"_id": "q_1746", "text": "Returns the periodic checks to see if the model should\n continue running.\n\n Parameters:\n -----------------------------------------------------------------------\n terminationFunc: The function that will be called in the model main loop\n as a wrapper around this function. Must have a parameter\n called 'index'\n\n Returns: A list of PeriodicActivityRequest objects."} {"_id": "q_1747", "text": "Iterates through stream to calculate total records after aggregation.\n This will alter the bookmark state."} {"_id": "q_1748", "text": "Return a pattern for a number.\n\n @param number (int) Number of pattern\n\n @return (set) Indices of on bits"} {"_id": "q_1749", "text": "Add noise to pattern.\n\n @param bits (set) Indices of on bits\n @param amount (float) Probability of switching an on bit with a random bit\n\n @return (set) Indices of on bits in noisy pattern"} {"_id": "q_1750", "text": "Return the set of pattern numbers that match a bit.\n\n @param bit (int) Index of bit\n\n @return (set) Indices of numbers"} {"_id": "q_1751", "text": "Return a map from number to matching on bits,\n for all numbers that match a set of bits.\n\n @param bits (set) Indices of bits\n\n @return (dict) Mapping from number => on bits."} {"_id": "q_1752", "text": "Pretty print a pattern.\n\n @param bits (set) Indices of on bits\n @param verbosity (int) Verbosity level\n\n @return (string) Pretty-printed text"} {"_id": "q_1753", "text": "Generates set of random patterns."} {"_id": "q_1754", "text": "Generates set of consecutive patterns."} {"_id": "q_1755", "text": "Calculate error signal\n\n :param 
bucketIdxList: list of encoder buckets\n\n :return: dict containing error. The key is the number of steps\n The value is a numpy array of error at the output layer"} {"_id": "q_1756", "text": "Sort a potentially big file\n\n filename - the input file (standard File format)\n key - a list of field names to sort by\n outputFile - the name of the output file\n fields - a list of fields that should be included (all fields if None)\n watermark - when available memory goes below the watermark create a new chunk\n\n sort() works by reading records from the file into memory\n and calling _sortChunk() on each chunk. In the process it gets\n rid of unneeded fields if any. Once all the chunks have been sorted and\n written to chunk files it calls _merge() to merge all the chunks into a\n single sorted file.\n\n Note that sort() gets a key that contains field names, which it converts\n into field indices for _sortChunk() because _sortChunk() doesn't need to know\n the field name.\n\n sort() figures out by itself how many chunk files to use by reading records\n from the file until the low watermark value of available memory is hit and\n then it sorts the current records, generates a chunk file, clears the sorted\n records and starts on a new chunk.\n\n The key field names are turned into indices"} {"_id": "q_1757", "text": "Sort in memory chunk of records\n\n records - a list of records read from the original dataset\n key - a list of indices to sort the records by\n chunkIndex - the index of the current chunk\n\n The records contain only the fields requested by the user.\n\n _sortChunk() will write the sorted records to a standard File\n named \"chunk_.csv\" (chunk_0.csv, chunk_1.csv,...)."} {"_id": "q_1758", "text": "Feeds input record through TM, performing inference and learning.\n Updates member variables with new state.\n\n @param activeColumns (set) Indices of active columns in `t`"} {"_id": "q_1759", "text": "Print a message to the console.\n\n Prints only if level 
<= self.consolePrinterVerbosity\n Printing with level 0 is equivalent to using a print statement,\n and should normally be avoided.\n\n :param level: (int) indicating the urgency of the message with\n lower values meaning more urgent (messages at level 0 are the most\n urgent and are always printed)\n\n :param message: (string) possibly with format specifiers\n\n :param args: specifies the values for any format specifiers in message\n\n :param kw: newline is the only keyword argument. True (default) if a newline\n should be printed"} {"_id": "q_1760", "text": "Returns radius for given speed.\n\n Tries to get the encodings of consecutive readings to be\n adjacent with some overlap.\n\n :param: speed (float) Speed (in meters per second)\n :returns: (int) Radius for given speed"} {"_id": "q_1761", "text": "Write serialized object to file.\n\n :param f: output file\n :param packed: If true, will pack contents."} {"_id": "q_1762", "text": "Decorator for functions that require anomaly models."} {"_id": "q_1763", "text": "Remove labels from the anomaly classifier within this model. Removes all\n records if ``labelFilter==None``, otherwise only removes the labels equal to\n ``labelFilter``.\n\n :param start: (int) index to start removing labels\n :param end: (int) index to end removing labels\n :param labelFilter: (string) If specified, only removes records that match"} {"_id": "q_1764", "text": "Add labels from the anomaly classifier within this model.\n\n :param start: (int) index to start label\n :param end: (int) index to end label\n :param labelName: (string) name of label"} {"_id": "q_1765", "text": "Compute Anomaly score, if required"} {"_id": "q_1766", "text": "Returns reference to the network's Classifier region"} {"_id": "q_1767", "text": "Attaches an 'AnomalyClassifier' region to the network. 
Will remove current\n 'AnomalyClassifier' region if it exists.\n\n Parameters\n -----------\n network - network to add the AnomalyClassifier region\n params - parameters to pass to the region\n spEnable - True if network has an SP region\n tmEnable - True if network has a TM region; Currently requires True"} {"_id": "q_1768", "text": "Tell the writer which metrics should be written\n\n Parameters:\n -----------------------------------------------------------------------\n metricsNames: A list of metric labels to be written"} {"_id": "q_1769", "text": "Get field metadata information for inferences that are of dict type"} {"_id": "q_1770", "text": "Creates the inference output directory for the given experiment\n\n experimentDir: experiment directory path that contains description.py\n\n Returns: path of the inference output directory"} {"_id": "q_1771", "text": "A decorator that maintains the attribute lock state of an object\n\n It cooperates with the LockAttributesMetaclass (see below) that replaces\n the __setattr__ method with a custom one that checks the _canAddAttributes\n counter and allows setting new attributes only if _canAddAttributes > 0.\n\n New attributes can be set only from methods decorated\n with this decorator (should be only __init__ and __setstate__ normally)\n\n The decorator is reentrant (e.g. if from inside a decorated function another\n decorated function is invoked). Before invoking the target function it\n increments the counter (or sets it to 1). 
After invoking the target function\n it decrements the counter and if it's 0 it removes the counter."} {"_id": "q_1772", "text": "Creates a neighboring record for each record in the inputs and adds\n new records at the end of the inputs list"} {"_id": "q_1773", "text": "Modifies up to maxChanges number of bits in the inputVal"} {"_id": "q_1774", "text": "Returns a random selection from the inputSpace with randomly modified\n up to maxChanges number of bits."} {"_id": "q_1775", "text": "Creates and returns a new Network with a sensor region reading data from\n 'dataSource'. There are two hierarchical levels, each with one SP and one TM.\n @param dataSource - A RecordStream containing the input data\n @returns a Network ready to run"} {"_id": "q_1776", "text": "Runs specified Network writing the ensuing anomaly\n scores to writer.\n\n @param network: The Network instance to be run\n @param writer: A csv.writer used to write to output file."} {"_id": "q_1777", "text": "Removes trailing whitespace on each line."} {"_id": "q_1778", "text": "Gets the current metric values\n\n :returns: (dict) where each key is the metric-name, and the values are\n its scalar value. Same as the output of \n :meth:`~nupic.frameworks.opf.prediction_metrics_manager.MetricsManager.update`"} {"_id": "q_1779", "text": "Gets detailed info about a given metric, in addition to its value. 
This\n may include any statistics or auxiliary data that are computed for a given\n metric.\n\n :param metricLabel: (string) label of the given metric (see \n :class:`~nupic.frameworks.opf.metrics.MetricSpec`)\n\n :returns: (dict) of metric information, as returned by \n :meth:`nupic.frameworks.opf.metrics.MetricsIface.getMetric`."} {"_id": "q_1780", "text": "Stores the current model results in the manager's internal store\n\n Parameters:\n -----------------------------------------------------------------------\n results: A ModelResults object that contains the current timestep's\n input/inferences"} {"_id": "q_1781", "text": "Get the actual value for this field\n\n Parameters:\n -----------------------------------------------------------------------\n sensorInputElement: The inference element (part of the inference) that\n is being used for this metric"} {"_id": "q_1782", "text": "Abbreviate the given text to threshold chars and append an ellipsis if its\n length exceeds threshold; used for logging;\n\n NOTE: the resulting text could be longer than threshold due to the ellipsis"} {"_id": "q_1783", "text": "Generates the ClientJobs database name for the given version of the\n database\n\n Parameters:\n ----------------------------------------------------------------\n dbVersion: ClientJobs database version number\n\n retval: the ClientJobs database name for the given DB version"} {"_id": "q_1784", "text": "Locate the current version of the jobs DB or create a new one, and\n optionally delete old versions lying around. 
If desired, this method\n can be called at any time to re-create the tables from scratch, delete\n old versions of the database, etc.\n\n Parameters:\n ----------------------------------------------------------------\n deleteOldVersions: if true, delete any old versions of the DB left\n on the server\n recreate: if true, recreate the database from scratch even\n if it already exists."} {"_id": "q_1785", "text": "Return a sequence of matching rows with the requested field values from\n a table or empty sequence if nothing matched.\n\n tableInfo: Table information: a ClientJobsDAO._TableInfoBase instance\n conn: Owned connection acquired from ConnectionFactory.get()\n fieldsToMatch: Dictionary of internal fieldName/value mappings that\n identify the desired rows. If a value is an instance of\n ClientJobsDAO._SEQUENCE_TYPES (list/set/tuple), then the\n operator 'IN' will be used in the corresponding SQL\n predicate; if the value is bool: \"IS TRUE/FALSE\"; if the\n value is None: \"IS NULL\"; '=' will be used for all other\n cases.\n selectFieldNames:\n list of fields to return, using internal field names\n maxRows: maximum number of rows to return; unlimited if maxRows\n is None\n\n retval: A sequence of matching rows, each row consisting of field\n values in the order of the requested field names. Empty\n sequence is returned when no match exists."} {"_id": "q_1786", "text": "Return a single matching row with the requested field values from\n the requested table or None if nothing matched.\n\n tableInfo: Table information: a ClientJobsDAO._TableInfoBase instance\n conn: Owned connection acquired from ConnectionFactory.get()\n fieldsToMatch: Dictionary of internal fieldName/value mappings that\n identify the desired rows. 
If a value is an instance of\n ClientJobsDAO._SEQUENCE_TYPES (list/set/tuple), then the\n operator 'IN' will be used in the corresponding SQL\n predicate; if the value is bool: \"IS TRUE/FALSE\"; if the\n value is None: \"IS NULL\"; '=' will be used for all other\n cases.\n selectFieldNames:\n list of fields to return, using internal field names\n\n retval: A sequence of field values of the matching row in the order\n of the given field names; or None if there was no match."} {"_id": "q_1787", "text": "Place the given job in STATUS_RUNNING mode; the job is expected to be\n STATUS_NOTSTARTED.\n\n NOTE: this function was factored out of jobStartNext because it's also\n needed for testing (e.g., test_client_jobs_dao.py)"} {"_id": "q_1788", "text": "Set cancel field of all currently-running jobs to true."} {"_id": "q_1789", "text": "Look through the jobs table and count the running jobs whose\n cancel field is true.\n\n Parameters:\n ----------------------------------------------------------------\n retval: A count of running jobs with the cancel field set to true."} {"_id": "q_1790", "text": "Generator to allow iterating slices at dynamic intervals\n\n Parameters:\n ----------------------------------------------------------------\n data: Any data structure that supports slicing (i.e. list or tuple)\n *intervals: Iterable of intervals. The sum of intervals should be less\n than, or equal to the length of data."} {"_id": "q_1791", "text": "Get all info about a job, with model details, if available.\n\n Parameters:\n ----------------------------------------------------------------\n job: jobID of the job to query\n retval: A sequence of two-tuples if the jobID exists in the jobs\n table (exception is raised if it doesn't exist). Each two-tuple\n contains an instance of jobInfoNamedTuple as the first element and\n an instance of modelInfoNamedTuple as the second element. 
NOTE: In\n the case where there are no matching model rows, a sequence of one\n two-tuple will still be returned, but the modelInfoNamedTuple\n fields will be None, and the jobInfoNamedTuple fields will be\n populated."} {"_id": "q_1792", "text": "Get all info about a job\n\n Parameters:\n ----------------------------------------------------------------\n job: jobID of the job to query\n retval: namedtuple containing the job info."} {"_id": "q_1793", "text": "Change the status on the given job\n\n Parameters:\n ----------------------------------------------------------------\n job: jobID of the job to change status\n status: new status string (ClientJobsDAO.STATUS_xxxxx)\n\n useConnectionID: True if the connection id of the calling function\n must be the same as the connection that created the job. Set\n to False for hypersearch workers"} {"_id": "q_1794", "text": "Change the status on the given job to completed\n\n Parameters:\n ----------------------------------------------------------------\n job: jobID of the job to mark as completed\n completionReason: completionReason string\n completionMsg: completionMsg string\n\n useConnectionID: True if the connection id of the calling function\n must be the same as the connection that created the job. Set\n to False for hypersearch workers"} {"_id": "q_1795", "text": "Cancel the given job. 
This will update the cancel field in the\n jobs table and will result in the job being cancelled.\n\n Parameters:\n ----------------------------------------------------------------\n jobID: jobID of the job to cancel\n\n useConnectionID: True if the connection id of the calling function\n must be the same as the connection that created the job. Set\n to False for hypersearch workers"} {"_id": "q_1796", "text": "Fetch all the modelIDs that correspond to a given jobID; empty sequence\n if none"} {"_id": "q_1797", "text": "Return the number of jobs for the given clientKey and a status that is\n not completed."} {"_id": "q_1798", "text": "Delete all models from the models table\n\n Parameters:\n ----------------------------------------------------------------"} {"_id": "q_1799", "text": "Get ALL info for a set of models\n\n WARNING!!!: The results are NOT necessarily in the same order as\n the model IDs passed in!!!\n\n Parameters:\n ----------------------------------------------------------------\n modelIDs: list of model IDs\n retval: list of namedtuples containing all the fields stored for each\n model."} {"_id": "q_1800", "text": "Gets the specified fields for all the models for a single job. This is\n similar to modelsGetFields\n\n Parameters:\n ----------------------------------------------------------------\n jobID: jobID for the models to be searched\n fields: A list of fields to return\n ignoreKilled: (True/False). 
If True, this will ignore models that\n have been killed\n\n Returns: a (possibly empty) list of tuples as follows\n [\n (model_id1, [field1, ..., fieldn]),\n (model_id2, [field1, ..., fieldn]),\n (model_id3, [field1, ..., fieldn])\n ...\n ]\n\n NOTE: since there is a window of time between a job getting inserted into\n jobs table and the job's worker(s) starting up and creating models, an\n empty-list result is one of the normal outcomes."} {"_id": "q_1801", "text": "Get the params and paramsHash for a set of models.\n\n WARNING!!!: The results are NOT necessarily in the same order as\n the model IDs passed in!!!\n\n Parameters:\n ----------------------------------------------------------------\n modelIDs: list of model IDs\n retval: list of result namedtuples defined in\n ClientJobsDAO._models.getParamsNamedTuple. Each tuple\n contains: (modelId, params, engParamsHash)"} {"_id": "q_1802", "text": "Get the results string and other status fields for a set of models.\n\n WARNING!!!: The results are NOT necessarily in the same order\n as the model IDs passed in!!!\n\n For each model, this returns a tuple containing:\n (modelID, results, status, updateCounter, numRecords, completionReason,\n completionMsg, engParamsHash)\n\n Parameters:\n ----------------------------------------------------------------\n modelIDs: list of model IDs\n retval: list of result tuples. Each tuple contains:\n (modelID, results, status, updateCounter, numRecords,\n completionReason, completionMsg, engParamsHash)"} {"_id": "q_1803", "text": "Disable writing of output tap files."} {"_id": "q_1804", "text": "Does nothing. 
Kept here for API compatibility"} {"_id": "q_1805", "text": "Intercepts TemporalMemory deserialization request in order to initialize\n `TemporalMemoryMonitorMixin` state\n\n @param proto (DynamicStructBuilder) Proto object\n\n @return (TemporalMemory) TemporalMemory shim instance"} {"_id": "q_1806", "text": "Pick a value according to the provided distribution.\n\n Example:\n\n ::\n\n pickByDistribution([.2, .1])\n\n Returns 0 two thirds of the time and 1 one third of the time.\n\n :param distribution: Probability distribution. Need not be normalized.\n :param r: Instance of random.Random. Uses the system instance if one is\n not provided."} {"_id": "q_1807", "text": "Returns an array of length size and type dtype that is everywhere 0,\n except in the indices listed in sequence pos.\n\n :param pos: A single integer or sequence of integers that specify\n the position of ones to be set.\n :param size: The total size of the array to be returned.\n :param dtype: The element type (compatible with NumPy array())\n of the array to be returned.\n :returns: An array of length size and element type dtype."} {"_id": "q_1808", "text": "Add distribution to row row.\n Distribution should be an array of probabilities or counts.\n\n :param row: Integer index of the row to add to.\n May be larger than the current number of rows, in which case\n the histogram grows.\n :param distribution: Array of length equal to the number of columns."} {"_id": "q_1809", "text": "Run a named function specified by a filesystem path, module name\n and function name.\n\n Returns the value returned by the imported function.\n\n Use this when access is needed to code that has\n not been added to a package accessible from the ordinary Python\n path. 
Encapsulates the multiple lines usually needed to\n safely manipulate and restore the Python path.\n\n Parameters\n ----------\n path: filesystem path\n Path to the directory where the desired module is stored.\n This will be used to temporarily augment the Python path.\n\n moduleName: basestring\n Name of the module, without trailing extension, where the desired\n function is stored. This module should be in the directory specified\n with path.\n\n funcName: basestring\n Name of the function to import and call.\n\n keywords:\n Keyword arguments to be passed to the imported function."} {"_id": "q_1810", "text": "Routine for computing a moving average.\n\n @param slidingWindow a list of previous values to use in computation that\n will be modified and returned\n @param total the sum of the values in slidingWindow to be used in the\n calculation of the moving average\n @param newVal a new number used to compute the new windowed average\n @param windowSize how many values to use in the moving window\n\n @returns an updated windowed average, the modified input slidingWindow list,\n and the new total sum of the sliding window"} {"_id": "q_1811", "text": "Instance method wrapper around compute."} {"_id": "q_1812", "text": "Helper function to return a scalar value representing the most\n likely outcome given a probability distribution"} {"_id": "q_1813", "text": "Helper function to return a scalar value representing the expected\n value of a probability distribution"} {"_id": "q_1814", "text": "Return the field names for each of the scalar values returned by\n getScalars.\n\n :param parentFieldName: The name of the encoder which is our parent. 
This\n name is prefixed to each of the field names within this encoder to\n form the keys of the dict() in the retval.\n\n :return: array of field names"} {"_id": "q_1815", "text": "Gets the value of a given field from the input record"} {"_id": "q_1816", "text": "Return the offset and length of a given field within the encoded output.\n\n :param fieldName: Name of the field\n :return: tuple(``offset``, ``width``) of the field within the encoded output"} {"_id": "q_1817", "text": "Takes an encoded output and does its best to work backwards and generate\n the input that would have generated it.\n\n In cases where the encoded output contains more ON bits than an input\n would have generated, this routine will return one or more ranges of inputs\n which, if their encoded outputs were ORed together, would produce the\n target output. This behavior makes this method suitable for doing things\n like generating a description of a learned coincidence in the SP, which\n in many cases might be a union of one or more inputs.\n\n If instead, you want to figure out the *most likely* single input scalar value\n that would have generated a specific encoded output, use the\n :meth:`.topDownCompute` method.\n\n If you want to pretty print the return value from this method, use the\n :meth:`.decodedToStr` method.\n\n :param encoded: The encoded output that you want to decode\n :param parentFieldName: The name of the encoder which is our parent. This name\n is prefixed to each of the field names within this encoder to form the\n keys of the dict() in the retval.\n\n :return: tuple(``fieldsDict``, ``fieldOrder``)\n\n ``fieldsDict`` is a dict() where the keys represent field names\n (only 1 if this is a simple encoder, > 1 if this is a multi\n or date encoder) and the values are the result of decoding each\n field. If there are no bits in encoded that would have been\n generated by a field, it won't be present in the dict. 
The\n key of each entry in the dict is formed by joining the passed in\n parentFieldName with the child encoder name using a '.'.\n\n Each 'value' in ``fieldsDict`` consists of (ranges, desc), where\n ranges is a list of one or more (minVal, maxVal) ranges of\n input that would generate bits in the encoded output and 'desc'\n is a pretty print description of the ranges. For encoders like\n the category encoder, the 'desc' will contain the category\n names that correspond to the scalar values included in the\n ranges.\n\n ``fieldOrder`` is a list of the keys from ``fieldsDict``, in the\n same order as the fields appear in the encoded output.\n\n TODO: when we switch to Python 2.7 or 3.x, use OrderedDict\n\n Example retvals for a scalar encoder:\n\n .. code-block:: python\n\n {'amount': ( [[1,3], [7,10]], '1-3, 7-10' )}\n {'amount': ( [[2.5,2.5]], '2.5' )}\n\n Example retval for a category encoder:\n\n .. code-block:: python\n\n {'country': ( [[1,1], [5,6]], 'US, GB, ES' )}\n\n Example retval for a multi encoder:\n\n .. code-block:: python\n\n {'amount': ( [[2.5,2.5]], '2.5' ),\n 'country': ( [[1,1], [5,6]], 'US, GB, ES' )}"} {"_id": "q_1818", "text": "create a random input vector"} {"_id": "q_1819", "text": "Finds the category that best matches the input pattern. Returns the\n winning category index as well as a distribution over all categories.\n\n :param inputPattern: (list or array) The pattern to be classified. This\n must be a dense representation of the array (e.g. [0, 0, 1, 1, 0, 1]).\n\n :param computeScores: NO EFFECT\n\n :param overCategories: NO EFFECT\n\n :param partitionId: (int) If provided, all training vectors with partitionId\n equal to that of the input pattern are ignored.\n For example, this may be used to perform k-fold cross validation\n without repopulating the classifier. First partition all the data into\n k equal partitions numbered 0, 1, 2, ... and then call learn() for each\n vector passing in its partitionId. 
Then, during inference, by passing\n in the partition ID in the call to infer(), all other vectors with the\n same partitionId are ignored, simulating the effect of repopulating the\n classifier while omitting the training vectors in the same partition.\n\n :returns: 4-tuple with these elements:\n\n - ``winner``: The category with the greatest number of nearest neighbors\n within the kth nearest neighbors. If the inferenceResult contains no\n neighbors, the value of winner is None. This can happen, for example,\n in cases of exact matching, if there are no stored vectors, or if\n minSparsity is not met.\n - ``inferenceResult``: A list of length numCategories; each entry contains\n the number of neighbors within the top k neighbors that are in that\n category.\n - ``dist``: A list of length numPrototypes. Each entry is the distance\n from the unknown to that prototype. All distances are between 0.0 and\n 1.0.\n - ``categoryDist``: A list of length numCategories. Each entry is the\n distance from the unknown to the nearest prototype of\n that category. All distances are between 0 and 1.0."} {"_id": "q_1820", "text": "Returns the index of the pattern that is closest to inputPattern,\n the distances of all patterns to inputPattern, and the indices of the k\n closest categories."} {"_id": "q_1821", "text": "Returns the closest training pattern to inputPattern that belongs to\n category \"cat\".\n\n :param inputPattern: The pattern whose closest neighbor is sought\n\n :param cat: The required category of closest neighbor\n\n :returns: A dense version of the closest training pattern, or None if no\n such patterns exist"} {"_id": "q_1822", "text": "Gets a training pattern either by index or category number.\n\n :param idx: Index of the training pattern\n\n :param sparseBinaryForm: If true, returns a list of the indices of the\n non-zero bits in the training pattern\n\n :param cat: If not None, get the first pattern belonging to category cat. 
If\n this is specified, idx must be None.\n\n :returns: The training pattern with specified index"} {"_id": "q_1823", "text": "Gets the partition id given an index.\n\n :param i: index of the pattern\n :returns: the partition id associated with pattern i. Returns None if no id\n is associated with it."} {"_id": "q_1824", "text": "Adds partition id for pattern index"} {"_id": "q_1825", "text": "Rebuilds the partition Id map using the given partitionIdList"} {"_id": "q_1826", "text": "Calculate the distances from inputPattern to all stored patterns. All\n distances are between 0.0 and 1.0\n\n :param inputPattern: The pattern from which distances to all other patterns\n are calculated\n\n :param distanceNorm: Degree of the distance norm"} {"_id": "q_1827", "text": "Return the distances from inputPattern to all stored patterns.\n\n :param inputPattern: The pattern from which distances to all other patterns\n are returned\n\n :param partitionId: If provided, ignore all training vectors with this\n partitionId."} {"_id": "q_1828", "text": "Change the category indices.\n\n Used by the Network Builder to keep the category indices in sync with the\n ImageSensor categoryInfo when the user renames or removes categories.\n\n :param mapping: List of new category indices. For example, mapping=[2,0,1]\n would change all vectors of category 0 to be category 2, category 1 to\n 0, and category 2 to 1"} {"_id": "q_1829", "text": "Computes the width of dataOut.\n\n Overrides \n :meth:`nupic.bindings.regions.PyRegion.PyRegion.getOutputElementCount`."} {"_id": "q_1830", "text": "Set the value of a Spec parameter. Most parameters are handled\n automatically by PyRegion's parameter set mechanism. 
The ones that need\n special treatment are explicitly handled here."} {"_id": "q_1831", "text": "Saves the record in the underlying csv file.\n\n :param record: a list of Python objects that will be string-ified"} {"_id": "q_1832", "text": "Saves multiple records in the underlying storage.\n\n :param records: array of records as in\n :meth:`~.FileRecordStream.appendRecord`\n :param progressCB: (function) callback to report progress"} {"_id": "q_1833", "text": "Gets a bookmark or anchor to the current position.\n\n :returns: an anchor to the current position in the data. Passing this\n anchor to a constructor makes the current position to be the first\n returned record."} {"_id": "q_1834", "text": "Seeks to ``numRecords`` from the end and returns a bookmark to the new\n position.\n\n :param numRecords: how far to seek from end of file.\n :return: bookmark to desired location."} {"_id": "q_1835", "text": "Keep track of sequence and make sure time goes forward\n\n Check if the current record is the beginning of a new sequence\n A new sequence starts in 2 cases:\n\n 1. The sequence id changed (if there is a sequence id field)\n 2. The reset field is 1 (if there is a reset field)\n\n Note that if there is no sequenceId field or resetId field then the entire\n dataset is technically one big sequence. The function will not return True\n for the first record in this case. This is Ok because it is important to\n detect new sequences only when there are multiple sequences in the file."} {"_id": "q_1836", "text": "Returns the number of records that elapse between when an inference is\n made and when the corresponding input record will appear. 
For example, a\n multistep prediction for 3 timesteps out will have a delay of 3\n\n\n Parameters:\n -----------------------------------------------------------------------\n\n inferenceElement: The InferenceElement value being delayed\n key: If the inference is a dictionary type, this specifies\n key for the sub-inference that is being delayed"} {"_id": "q_1837", "text": "Returns True if the inference type is 'temporal', i.e. requires a\n temporal memory in the network."} {"_id": "q_1838", "text": "Makes directory for the given directory path with default permissions.\n If the directory already exists, it is treated as success.\n\n absDirPath: absolute path of the directory to create.\n\n Returns: absDirPath arg\n\n Exceptions: OSError if directory creation fails"} {"_id": "q_1839", "text": "Parse the given XML file and return a dict describing the file.\n\n Parameters:\n ----------------------------------------------------------------\n filename: name of XML file to parse (no path)\n path: path of the XML file. 
If None, then use the standard\n configuration search path.\n retval: returns a dict with each property as a key and a dict of all\n the property's attributes as value"} {"_id": "q_1840", "text": "Set multiple custom properties and persist them to the custom\n configuration store.\n\n Parameters:\n ----------------------------------------------------------------\n properties: a dict of property name/value pairs to set"} {"_id": "q_1841", "text": "Clear all custom configuration settings and delete the persistent\n custom configuration store."} {"_id": "q_1842", "text": "If persistent is True, delete the temporary file\n\n Parameters:\n ----------------------------------------------------------------\n persistent: if True, custom configuration file is deleted"} {"_id": "q_1843", "text": "Returns a dict of all temporary values in custom configuration file"} {"_id": "q_1844", "text": "Edits the XML configuration file with the parameters specified by\n properties\n\n Parameters:\n ----------------------------------------------------------------\n properties: dict of settings to be applied to the custom configuration store\n (key is property name, value is value)"} {"_id": "q_1845", "text": "Sets the path of the custom configuration file"} {"_id": "q_1846", "text": "Get the particle state as a dict. 
This is enough information to\n instantiate this particle on another worker."} {"_id": "q_1847", "text": "Init all of our variable positions, velocities, and optionally the best\n result and best position from the given particle.\n\n If newBest is true, we get the best result and position for this new\n generation from the resultsDB. This is used when evolving a particle\n because the bestResult and position as stored were the best AT THE TIME\n THAT PARTICLE STARTED TO RUN and do not include the best since that\n particle completed."} {"_id": "q_1848", "text": "Copy specific variables from particleState into this particle.\n\n Parameters:\n --------------------------------------------------------------\n particleState: dict produced by a particle's getState() method\n varNames: which variables to copy"} {"_id": "q_1849", "text": "Return the position of a particle given its state dict.\n\n Parameters:\n --------------------------------------------------------------\n retval: dict() of particle position, keys are the variable names,\n values are their positions"} {"_id": "q_1850", "text": "Agitate this particle so that it is likely to go to a new position.\n Every time agitate is called, the particle is jiggled an even greater\n amount.\n\n Parameters:\n --------------------------------------------------------------\n retval: None"} {"_id": "q_1851", "text": "Choose a new position based on results obtained so far from all other\n particles.\n\n Parameters:\n --------------------------------------------------------------\n whichVars: If not None, only move these variables\n retval: new position"} {"_id": "q_1852", "text": "Get the logger for this object.\n\n :returns: (Logger) A Logger object."} {"_id": "q_1853", "text": "Create a new model instance, given a description dictionary.\n\n :param modelConfig: (dict)\n A dictionary describing the current model,\n `described here <../../quick-start/example-model-params.html>`_.\n\n :param logLevel: (int) The level of 
logging output that should be generated\n\n :raises Exception: Unsupported model type\n\n :returns: :class:`nupic.frameworks.opf.model.Model`"} {"_id": "q_1854", "text": "Perform one time step of the Temporal Memory algorithm.\n\n This method calls :meth:`activateCells`, then calls \n :meth:`activateDendrites`. Using :class:`TemporalMemory` via its \n :meth:`compute` method ensures that you'll always be able to call \n :meth:`getPredictiveCells` to get predictions for the next time step.\n\n :param activeColumns: (iter) Indices of active columns.\n\n :param learn: (bool) Whether or not learning is enabled."} {"_id": "q_1855", "text": "Calculate the active cells, using the current active columns and dendrite\n segments. Grow and reinforce synapses.\n\n :param activeColumns: (iter) A sorted list of active column indices.\n\n :param learn: (bool) If true, reinforce / punish / grow synapses.\n\n **Pseudocode:**\n \n ::\n\n for each column\n if column is active and has active distal dendrite segments\n call activatePredictedColumn\n if column is active and doesn't have active distal dendrite segments\n call burstColumn\n if column is inactive and has matching distal dendrite segments\n call punishPredictedColumn"} {"_id": "q_1856", "text": "Determines which cells in a predicted column should be added to winner cells\n list, and learns on the segments that correctly predicted this column.\n\n :param column: (int) Index of bursting column.\n\n :param columnActiveSegments: (iter) Active segments in this column.\n\n :param columnMatchingSegments: (iter) Matching segments in this column.\n\n :param prevActiveCells: (list) Active cells in ``t-1``.\n\n :param prevWinnerCells: (list) Winner cells in ``t-1``.\n\n :param learn: (bool) If true, grow and reinforce synapses.\n\n :returns: (list) A list of predicted cells that will be added to \n active cells and winner cells."} {"_id": "q_1857", "text": "Punishes the Segments that incorrectly predicted a column to be active.\n\n 
:param column: (int) Index of bursting column.\n\n :param columnActiveSegments: (iter) Active segments for this column, or None \n if there aren't any.\n\n :param columnMatchingSegments: (iter) Matching segments for this column, or \n None if there aren't any.\n\n :param prevActiveCells: (list) Active cells in ``t-1``.\n\n :param prevWinnerCells: (list) Winner cells in ``t-1``."} {"_id": "q_1858", "text": "Create a segment on the connections, enforcing the maxSegmentsPerCell\n parameter."} {"_id": "q_1859", "text": "Creates nDesiredNewSynapes synapses on the segment passed in if\n possible, choosing random cells from the previous winner cells that are\n not already on the segment.\n\n :param connections: (Object) Connections instance for the tm\n :param random: (Object) TM object used to generate random\n numbers\n :param segment: (int) Segment to grow synapses on.\n :param nDesiredNewSynapes: (int) Desired number of synapses to grow\n :param prevWinnerCells: (list) Winner cells in `t-1`\n :param initialPermanence: (float) Initial permanence of a new synapse."} {"_id": "q_1860", "text": "Updates synapses on segment.\n Strengthens active synapses; weakens inactive synapses.\n\n :param connections: (Object) Connections instance for the tm\n :param segment: (int) Segment to adapt\n :param prevActiveCells: (list) Active cells in `t-1`\n :param permanenceIncrement: (float) Amount to increment active synapses\n :param permanenceDecrement: (float) Amount to decrement inactive synapses"} {"_id": "q_1861", "text": "Returns the index of the column that a cell belongs to.\n\n :param cell: (int) Cell index\n\n :returns: (int) Column index"} {"_id": "q_1862", "text": "Reads deserialized data from proto object.\n\n :param proto: (DynamicStructBuilder) Proto object\n\n :returns: (:class:TemporalMemory) TemporalMemory instance"} {"_id": "q_1863", "text": "Generate a sequence from a list of numbers.\n\n Note: Any `None` in the list of numbers is considered a reset.\n\n @param 
numbers (list) List of numbers\n\n @return (list) Generated sequence"} {"_id": "q_1864", "text": "Add spatial noise to each pattern in the sequence.\n\n @param sequence (list) Sequence\n @param amount (float) Amount of spatial noise\n\n @return (list) Sequence with spatial noise"} {"_id": "q_1865", "text": "Pretty print a sequence.\n\n @param sequence (list) Sequence\n @param verbosity (int) Verbosity level\n\n @return (string) Pretty-printed text"} {"_id": "q_1866", "text": "Returns pretty-printed table of traces.\n\n @param traces (list) Traces to print in table\n @param breakOnResets (BoolsTrace) Trace of resets to break table on\n\n @return (string) Pretty-printed table of traces."} {"_id": "q_1867", "text": "Compute updated probabilities for anomalyScores using the given params.\n\n :param anomalyScores: a list of records. Each record is a list with the\n following three elements: [timestamp, value, score]\n\n Example::\n\n [datetime.datetime(2013, 8, 10, 23, 0), 6.0, 1.0]\n\n :param params: the JSON dict returned by estimateAnomalyLikelihoods\n :param verbosity: integer controlling extent of printouts for debugging\n :type verbosity: int\n\n :returns: 3-tuple consisting of:\n\n - likelihoods\n\n numpy array of likelihoods, one for each aggregated point\n\n - avgRecordList\n\n list of averaged input records\n\n - params\n\n an updated JSON object containing the state of this metric."} {"_id": "q_1868", "text": "Return the value of skipRecords for passing to estimateAnomalyLikelihoods\n\n If `windowSize` is very large (bigger than the amount of data) then this\n could just return `learningPeriod`. 
But when some values have fallen out of\n the historical sliding window of anomaly records, then we have to take those\n into account as well, so we return the `learningPeriod` minus the number\n shifted out.\n\n :param numIngested - (int) number of data points that have been added to the\n sliding window of historical data points.\n :param windowSize - (int) size of sliding window of historical data points.\n :param learningPeriod - (int) the number of iterations required for the\n algorithm to learn the basic patterns in the dataset and for the anomaly\n score to 'settle down'."} {"_id": "q_1869", "text": "capnp serialization method for the anomaly likelihood object\n\n :param proto: (Object) capnp proto object specified in\n nupic.regions.anomaly_likelihood.capnp"} {"_id": "q_1870", "text": "Replaces the Iteration Cycle phases\n\n :param phaseSpecs: Iteration cycle description consisting of a sequence of\n IterationPhaseSpecXXXXX elements that are performed in the\n given order"} {"_id": "q_1871", "text": "Processes the given record according to the current iteration cycle phase\n\n :param inputRecord: (object) record expected to be returned from\n :meth:`nupic.data.record_stream.RecordStreamIface.getNextRecord`.\n\n :returns: :class:`nupic.frameworks.opf.opf_utils.ModelResult`"} {"_id": "q_1872", "text": "Advances the iteration;\n\n Returns: True if more iterations remain; False if this is the final\n iteration."} {"_id": "q_1873", "text": "Serialize via capnp\n\n :param proto: capnp PreviousValueModelProto message builder"} {"_id": "q_1874", "text": "Accepts log-values as input, exponentiates them, computes the sum,\n then converts the sum back to log-space and returns the result.\n Handles underflow by rescaling so that the largest value is exactly 1.0."} {"_id": "q_1875", "text": "Accepts log-values as input, exponentiates them,\n normalizes and returns the result.\n Handles underflow by rescaling so that the largest value is exactly 1.0."} {"_id": 
"q_1876", "text": "Log 'msg % args' with severity 'DEBUG'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.debug(\"Houston, we have a %s\", \"thorny problem\", exc_info=1)"} {"_id": "q_1877", "text": "Log 'msg % args' with severity 'INFO'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.info(\"Houston, we have a %s\", \"interesting problem\", exc_info=1)"} {"_id": "q_1878", "text": "Log 'msg % args' with severity 'ERROR'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.error(\"Houston, we have a %s\", \"major problem\", exc_info=1)"} {"_id": "q_1879", "text": "Log 'msg % args' with severity 'CRITICAL'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.critical(\"Houston, we have a %s\", \"major disaster\", exc_info=1)"} {"_id": "q_1880", "text": "Log 'msg % args' with the integer severity 'level'.\n\n To pass exception information, use the keyword argument exc_info with\n a true value, e.g.\n\n logger.log(level, \"We have a %s\", \"mysterious problem\", exc_info=1)"} {"_id": "q_1881", "text": "Returns sum of the elements in the list. Missing items are replaced with\n the mean value"} {"_id": "q_1882", "text": "Returns mean of non-None elements of the list"} {"_id": "q_1883", "text": "Returns most common value seen in the non-None elements of the list"} {"_id": "q_1884", "text": "Generate a dataset of aggregated values\n\n Parameters:\n ----------------------------------------------------------------------------\n aggregationInfo: a dictionary that contains the following entries\n - fields: a list of pairs. Each pair is a field name and an\n aggregation function (e.g. sum). 
The function will be used to aggregate\n multiple values during the aggregation period.\n\n aggregation period: 0 or more of unit=value fields; allowed units are:\n [years months] |\n [weeks days hours minutes seconds milliseconds microseconds]\n NOTE: years and months are mutually-exclusive with the other units.\n See getEndTime() and _aggregate() for more details.\n Example1: years=1, months=6,\n Example2: hours=1, minutes=30,\n If none of the period fields are specified or if all that are specified\n have values of 0, then aggregation will be suppressed, and the given\n inputFile parameter value will be returned.\n\n inputFilename: filename of the input dataset within examples/prediction/data\n\n outputFilename: name for the output file. If not given, a name will be\n generated based on the input filename and the aggregation params\n\n retval: Name of the generated output file. This will be the same as the input\n file name if no aggregation needed to be performed\n\n\n\n If the input file contained a time field, sequence id field or reset field\n that were not specified in aggregationInfo fields, those fields will be\n added automatically with the following rules:\n\n 1. The order will be R, S, T, rest of the fields\n 2. 
The aggregation function for all will be to pick the first: lambda x: x[0]\n\n Returns: the path of the aggregated data file if aggregation was performed\n (in the same directory as the given input file); if aggregation did not\n need to be performed, then the given inputFile argument value is returned."} {"_id": "q_1885", "text": "Generate the filename for aggregated dataset\n\n The filename is based on the input filename and the\n aggregation period.\n\n Returns the inputFile if no aggregation required (aggregation\n info has all 0's)"} {"_id": "q_1886", "text": "Add the aggregation period to the input time t and return a datetime object\n\n Years and months are handled as a special case due to leap years\n and months with different numbers of days. They can't be converted\n to a strict timedelta because a period of 3 months will actually have\n different durations. The solution is to just add the years and months\n fields directly to the current time.\n\n Other periods are converted to timedelta and just added to current time."} {"_id": "q_1887", "text": "Given the name of an aggregation function, returns the function pointer\n and param.\n\n Parameters:\n ------------------------------------------------------------------------\n funcName: a string (name of function) or funcPtr\n retval: (funcPtr, param)"} {"_id": "q_1888", "text": "Generate the aggregated output record\n\n Parameters:\n ------------------------------------------------------------------------\n retval: outputRecord"} {"_id": "q_1889", "text": "Run one iteration of this model.\n\n :param inputRecord: (object)\n A record object formatted according to\n :meth:`~nupic.data.record_stream.RecordStreamIface.getNextRecord` or\n :meth:`~nupic.data.record_stream.RecordStreamIface.getNextRecordDict`\n result format.\n :returns: (:class:`~nupic.frameworks.opf.opf_utils.ModelResult`)\n A ModelResult namedtuple. 
The contents of ModelResult.inferences\n depend on the specific inference type of this model, which\n can be queried by :meth:`.getInferenceType`."} {"_id": "q_1890", "text": "Return the absolute path of the model's checkpoint file.\n\n :param checkpointDir: (string)\n Directory of where the experiment is to be or was saved\n :returns: (string) An absolute path."} {"_id": "q_1891", "text": "Serializes model using capnproto and writes data to ``checkpointDir``"} {"_id": "q_1892", "text": "Deserializes model from checkpointDir using capnproto"} {"_id": "q_1893", "text": "Save the state maintained by the Model base class\n\n :param proto: capnp ModelProto message builder"} {"_id": "q_1894", "text": "Return the absolute path of the model's pickle file.\n\n :param saveModelDir: (string)\n Directory of where the experiment is to be or was saved\n :returns: (string) An absolute path."} {"_id": "q_1895", "text": "Used as optparse callback for reaping a variable number of option args.\n The option may be specified multiple times, and all the args associated with\n that option name will be accumulated in the order that they are encountered"} {"_id": "q_1896", "text": "Report usage error and exit program with error indication."} {"_id": "q_1897", "text": "Creates and runs the experiment\n\n Args:\n options: namedtuple ParseCommandLineOptionsResult\n model: For testing: may pass in an existing OPF Model instance\n to use instead of creating a new one.\n\n Returns: reference to OPFExperiment instance that was constructed (this\n is provided to aid with debugging) or None, if none was\n created."} {"_id": "q_1898", "text": "Creates directory for serialization of the model\n\n checkpointLabel:\n Checkpoint label (string)\n\n Returns:\n absolute path to the serialization directory"} {"_id": "q_1899", "text": "Returns a checkpoint label string for the given model checkpoint directory\n\n checkpointDir: relative or absolute model checkpoint directory path"} {"_id": "q_1900", 
"text": "Return true iff checkpointDir appears to be a checkpoint directory."} {"_id": "q_1901", "text": "List available checkpoints for the specified experiment."} {"_id": "q_1902", "text": "Creates and returns a list of activities for this TaskRunner instance\n\n Returns: a list of PeriodicActivityRequest elements"} {"_id": "q_1903", "text": "Shows predictions of the TM when presented with the characters A, B, C, D, X, and\n Y without any contextual information, that is, not embedded within a sequence."} {"_id": "q_1904", "text": "Utility function to get information about function callers\n\n The information is the tuple (function/method name, filename, class)\n The class will be None if the caller is just a function and not an object\n method.\n\n :param depth: (int) how far back in the callstack to go to extract the caller\n info"} {"_id": "q_1905", "text": "Get the arguments, default values, and argument descriptions for a function.\n\n Parses the argument descriptions out of the function docstring, using a\n format something like this:\n\n ::\n\n [junk]\n argument_name: description...\n description...\n description...\n [junk]\n [more arguments]\n\n It will find an argument as long as the exact argument name starts the line.\n It will then strip a trailing colon, if present, then strip the rest of the\n line and use it to start the description. 
It will then strip and append any\n subsequent lines with a greater indent level than the original argument name.\n\n :param f: (function) to inspect\n :returns: (list of tuples) (``argName``, ``argDescription``, ``defaultValue``)\n If an argument has no default value, the tuple is only two elements long (as\n ``None`` cannot be used, since it could be a default value itself)."} {"_id": "q_1906", "text": "Generate a filepath for the calling app"} {"_id": "q_1907", "text": "Return the number of months and seconds from an aggregation dict that\n represents a date and time.\n\n Interval is a dict that contains one or more of the following keys: 'years',\n 'months', 'weeks', 'days', 'hours', 'minutes', 'seconds', 'milliseconds',\n 'microseconds'.\n\n For example:\n\n ::\n\n aggregationMicroseconds({'years': 1, 'hours': 4, 'microseconds':42}) ==\n {'months':12, 'seconds':14400.000042}\n\n :param interval: (dict) The aggregation interval representing a date and time\n :returns: (dict) number of months and seconds in the interval:\n ``{'months': XX, 'seconds': XX}``. 
The seconds is\n a floating point that can represent resolutions down to a\n microsecond."} {"_id": "q_1908", "text": "Return the result from dividing two dicts that represent date and time.\n\n Both dividend and divisor are dicts that contain one or more of the following\n keys: 'years', 'months', 'weeks', 'days', 'hours', 'minutes', 'seconds',\n 'milliseconds', 'microseconds'.\n\n For example:\n\n ::\n\n aggregationDivide({'hours': 4}, {'minutes': 15}) == 16\n\n :param dividend: (dict) The numerator, as a dict representing a date and time\n :param divisor: (dict) the denominator, as a dict representing a date and time\n :returns: (float) number of times divisor goes into dividend"} {"_id": "q_1909", "text": "Helper function to create a logger object for the current object with\n the standard Numenta prefix.\n\n :param obj: (object) to add a logger to"} {"_id": "q_1910", "text": "Returns a subset of the keys that match any of the given patterns\n\n :param patterns: (list) regular expressions to match\n :param keys: (list) keys to search for matches"} {"_id": "q_1911", "text": "Convert the input, which is in normal space, into log space"} {"_id": "q_1912", "text": "Exports a network as a networkx MultiDiGraph intermediate representation\n suitable for visualization.\n\n :return: networkx MultiDiGraph"} {"_id": "q_1913", "text": "Computes the percentage of overlap between vectors x1 and x2.\n\n @param x1 (array) binary vector\n @param x2 (array) binary vector\n @param size (int) length of binary vectors\n\n @return percentOverlap (float) percentage overlap between x1 and x2"} {"_id": "q_1914", "text": "Poll CPU usage, make predictions, and plot the results. Runs forever."} {"_id": "q_1915", "text": "List of our member variables that we don't need to be saved"} {"_id": "q_1916", "text": "If state is allocated in CPP, copy over the data into our numpy arrays."} {"_id": "q_1917", "text": "If we are having CPP use numpy-allocated buffers, set these buffer\n pointers. 
This is a relatively fast operation and, for safety, should be\n done before every call to the cells4 compute methods. This protects us\n in situations where code can cause Python or numpy to create copies."} {"_id": "q_1918", "text": "A segment is active if it has >= activationThreshold connected\n synapses that are active due to infActiveState."} {"_id": "q_1919", "text": "Given a bucket index, return the list of non-zero bits. If the bucket\n index does not exist, it is created. If the index falls outside our range\n we clip it.\n\n :param index The bucket index to get non-zero bits for.\n @returns numpy array of indices of non-zero bits for specified index."} {"_id": "q_1920", "text": "Create the given bucket index. Recursively create as many in-between\n bucket indices as necessary."} {"_id": "q_1921", "text": "Return a new representation for newIndex that overlaps with the\n representation at index by exactly w-1 bits"} {"_id": "q_1922", "text": "Return the overlap between bucket indices i and j"} {"_id": "q_1923", "text": "Return the overlap between two representations. rep1 and rep2 are lists of\n non-zero indices."} {"_id": "q_1924", "text": "Return True if the given overlap between bucket indices i and j are\n acceptable. 
If overlap is not specified, calculate it from the bucketMap"} {"_id": "q_1925", "text": "Create a SDR classifier factory.\n The implementation of the SDR Classifier can be specified with\n the \"implementation\" keyword argument.\n\n The SDRClassifierFactory uses the implementation as specified in\n `Default NuPIC Configuration `_."} {"_id": "q_1926", "text": "Convenience method to compute a metric over an indices trace, excluding\n resets.\n\n @param (IndicesTrace) Trace of indices\n\n @return (Metric) Metric over trace excluding resets"} {"_id": "q_1927", "text": "Metric for number of predicted => active cells per column for each sequence\n\n @return (Metric) metric"} {"_id": "q_1928", "text": "Metric for number of sequences each predicted => active cell appears in\n\n Note: This metric is flawed when it comes to high-order sequences.\n\n @return (Metric) metric"} {"_id": "q_1929", "text": "Pretty print the connections in the temporal memory.\n\n TODO: Use PrettyTable.\n\n @return (string) Pretty-printed text"} {"_id": "q_1930", "text": "Generates a Network with connected RecordSensor, SP, TM.\n\n This function takes care of generating regions and the canonical links.\n The network has a sensor region reading data from a specified input and\n passing the encoded representation to an SPRegion.\n The SPRegion output is passed to a TMRegion.\n\n Note: this function returns a network that needs to be initialized. 
This\n allows the user to extend the network by adding further regions and\n connections.\n\n :param recordParams: a dict with parameters for creating RecordSensor region.\n :param spatialParams: a dict with parameters for creating SPRegion.\n :param temporalParams: a dict with parameters for creating TMRegion.\n :param verbosity: an integer representing how chatty the network will be."} {"_id": "q_1931", "text": "Multiplies a value over a range of rows.\n\n Args:\n reader: A FileRecordStream object with input data.\n writer: A FileRecordStream object to write output data to.\n column: The column of data to modify.\n start: The first row in the range to modify.\n end: The last row in the range to modify.\n multiple: The value to scale/multiply by."} {"_id": "q_1932", "text": "Copies a range of values to a new location in the data set.\n\n Args:\n reader: A FileRecordStream object with input data.\n writer: A FileRecordStream object to write output data to.\n start: The first row in the range to copy.\n stop: The last row in the range to copy.\n insertLocation: The location to insert the copied range. If not specified,\n the range is inserted immediately following itself."} {"_id": "q_1933", "text": "generate description from a text description of the ranges"} {"_id": "q_1934", "text": "Reset the state of all cells.\n\n This is normally used between sequences while training. 
All internal states\n are reset to 0."} {"_id": "q_1935", "text": "Called at the end of learning and inference, this routine will update\n a number of stats in our _internalStats dictionary, including our computed\n prediction score.\n\n :param stats internal stats dictionary\n :param bottomUpNZ list of the active bottom-up inputs\n :param predictedState The columns we predicted on the last time step (should\n match the current bottomUpNZ in the best case)\n :param colConfidence Column confidences we determined on the last time step"} {"_id": "q_1936", "text": "Print an integer array that is the same shape as activeState.\n\n :param aState: TODO: document"} {"_id": "q_1937", "text": "Print a floating point array that is the same shape as activeState.\n\n :param aState: TODO: document\n :param maxCols: TODO: document"} {"_id": "q_1938", "text": "Print up to maxCols number from a flat floating point array.\n\n :param aState: TODO: document\n :param maxCols: TODO: document"} {"_id": "q_1939", "text": "Print the parameter settings for the TM."} {"_id": "q_1940", "text": "Called at the end of inference to print out various diagnostic\n information based on the current verbosity level.\n\n :param output: TODO: document\n :param learn: TODO: document"} {"_id": "q_1941", "text": "Update our moving average of learned sequence length."} {"_id": "q_1942", "text": "A utility method called from learnBacktrack. 
This will backtrack\n starting from the given startOffset in our prevLrnPatterns queue.\n\n It returns True if the backtrack was successful and we managed to get\n predictions all the way up to the current time step.\n\n If readOnly, then no segments are updated or modified, otherwise, all\n segment updates that belong to the given path are applied.\n \n This updates/modifies:\n\n - lrnActiveState['t']\n\n This trashes:\n\n - lrnPredictedState['t']\n - lrnPredictedState['t-1']\n - lrnActiveState['t-1']\n\n :param startOffset: Start offset within the prevLrnPatterns input history\n :param readOnly: \n :return: True if we managed to lock on to a sequence that started\n earlier.\n If False, we lost predictions somewhere along the way\n leading up to the current time."} {"_id": "q_1943", "text": "This \"backtracks\" our learning state, trying to see if we can lock onto\n the current set of inputs by assuming the sequence started up to N steps\n ago on start cells.\n\n This will adjust @ref lrnActiveState['t'] if it does manage to lock on to a\n sequence that started earlier.\n\n :returns: >0 if we managed to lock on to a sequence that started\n earlier. The value returned is how many steps in the\n past we locked on.\n If 0 is returned, the caller needs to change active\n state to start on start cells.\n\n How it works:\n -------------------------------------------------------------------\n This method gets called from updateLearningState when we detect either of\n the following two conditions:\n\n #. Our PAM counter (@ref pamCounter) expired\n #. 
We reached the max allowed learned sequence length\n\n Either of these two conditions indicate that we want to start over on start\n cells.\n\n Rather than start over on start cells on the current input, we can\n accelerate learning by backtracking a few steps ago and seeing if perhaps\n a sequence we already at least partially know already started.\n\n This updates/modifies:\n - @ref lrnActiveState['t']\n\n This trashes:\n - @ref lrnActiveState['t-1']\n - @ref lrnPredictedState['t']\n - @ref lrnPredictedState['t-1']"} {"_id": "q_1944", "text": "Compute the learning active state given the predicted state and\n the bottom-up input.\n\n :param activeColumns list of active bottom-ups\n :param readOnly True if being called from backtracking logic.\n This tells us not to increment any segment\n duty cycles or queue up any updates.\n :returns: True if the current input was sufficiently predicted, OR\n if we started over on startCells. False indicates that the current\n input was NOT predicted, well enough to consider it as \"inSequence\"\n\n This looks at:\n - @ref lrnActiveState['t-1']\n - @ref lrnPredictedState['t-1']\n\n This modifies:\n - @ref lrnActiveState['t']\n - @ref lrnActiveState['t-1']"} {"_id": "q_1945", "text": "Compute the predicted segments given the current set of active cells.\n\n :param readOnly True if being called from backtracking logic.\n This tells us not to increment any segment\n duty cycles or queue up any updates.\n\n This computes the lrnPredictedState['t'] and queues up any segments that\n became active (and the list of active synapses for each segment) into\n the segmentUpdates queue\n\n This looks at:\n - @ref lrnActiveState['t']\n\n This modifies:\n - @ref lrnPredictedState['t']\n - @ref segmentUpdates"} {"_id": "q_1946", "text": "Handle one compute, possibly learning.\n\n .. note:: It is an error to have both ``enableLearn`` and \n ``enableInference`` set to False\n\n .. 
note:: By default, we don't compute the inference output when learning \n because it slows things down, but you can override this by passing \n in True for ``enableInference``.\n\n :param bottomUpInput: The bottom-up input as numpy list, typically from a \n spatial pooler.\n :param enableLearn: (bool) If true, perform learning\n :param enableInference: (bool) If None, default behavior is to disable the \n inference output when ``enableLearn`` is on. If true, compute the \n inference output. If false, do not compute the inference output.\n\n :returns: TODO: document"} {"_id": "q_1947", "text": "This method goes through a list of segments for a given cell and\n deletes all synapses whose permanence is less than minPermanence and deletes\n any segments that have less than minNumSyns synapses remaining.\n\n :param colIdx Column index\n :param cellIdx Cell index within the column\n :param segList List of segment references\n :param minPermanence Any syn whose permanence is 0 or < minPermanence will\n be deleted.\n :param minNumSyns Any segment with less than minNumSyns synapses remaining\n in it will be deleted.\n\n :returns: tuple (numSegsRemoved, numSynsRemoved)"} {"_id": "q_1948", "text": "This method deletes all synapses whose permanence is less than\n minPermanence and deletes any segments that have less than\n minNumSyns synapses remaining.\n\n :param minPermanence: (float) Any syn whose permanence is 0 or < \n ``minPermanence`` will be deleted. If None is passed in, then \n ``self.connectedPerm`` is used.\n :param minNumSyns: (int) Any segment with less than ``minNumSyns`` synapses \n remaining in it will be deleted. 
If None is passed in, then \n ``self.activationThreshold`` is used.\n :returns: (tuple) ``numSegsRemoved``, ``numSynsRemoved``"} {"_id": "q_1949", "text": "Removes any update that would be for the given col, cellIdx, segIdx.\n\n NOTE: logically, we need to do this when we delete segments, so that if\n an update refers to a segment that was just deleted, we also remove\n that update from the update list. However, I haven't seen it trigger\n in any of the unit tests yet, so it might mean that it's not needed\n and that situation doesn't occur, by construction."} {"_id": "q_1950", "text": "Find weakly activated cell in column with at least minThreshold active\n synapses.\n\n :param c which column to look at\n :param activeState the active cells\n :param minThreshold minimum number of synapses required\n\n :returns: tuple (cellIdx, segment, numActiveSynapses)"} {"_id": "q_1951", "text": "For the given cell, find the segment with the largest number of active\n synapses. This routine is aggressive in finding the best match. The\n permanence value of synapses is allowed to be below connectedPerm. The number\n of active synapses is allowed to be below activationThreshold, but must be\n above minThreshold. The routine returns the segment index. If no segments are\n found, then an index of -1 is returned.\n\n :param c TODO: document\n :param i TODO: document\n :param activeState TODO: document"} {"_id": "q_1952", "text": "This function applies segment update information to a segment in a\n cell.\n\n Synapses on the active list get their permanence counts incremented by\n permanenceInc. 
All other synapses get their permanence counts decremented\n by permanenceDec.\n\n We also increment the positiveActivations count of the segment.\n\n :param segUpdate SegmentUpdate instance\n :returns: True if some synapses were decremented to 0 and the segment is a\n candidate for trimming"} {"_id": "q_1953", "text": "Create training sequences that share some elements in the middle.\n\n Parameters:\n -----------------------------------------------------\n numSequences: Number of unique training sequences to generate\n seqLen: Overall length of each sequence\n sharedElements: Which element indices of each sequence are shared. These\n will be in the range between 0 and seqLen-1\n numOnBitsPerPattern: Number of ON bits in each TM input pattern\n patternOverlap: Max number of bits of overlap between any 2 patterns\n retval: (numCols, trainingSequences)\n numCols - width of the patterns\n trainingSequences - a list of training sequences"} {"_id": "q_1954", "text": "Create a bunch of sequences of various lengths, all built from\n a fixed set of patterns.\n\n Parameters:\n -----------------------------------------------------\n numSequences: Number of training sequences to generate\n seqLen: List of possible sequence lengths\n numPatterns: How many possible patterns there are to use within\n sequences\n numOnBitsPerPattern: Number of ON bits in each TM input pattern\n patternOverlap: Max number of bits of overlap between any 2 patterns\n retval: (numCols, trainingSequences)\n numCols - width of the patterns\n trainingSequences - a list of training sequences"} {"_id": "q_1955", "text": "Create one or more TM instances, placing each into a dict keyed by\n name.\n\n Parameters:\n ------------------------------------------------------------------\n retval: tms - dict of TM instances"} {"_id": "q_1956", "text": "Check for diffs among the TM instances in the passed in tms dict and\n raise an assert if any are detected\n\n Parameters:\n 
---------------------------------------------------------------------\n tms: dict of TM instances"} {"_id": "q_1957", "text": "Compress a byte string.\n\n Args:\n string (bytes): The input data.\n mode (int, optional): The compression mode can be MODE_GENERIC (default),\n MODE_TEXT (for UTF-8 format text input) or MODE_FONT (for WOFF 2.0).\n quality (int, optional): Controls the compression-speed vs compression-\n density tradeoff. The higher the quality, the slower the compression.\n Range is 0 to 11. Defaults to 11.\n lgwin (int, optional): Base 2 logarithm of the sliding window size. Range\n is 10 to 24. Defaults to 22.\n lgblock (int, optional): Base 2 logarithm of the maximum input block size.\n Range is 16 to 24. If set to 0, the value will be set based on the\n quality. Defaults to 0.\n\n Returns:\n The compressed byte string.\n\n Raises:\n brotli.error: If arguments are invalid, or compressor fails."} {"_id": "q_1958", "text": "Show string or char."} {"_id": "q_1959", "text": "Read n bytes from the stream on a byte boundary."} {"_id": "q_1960", "text": "Store decodeTable,\n and compute lengthTable, minLength, maxLength from encodings."} {"_id": "q_1961", "text": "Show all words of the code in a nice format."} {"_id": "q_1962", "text": "Read symbol from stream. 
Returns symbol, length."} {"_id": "q_1963", "text": "Override if you don't define value0 and extraTable"} {"_id": "q_1964", "text": "Give the range of possible values in a tuple\n Useful for mnemonic and explanation"} {"_id": "q_1965", "text": "Give count and value."} {"_id": "q_1966", "text": "Make a nice mnemonic"} {"_id": "q_1967", "text": "Give mnemonic representation of meaning.\n verbose compresses strings of x's"} {"_id": "q_1968", "text": "Perform the proper action"} {"_id": "q_1969", "text": "Read MNIBBLES and meta block length;\n if empty block, skip block and return true."} {"_id": "q_1970", "text": "In place inverse move to front transform."} {"_id": "q_1971", "text": "Implementation of Dataset.to_arrow_table"} {"_id": "q_1972", "text": "Adds method f to the Dataset class"} {"_id": "q_1973", "text": "Convert proper motion to perpendicular velocities.\n\n :param distance:\n :param pm_long:\n :param pm_lat:\n :param vl:\n :param vb:\n :param cov_matrix_distance_pm_long_pm_lat:\n :param uncertainty_postfix:\n :param covariance_postfix:\n :param radians:\n :return:"} {"_id": "q_1974", "text": "Return a graphviz.Digraph object with a graph of the expression"} {"_id": "q_1975", "text": "Map values of an expression or in memory column according to an input\n dictionary or a custom callable function.\n\n Example:\n\n >>> import vaex\n >>> df = vaex.from_arrays(color=['red', 'red', 'blue', 'red', 'green'])\n >>> mapper = {'red': 1, 'blue': 2, 'green': 3}\n >>> df['color_mapped'] = df.color.map(mapper)\n >>> df\n # color color_mapped\n 0 red 1\n 1 red 1\n 2 blue 2\n 3 red 1\n 4 green 3\n >>> import numpy as np\n >>> df = vaex.from_arrays(type=[0, 1, 2, 2, 2, np.nan])\n >>> df['role'] = df['type'].map({0: 'admin', 1: 'maintainer', 2: 'user', np.nan: 'unknown'})\n >>> df\n # type role\n 0 0 admin\n 1 1 maintainer\n 2 2 user\n 3 2 user\n 4 2 user\n 5 nan unknown \n\n :param mapper: dict like object used to map the values from keys to values\n :param nan_mapping: 
value to be used when a nan is present (and not in the mapper)\n :param null_mapping: value to be used when there is a missing value\n :return: A vaex expression\n :rtype: vaex.expression.Expression"} {"_id": "q_1976", "text": "Create a vaex app, the QApplication mainloop must be started.\n\n In ipython notebook/jupyter do the following:\n\n >>> import vaex.ui.main # this causes the qt api level to be set properly\n >>> import vaex\n\n Next cell:\n\n >>> %gui qt\n\n Next cell:\n\n >>> app = vaex.app()\n\n From now on, you can run the app along with jupyter"} {"_id": "q_1977", "text": "Open a list of filenames, and return a DataFrame with all DataFrames concatenated.\n\n :param list[str] filenames: list of filenames/paths\n :rtype: DataFrame"} {"_id": "q_1978", "text": "Create a vaex DataFrame from an Astropy Table."} {"_id": "q_1979", "text": "Create an in memory DataFrame from numpy arrays.\n\n Example\n\n >>> import vaex, numpy as np\n >>> x = np.arange(5)\n >>> y = x ** 2\n >>> vaex.from_arrays(x=x, y=y)\n # x y\n 0 0 0\n 1 1 1\n 2 2 4\n 3 3 9\n 4 4 16\n >>> some_dict = {'x': x, 'y': y}\n >>> vaex.from_arrays(**some_dict) # in case you have your columns in a dict\n # x y\n 0 0 0\n 1 1 1\n 2 2 4\n 3 3 9\n 4 4 16\n\n :param arrays: keyword arguments with arrays\n :rtype: DataFrame"} {"_id": "q_1980", "text": "Creates a zeldovich DataFrame."} {"_id": "q_1981", "text": "Concatenate a list of DataFrames.\n\n :rtype: DataFrame"} {"_id": "q_1982", "text": "Add a dataset and add it to the UI"} {"_id": "q_1983", "text": "Decorator to transparently accept delayed computation.\n\n Example:\n\n >>> delayed_sum = ds.sum(ds.E, binby=ds.x, limits=limits,\n >>> shape=4, delay=True)\n >>> @vaex.delayed\n >>> def total_sum(sums):\n >>> return sums.sum()\n >>> sum_of_sums = total_sum(delayed_sum)\n >>> ds.execute()\n >>> sum_of_sums.get()\n See the tutorial for a more complete example https://docs.vaex.io/en/latest/tutorial.html#Parallel-computations"} {"_id": "q_1984", "text": 
"Helper function for returning task results, the result when immediate is True, otherwise the task itself, which is a promise"} {"_id": "q_1985", "text": "Sort table by given column number."} {"_id": "q_1986", "text": "Used for unittesting to make sure the plots are all done"} {"_id": "q_1987", "text": "Evaluates expression, and drops the result, useful for benchmarking, since vaex is usually lazy"} {"_id": "q_1988", "text": "Calculate the sum for the given expression, possibly on a grid defined by binby\n\n Example:\n\n >>> df.sum(\"L\")\n 304054882.49378014\n >>> df.sum(\"L\", binby=\"E\", shape=4)\n array([ 8.83517994e+06, 5.92217598e+07, 9.55218726e+07,\n 1.40008776e+08])\n\n :param expression: {expression}\n :param binby: {binby}\n :param limits: {limits}\n :param shape: {shape}\n :param selection: {selection}\n :param delay: {delay}\n :param progress: {progress}\n :return: {return_stat_scalar}"} {"_id": "q_1989", "text": "Calculate the standard deviation for the given expression, possibly on a grid defined by binby\n\n\n >>> df.std(\"vz\")\n 110.31773397535071\n >>> df.std(\"vz\", binby=[\"(x**2+y**2)**0.5\"], shape=4)\n array([ 123.57954851, 85.35190177, 61.14345748, 38.0740619 ])\n\n :param expression: {expression}\n :param binby: {binby}\n :param limits: {limits}\n :param shape: {shape}\n :param selection: {selection}\n :param delay: {delay}\n :param progress: {progress}\n :return: {return_stat_scalar}"} {"_id": "q_1990", "text": "Calculate the covariance matrix for x and y or more expressions, possibly on a grid defined by binby.\n\n Either x and y are expressions, e.g.:\n\n >>> df.cov(\"x\", \"y\")\n\n Or only the x argument is given with a list of expressions, e.g.:\n\n >>> df.cov([\"x\", \"y\", \"z\"])\n\n Example:\n\n >>> df.cov(\"x\", \"y\")\n array([[ 53.54521742, -3.8123135 ],\n [ -3.8123135 , 60.62257881]])\n >>> df.cov([\"x\", \"y\", \"z\"])\n array([[ 53.54521742, -3.8123135 , -0.98260511],\n [ -3.8123135 , 60.62257881, 1.21381057],\n [ -0.98260511, 
1.21381057, 25.55517638]])\n\n >>> df.cov(\"x\", \"y\", binby=\"E\", shape=2)\n array([[[ 9.74852878e+00, -3.02004780e-02],\n [ -3.02004780e-02, 9.99288215e+00]],\n [[ 8.43996546e+01, -6.51984181e+00],\n [ -6.51984181e+00, 9.68938284e+01]]])\n\n\n :param x: {expression}\n :param y: {expression_single}\n :param binby: {binby}\n :param limits: {limits}\n :param shape: {shape}\n :param selection: {selection}\n :param delay: {delay}\n :return: {return_stat_scalar}, the last dimensions are of shape (2,2)"} {"_id": "q_1991", "text": "Calculate the median, possibly on a grid defined by binby.\n\n NOTE: this value is approximated by calculating the cumulative distribution on a grid defined by\n percentile_shape and percentile_limits\n\n\n :param expression: {expression}\n :param binby: {binby}\n :param limits: {limits}\n :param shape: {shape}\n :param percentile_limits: {percentile_limits}\n :param percentile_shape: {percentile_shape}\n :param selection: {selection}\n :param delay: {delay}\n :return: {return_stat_scalar}"} {"_id": "q_1992", "text": "Viz data in 2d using a healpix column.\n\n :param healpix_expression: {healpix_max_level}\n :param healpix_max_level: {healpix_max_level}\n :param healpix_level: {healpix_level}\n :param what: {what}\n :param selection: {selection}\n :param grid: {grid}\n :param healpix_input: Specify if the healpix index is in \"equatorial\", \"galactic\" or \"ecliptic\".\n :param healpix_output: Plot in \"equatorial\", \"galactic\" or \"ecliptic\".\n :param f: function to apply to the data\n :param colormap: matplotlib colormap\n :param grid_limits: Optional sequence [minvalue, maxvalue] that determines the min and max value that map to the colormap (values below and above these are clipped to the min/max). 
(default is [min(f(grid)), max(f(grid)))\n :param image_size: size for the image that healpy uses for rendering\n :param nest: If the healpix data is in nested (True) or ring (False)\n :param figsize: If given, modify the matplotlib figure size. Example (14,9)\n :param interactive: (Experimental, uses healpy.mollzoom if True)\n :param title: Title of figure\n :param smooth: apply gaussian smoothing, in degrees\n :param show: Call matplotlib's show (True) or not (False, default)\n :param rotation: Rotate the plot, in format (lon, lat, psi) such that (lon, lat) is the center, and rotate on the screen by angle psi. All angles are degrees.\n :return:"} {"_id": "q_1993", "text": "Use at own risk, requires ipyvolume"} {"_id": "q_1994", "text": "Return the numpy dtype for the given expression, if not a column, the first row will be evaluated to get the dtype."} {"_id": "q_1995", "text": "Each DataFrame has a directory where files are stored for metadata etc.\n\n Example\n\n >>> import vaex\n >>> ds = vaex.example()\n >>> vaex.get_private_dir()\n '/Users/users/breddels/.vaex/dfs/_Users_users_breddels_vaex-testing_data_helmi-dezeeuw-2000-10p.hdf5'\n\n :param bool create: if True, it will create the directory if it does not exist"} {"_id": "q_1996", "text": "Return the internal state of the DataFrame in a dictionary\n\n Example:\n\n >>> import vaex\n >>> df = vaex.from_scalars(x=1, y=2)\n >>> df['r'] = (df.x**2 + df.y**2)**0.5\n >>> df.state_get()\n {'active_range': [0, 1],\n 'column_names': ['x', 'y', 'r'],\n 'description': None,\n 'descriptions': {},\n 'functions': {},\n 'renamed_columns': [],\n 'selections': {'__filter__': None},\n 'ucds': {},\n 'units': {},\n 'variables': {},\n 'virtual_columns': {'r': '(((x ** 2) + (y ** 2)) ** 0.5)'}}"} {"_id": "q_1997", "text": "Writes virtual columns, variables and their ucd, description and units.\n\n The default implementation is to write this to a file called virtual_meta.yaml in the directory defined by\n 
:func:`DataFrame.get_private_dir`. Other implementations may store this in the DataFrame file itself.\n\n This method is called after virtual columns or variables are added. Upon opening a file, :func:`DataFrame.update_virtual_meta`\n is called, so that the information is not lost between sessions.\n\n Note: opening a DataFrame twice may result in corruption of this file."} {"_id": "q_1998", "text": "Writes all meta data, ucd, description and units\n\n The default implementation is to write this to a file called meta.yaml in the directory defined by\n :func:`DataFrame.get_private_dir`. Other implementations may store this in the DataFrame file itself.\n (For instance the vaex hdf5 implementation does this)\n\n This method is called after virtual columns or variables are added. Upon opening a file, :func:`DataFrame.update_meta`\n is called, so that the information is not lost between sessions.\n\n Note: opening a DataFrame twice may result in corruption of this file."} {"_id": "q_1999", "text": "Generate a Subspaces object, based on a custom list of expressions or all possible combinations based on\n dimension\n\n :param expressions_list: list of list of expressions, where the inner list defines the subspace\n :param dimensions: if given, generates a subspace with all possible combinations for that dimension\n :param exclude: list of"} {"_id": "q_2000", "text": "Set the variable to an expression or value defined by expression_or_value.\n\n Example\n\n >>> df.set_variable(\"a\", 2.)\n >>> df.set_variable(\"b\", \"a**2\")\n >>> df.get_variable(\"b\")\n 'a**2'\n >>> df.evaluate_variable(\"b\")\n 4.0\n\n :param name: Name of the variable\n :param write: write variable to meta file\n :param expression: value or expression"} {"_id": "q_2001", "text": "Evaluates the variable given by name."} {"_id": "q_2002", "text": "Internal use, ignores the filter"} {"_id": "q_2003", "text": "Return a dict containing the ndarray corresponding to the evaluated data\n\n :param column_names: 
list of column names, to export, when None DataFrame.get_column_names(strings=strings, virtual=virtual) is used\n :param selection: {selection}\n :param strings: argument passed to DataFrame.get_column_names when column_names is None\n :param virtual: argument passed to DataFrame.get_column_names when column_names is None\n :return: dict"} {"_id": "q_2004", "text": "Return a copy of the DataFrame; if selection is None, it does not copy the data, it just has a reference\n\n :param column_names: list of column names, to copy, when None DataFrame.get_column_names(strings=strings, virtual=virtual) is used\n :param selection: {selection}\n :param strings: argument passed to DataFrame.get_column_names when column_names is None\n :param virtual: argument passed to DataFrame.get_column_names when column_names is None\n :param selections: copy selections to a new DataFrame\n :return: DataFrame"} {"_id": "q_2005", "text": "Return a pandas DataFrame containing the ndarray corresponding to the evaluated data\n\n If index is given, that column is used for the index of the dataframe.\n\n Example\n\n >>> df_pandas = df.to_pandas_df([\"x\", \"y\", \"z\"])\n >>> df_copy = vaex.from_pandas(df_pandas)\n\n :param column_names: list of column names, to export, when None DataFrame.get_column_names(strings=strings, virtual=virtual) is used\n :param selection: {selection}\n :param strings: argument passed to DataFrame.get_column_names when column_names is None\n :param virtual: argument passed to DataFrame.get_column_names when column_names is None\n :param index_column: if this column is given it is used for the index of the DataFrame\n :return: pandas.DataFrame object"} {"_id": "q_2006", "text": "Returns an astropy table object containing the ndarrays corresponding to the evaluated data\n\n :param column_names: list of column names, to export, when None DataFrame.get_column_names(strings=strings, virtual=virtual) is used\n :param selection: {selection}\n :param strings: argument passed to 
DataFrame.get_column_names when column_names is None\n :param virtual: argument passed to DataFrame.get_column_names when column_names is None\n :param index: if this column is given it is used for the index of the DataFrame\n :return: astropy.table.Table object"} {"_id": "q_2007", "text": "Add an in-memory array as a column."} {"_id": "q_2008", "text": "Renames a column; note this is only the in-memory name, this will not be reflected on disk"} {"_id": "q_2009", "text": "Convert cartesian to polar coordinates\n\n :param x: expression for x\n :param y: expression for y\n :param radius_out: name for the virtual column for the radius\n :param azimuth_out: name for the virtual column for the azimuth angle\n :param propagate_uncertainties: {propagate_uncertainties}\n :param radians: if True, azimuth is in radians, defaults to degrees\n :return:"} {"_id": "q_2010", "text": "Convert cartesian to polar velocities.\n\n :param x:\n :param y:\n :param vx:\n :param radius_polar: Optional expression for the radius, may lead to a better performance when given.\n :param vy:\n :param vr_out:\n :param vazimuth_out:\n :param propagate_uncertainties: {propagate_uncertainties}\n :return:"} {"_id": "q_2011", "text": "Rotation in 2d.\n\n :param str x: Name/expression of x column\n :param str y: idem for y\n :param str xnew: name of transformed x column\n :param str ynew:\n :param float angle_degrees: rotation in degrees, anti-clockwise\n :return:"} {"_id": "q_2012", "text": "Convert spherical to cartesian coordinates.\n\n\n\n :param alpha:\n :param delta: polar angle, ranging from -90 (south pole) to 90 (north pole)\n :param distance: radial distance, determines the units of x, y and z\n :param xname:\n :param yname:\n :param zname:\n :param propagate_uncertainties: {propagate_uncertainties}\n :param center:\n :param center_name:\n :param radians:\n :return:"} {"_id": "q_2013", "text": "Convert cartesian to spherical coordinates.\n\n\n\n :param x:\n :param y:\n :param z:\n :param 
alpha:\n :param delta: name for polar angle, ranges from -90 to 90 (or -pi/2 to pi/2 when radians is True).\n :param distance:\n :param radians:\n :param center:\n :param center_name:\n :return:"} {"_id": "q_2014", "text": "Add a virtual column to the DataFrame.\n\n Example:\n\n >>> df.add_virtual_column(\"r\", \"sqrt(x**2 + y**2 + z**2)\")\n >>> df.select(\"r < 10\")\n\n :param str name: name of virtual column\n :param expression: expression for the column\n :param str unique: if name is already used, make it unique by adding a postfix, e.g. _1, or _2"} {"_id": "q_2015", "text": "Deletes a virtual column from a DataFrame."} {"_id": "q_2016", "text": "Add a variable to a DataFrame.\n\n A variable may refer to other variables, and virtual columns and expressions may refer to variables.\n\n Example\n\n >>> df.add_variable('center', 0)\n >>> df.add_virtual_column('x_prime', 'x-center')\n >>> df.select('x_prime < 0')\n\n :param str name: name of virtual variable\n :param expression: expression for the variable"} {"_id": "q_2017", "text": "Display the first and last n elements of a DataFrame."} {"_id": "q_2018", "text": "Give a description of the DataFrame.\n\n >>> import vaex\n >>> df = vaex.example()[['x', 'y', 'z']]\n >>> df.describe()\n x y z\n dtype float64 float64 float64\n count 330000 330000 330000\n missing 0 0 0\n mean -0.0671315 -0.0535899 0.0169582\n std 7.31746 7.78605 5.05521\n min -128.294 -71.5524 -44.3342\n max 271.366 146.466 50.7185\n >>> df.describe(selection=df.x > 0)\n x y z\n dtype float64 float64 float64\n count 164060 164060 164060\n missing 165940 165940 165940\n mean 5.13572 -0.486786 -0.0868073\n std 5.18701 7.61621 5.02831\n min 1.51635e-05 -71.5524 -44.3342\n max 271.366 78.0724 40.2191\n\n :param bool strings: Describe string columns or not\n :param bool virtual: Describe virtual columns or not\n :param selection: Optional selection to use.\n :return: Pandas dataframe"} {"_id": "q_2019", "text": "Set the current row, and emit the signal 
signal_pick."} {"_id": "q_2020", "text": "Return a DataFrame, where all columns are 'trimmed' by the active range.\n\n For the returned DataFrame, df.get_active_range() returns (0, df.length_original()).\n\n {note_copy}\n\n :param inplace: {inplace}\n :rtype: DataFrame"} {"_id": "q_2021", "text": "Returns a DataFrame containing only rows indexed by indices\n\n {note_copy}\n\n Example:\n\n >>> import vaex, numpy as np\n >>> df = vaex.from_arrays(s=np.array(['a', 'b', 'c', 'd']), x=np.arange(1,5))\n >>> df.take([0,2])\n # s x\n 0 a 1\n 1 c 3\n\n :param indices: sequence (list or numpy array) with row numbers\n :return: DataFrame which is a shallow copy of the original data.\n :rtype: DataFrame"} {"_id": "q_2022", "text": "Return a DataFrame containing only the filtered rows.\n\n {note_copy}\n\n The resulting DataFrame may be more efficient to work with when the original DataFrame is\n heavily filtered (contains just a small number of rows).\n\n If no filtering is applied, it returns a trimmed view.\n For the returned df, len(df) == df.length_original() == df.length_unfiltered()\n\n :rtype: DataFrame"} {"_id": "q_2023", "text": "Returns a list containing random portions of the DataFrame.\n\n {note_copy}\n\n Example:\n\n >>> import vaex, numpy as np\n >>> np.random.seed(111)\n >>> df = vaex.from_arrays(x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n >>> for dfs in df.split_random(frac=0.3, random_state=42):\n ... print(dfs.x.values)\n ...\n [8 1 5]\n [0 7 2 9 4 3 6]\n >>> for dfs in df.split_random(frac=[0.2, 0.3, 0.5], random_state=42):\n ... print(dfs.x.values)\n [8 1]\n [5 0 7]\n [2 9 4 3 6]\n\n :param int/list frac: If int will split the DataFrame in two portions, the first of which will have size as specified by this parameter. 
If list, the generator will generate as many portions as elements in the list, where each element defines the relative fraction of that portion.\n :param int random_state: (default, None) Random number seed for reproducibility.\n :return: A list of DataFrames.\n :rtype: list"} {"_id": "q_2024", "text": "Returns a list containing ordered subsets of the DataFrame.\n\n {note_copy}\n\n Example:\n\n >>> import vaex\n >>> df = vaex.from_arrays(x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n >>> for dfs in df.split(frac=0.3):\n ... print(dfs.x.values)\n ...\n [0 1 2]\n [3 4 5 6 7 8 9]\n >>> for dfs in df.split(frac=[0.2, 0.3, 0.5]):\n ... print(dfs.x.values)\n [0 1]\n [2 3 4]\n [5 6 7 8 9]\n\n :param int/list frac: If int will split the DataFrame in two portions, the first of which will have size as specified by this parameter. If list, the generator will generate as many portions as elements in the list, where each element defines the relative fraction of that portion.\n :return: A list of DataFrames.\n :rtype: list"} {"_id": "q_2025", "text": "Undo selection, for the name."} {"_id": "q_2026", "text": "Can selection name be redone?"} {"_id": "q_2027", "text": "Perform a selection, defined by the boolean expression, and combined with the previous selection using the given mode.\n\n Selections are recorded in a history tree, per name, undo/redo can be done for them separately.\n\n :param str boolean_expression: Any valid column expression, with comparison operators\n :param str mode: Possible boolean operator: replace/and/or/xor/subtract\n :param str name: history tree or selection 'slot' to use\n :param executor:\n :return:"} {"_id": "q_2028", "text": "Create a selection that selects rows having non-missing values for all columns in column_names.\n\n The name reflects Pandas'; no rows are really dropped, but a mask is kept to keep track of the selection\n\n :param drop_nan: drop rows when there is a NaN in any of the columns (will only affect float values)\n :param drop_masked: 
drop rows when there is a masked value in any of the columns\n :param column_names: The columns to consider, default: all (real, non-virtual) columns\n :param str mode: Possible boolean operator: replace/and/or/xor/subtract\n :param str name: history tree or selection 'slot' to use\n :return:"} {"_id": "q_2029", "text": "Create a shallow copy of a DataFrame, with filtering set using select_non_missing.\n\n :param drop_nan: drop rows when there is a NaN in any of the columns (will only affect float values)\n :param drop_masked: drop rows when there is a masked value in any of the columns\n :param column_names: The columns to consider, default: all (real, non-virtual) columns\n :rtype: DataFrame"} {"_id": "q_2030", "text": "Select a 2d rectangular box in the space given by x and y, bounded by limits.\n\n Example:\n\n >>> df.select_box('x', 'y', [(0, 10), (0, 1)])\n\n :param x: expression for the x space\n :param y: expression for the y space\n :param limits: sequence of shape [(x1, x2), (y1, y2)]\n :param mode:"} {"_id": "q_2031", "text": "Select an n-dimensional rectangular box bounded by limits.\n\n The following examples are equivalent:\n\n >>> df.select_box(['x', 'y'], [(0, 10), (0, 1)])\n >>> df.select_rectangle('x', 'y', [(0, 10), (0, 1)])\n\n :param spaces: list of expressions\n :param limits: sequence of shape [(x1, x2), (y1, y2)]\n :param mode:\n :param name:\n :return:"} {"_id": "q_2032", "text": "Select an elliptical region centred on xc, yc, with a certain width, height\n and angle.\n\n Example:\n\n >>> df.select_ellipse('x','y', 2, -1, 5,1, 30, name='my_ellipse')\n\n :param x: expression for the x space\n :param y: expression for the y space\n :param xc: location of the centre of the ellipse in x\n :param yc: location of the centre of the ellipse in y\n :param width: the width of the ellipse (diameter)\n :param height: the height of the ellipse (diameter)\n :param angle: (degrees) orientation of the ellipse, counter-clockwise\n measured from the y axis\n 
:param name: name of the selection\n :param mode:\n :return:"} {"_id": "q_2033", "text": "Invert the selection, i.e. what is selected will not be, and vice versa\n\n :param str name:\n :param executor:\n :return:"} {"_id": "q_2034", "text": "Sets the selection object\n\n :param selection: Selection object\n :param name: selection 'slot'\n :param executor:\n :return:"} {"_id": "q_2035", "text": "Finds a non-colliding name by optional postfixing"} {"_id": "q_2036", "text": "Return a graphviz.Digraph object with a graph of all virtual columns"} {"_id": "q_2037", "text": "Mark column as categorical, with given labels, assuming zero indexing"} {"_id": "q_2038", "text": "Gives direct access to the data as numpy arrays.\n\n Convenient when working with IPython in combination with small DataFrames, since this gives tab-completion.\n Only real (i.e. non-virtual) columns can be accessed; for getting the data from virtual columns, use\n DataFrame.evaluate(...).\n\n Columns can be accessed by their names, which are attributes. 
The attributes are of type numpy.ndarray.\n\n Example:\n\n >>> df = vaex.example()\n >>> r = np.sqrt(df.data.x**2 + df.data.y**2)"} {"_id": "q_2039", "text": "Get the length of the DataFrames, for the selection of the whole DataFrame.\n\n If selection is False, it returns len(df).\n\n TODO: Implement this in DataFrameRemote, and move the method up in :func:`DataFrame.length`\n\n :param selection: When True, will return the number of selected rows\n :return:"} {"_id": "q_2040", "text": "Join the columns of the other DataFrame to this one, assuming the ordering is the same"} {"_id": "q_2041", "text": "Concatenates two DataFrames, adding the rows of the other DataFrame to the current, returned in a new DataFrame.\n\n No copy of the data is made.\n\n :param other: The other DataFrame that is concatenated with this DataFrame\n :return: New DataFrame with the rows concatenated\n :rtype: DataFrameConcatenated"} {"_id": "q_2042", "text": "Exports the DataFrame to a vaex hdf5 file\n\n :param DataFrameLocal df: DataFrame to export\n :param str path: path for file\n :param list[str] column_names: list of column names to export or None for all columns\n :param str byteorder: = for native, < for little endian and > for big endian\n :param bool shuffle: export rows in random order\n :param bool selection: export selection or not\n :param progress: progress callback that gets a progress fraction as argument and should return True to continue,\n or a default progress bar when progress=True\n :param bool virtual: When True, export virtual columns\n :param str sort: expression used for sorting the output\n :param bool ascending: sort ascending (True) or descending\n :return:"} {"_id": "q_2043", "text": "Add a column to the DataFrame\n\n :param str name: name of column\n :param data: numpy array with the data"} {"_id": "q_2044", "text": "Adds method f to the DataFrame class"} {"_id": "q_2045", "text": "Returns an array where missing values are replaced by value.\n\n If the dtype 
is object, nan values and 'nan' string values\n are replaced by value when fill_nan==True."} {"_id": "q_2046", "text": "Obtain the day of the week with Monday=0 and Sunday=6\n\n :returns: an expression containing the day of week.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.dayofweek\n Expression = dt_dayofweek(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 0\n 1 3\n 2 3"} {"_id": "q_2047", "text": "The ordinal day of the year.\n\n :returns: an expression containing the ordinal day of the year.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.dayofyear\n Expression = dt_dayofyear(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 285\n 1 42\n 2 316"} {"_id": "q_2048", "text": "Extracts the month out of a datetime sample.\n\n :returns: an expression containing the month extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.month\n Expression = dt_month(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 10\n 1 2\n 2 11"} {"_id": "q_2049", "text": "Returns the month names of a datetime sample in English.\n\n :returns: an expression 
containing the month names extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.month_name\n Expression = dt_month_name(date)\n Length: 3 dtype: str (expression)\n ---------------------------------\n 0 October\n 1 February\n 2 November"} {"_id": "q_2050", "text": "Returns the day names of a datetime sample in English.\n\n :returns: an expression containing the day names extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.day_name\n Expression = dt_day_name(date)\n Length: 3 dtype: str (expression)\n ---------------------------------\n 0 Monday\n 1 Thursday\n 2 Thursday"} {"_id": "q_2051", "text": "Returns the week ordinal of the year.\n\n :returns: an expression containing the week ordinal of the year, extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.weekofyear\n Expression = dt_weekofyear(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 42\n 1 6\n 2 46"} {"_id": "q_2052", "text": "Extracts the hour out of datetime samples.\n\n :returns: an expression containing the hour extracted from a datetime column.\n\n Example:\n\n >>> import 
vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.hour\n Expression = dt_hour(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 3\n 1 10\n 2 11"} {"_id": "q_2053", "text": "Extracts the minute out of datetime samples.\n\n :returns: an expression containing the minute extracted from a datetime column.\n\n Example:\n\n >>> import vaex\n >>> import numpy as np\n >>> date = np.array(['2009-10-12T03:31:00', '2016-02-11T10:17:34', '2015-11-12T11:34:22'], dtype=np.datetime64)\n >>> df = vaex.from_arrays(date=date)\n >>> df\n # date\n 0 2009-10-12 03:31:00\n 1 2016-02-11 10:17:34\n 2 2015-11-12 11:34:22\n\n >>> df.date.dt.minute\n Expression = dt_minute(date)\n Length: 3 dtype: int64 (expression)\n -----------------------------------\n 0 31\n 1 17\n 2 34"} {"_id": "q_2054", "text": "Capitalize the first letter of a string sample.\n\n :returns: an expression containing the capitalized strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.capitalize()\n Expression = str_capitalize(text)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 Something\n 1 Very pretty\n 2 Is coming\n 3 Our\n 4 Way."} {"_id": "q_2055", "text": "Concatenate two string columns on a row-by-row basis.\n\n :param expression other: The expression of the other column to be concatenated.\n :returns: an expression containing the concatenated columns.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 
Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.cat(df.text)\n Expression = str_cat(text, text)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 SomethingSomething\n 1 very prettyvery pretty\n 2 is comingis coming\n 3 ourour\n 4 way.way."} {"_id": "q_2056", "text": "Check if a string pattern or regex is contained within a sample of a string column.\n\n :param str pattern: A string or regex pattern\n :param bool regex: If True,\n :returns: an expression which is evaluated to True if the pattern is found in a given sample, and it is False otherwise.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.contains('very')\n Expression = str_contains(text, 'very')\n Length: 5 dtype: bool (expression)\n ----------------------------------\n 0 False\n 1 True\n 2 False\n 3 False\n 4 False"} {"_id": "q_2057", "text": "Count the occurrences of a pattern in each sample of a string column.\n\n :param str pat: A string or regex pattern\n :param bool regex: If True,\n :returns: an expression containing the number of times a pattern is found in each sample.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.count(pat=\"et\", regex=False)\n Expression = str_count(text, pat='et', regex=False)\n Length: 5 dtype: int64 (expression)\n -----------------------------------\n 0 1\n 1 1\n 2 0\n 3 0\n 4 0"} {"_id": "q_2058", "text": "Returns the lowest indices in each string in a column, where the provided substring is fully contained within a\n sample. 
If the substring is not found, -1 is returned.\n\n :param str sub: A substring to be found in the samples\n :param int start:\n :param int end:\n :returns: an expression containing the lowest indices specifying the start of the substring.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.find(sub=\"et\")\n Expression = str_find(text, sub='et')\n Length: 5 dtype: int64 (expression)\n -----------------------------------\n 0 3\n 1 7\n 2 -1\n 3 -1\n 4 -1"} {"_id": "q_2059", "text": "Converts string samples to lower case.\n\n :returns: an expression containing the converted strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.lower()\n Expression = str_lower(text)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way."} {"_id": "q_2060", "text": "Remove leading characters from a string sample.\n\n :param str to_strip: The string to be removed\n :returns: an expression containing the modified string column.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.lstrip(to_strip='very ')\n Expression = str_lstrip(text, to_strip='very ')\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 Something\n 1 pretty\n 2 is coming\n 3 our\n 4 way."} {"_id": "q_2061", "text": "Pad strings in a given column.\n\n :param int width: The total width of the string\n :param str side: If 'left' then pad on the left, if 'right' 
then pad on the right side of the string.\n :param str fillchar: The character used for padding.\n :returns: an expression containing the padded strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.pad(width=10, side='left', fillchar='!')\n Expression = str_pad(text, width=10, side='left', fillchar='!')\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 !Something\n 1 very pretty\n 2 !is coming\n 3 !!!!!!!our\n 4 !!!!!!way."} {"_id": "q_2062", "text": "Duplicate each string in a column.\n\n :param int repeats: number of times each string sample is to be duplicated.\n :returns: an expression containing the duplicated strings\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.repeat(3)\n Expression = str_repeat(text, 3)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 SomethingSomethingSomething\n 1 very prettyvery prettyvery pretty\n 2 is comingis comingis coming\n 3 ourourour\n 4 way.way.way."} {"_id": "q_2063", "text": "Returns the highest indices in each string in a column, where the provided substring is fully contained within a\n sample. 
If the substring is not found, -1 is returned.\n\n :param str sub: A substring to be found in the samples\n :param int start:\n :param int end:\n :returns: an expression containing the highest indices specifying the start of the substring.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.rfind(sub=\"et\")\n Expression = str_rfind(text, sub='et')\n Length: 5 dtype: int64 (expression)\n -----------------------------------\n 0 3\n 1 7\n 2 -1\n 3 -1\n 4 -1"} {"_id": "q_2064", "text": "Returns the highest indices in each string in a column, where the provided substring is fully contained within a\n sample. If the substring is not found, -1 is returned. Same as `str.rfind`.\n\n :param str sub: A substring to be found in the samples\n :param int start:\n :param int end:\n :returns: an expression containing the highest indices specifying the start of the substring.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.rindex(sub=\"et\")\n Expression = str_rindex(text, sub='et')\n Length: 5 dtype: int64 (expression)\n -----------------------------------\n 0 3\n 1 7\n 2 -1\n 3 -1\n 4 -1"} {"_id": "q_2065", "text": "Fills the left side of string samples with a specified character such that the strings are right-hand justified.\n\n :param int width: The minimal width of the strings.\n :param str fillchar: The character used for filling.\n :returns: an expression containing the filled strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 
our\n 4 way.\n\n >>> df.text.str.rjust(width=10, fillchar='!')\n Expression = str_rjust(text, width=10, fillchar='!')\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 !Something\n 1 very pretty\n 2 !is coming\n 3 !!!!!!!our\n 4 !!!!!!way."} {"_id": "q_2066", "text": "Remove trailing characters from a string sample.\n\n :param str to_strip: The string to be removed\n :returns: an expression containing the modified string column.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.rstrip(to_strip='ing')\n Expression = str_rstrip(text, to_strip='ing')\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 Someth\n 1 very pretty\n 2 is com\n 3 our\n 4 way."} {"_id": "q_2067", "text": "Slice substrings from each string element in a column.\n\n :param int start: The start position for the slice operation.\n :param int end: The stop position for the slice operation.\n :returns: an expression containing the sliced substrings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.slice(start=2, stop=5)\n Expression = str_pandas_slice(text, start=2, stop=5)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 met\n 1 ry\n 2 co\n 3 r\n 4 y."} {"_id": "q_2068", "text": "Removes leading and trailing characters.\n\n Strips whitespaces (including new lines), or a set of specified\n characters from each string sample in a column, both from the left and\n right sides.\n\n :param str to_strip: The characters to be removed. 
All combinations of the characters will be removed.\n If None, it removes whitespaces.\n :returns: an expression containing the modified string samples.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.strip(to_strip='very')\n Expression = str_strip(text, to_strip='very')\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 Something\n 1 prett\n 2 is coming\n 3 ou\n 4 way."} {"_id": "q_2069", "text": "Converts all string samples to titlecase.\n\n :returns: an expression containing the converted strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n >>> df.text.str.title()\n Expression = str_title(text)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 Something\n 1 Very Pretty\n 2 Is Coming\n 3 Our\n 4 Way."} {"_id": "q_2070", "text": "Converts all strings in a column to uppercase.\n\n :returns: an expression containing the converted strings.\n\n Example:\n\n >>> import vaex\n >>> text = ['Something', 'very pretty', 'is coming', 'our', 'way.']\n >>> df = vaex.from_arrays(text=text)\n >>> df\n # text\n 0 Something\n 1 very pretty\n 2 is coming\n 3 our\n 4 way.\n\n\n >>> df.text.str.upper()\n Expression = str_upper(text)\n Length: 5 dtype: str (expression)\n ---------------------------------\n 0 SOMETHING\n 1 VERY PRETTY\n 2 IS COMING\n 3 OUR\n 4 WAY."} {"_id": "q_2071", "text": "Writes a comment to the file in Java properties format.\n\n Newlines in the comment text are automatically turned into a continuation\n of the comment by adding a \"#\" to the beginning of each line.\n\n :param fh: a writable file-like object\n :param comment: comment 
string to write"} {"_id": "q_2072", "text": "Incrementally read properties from a Java .properties file.\n\n Yields tuples of key/value pairs.\n\n If ``comments`` is `True`, comments will be included with ``jprops.COMMENT``\n in place of the key.\n\n :param fh: a readable file-like object\n :param comments: should include comments (default: False)"} {"_id": "q_2073", "text": "Return the version information for all librosa dependencies."} {"_id": "q_2074", "text": "Handle renamed arguments.\n\n Parameters\n ----------\n old_name : str\n old_value\n The name and value of the old argument\n\n new_name : str\n new_value\n The name and value of the new argument\n\n version_deprecated : str\n The version at which the old name became deprecated\n\n version_removed : str\n The version at which the old name will be removed\n\n Returns\n -------\n value\n - `new_value` if `old_value` is of type `Deprecated`\n - `old_value` otherwise\n\n Warnings\n --------\n if `old_value` is not of type `Deprecated`"} {"_id": "q_2075", "text": "Set the FFT library used by librosa.\n\n Parameters\n ----------\n lib : None or module\n Must implement an interface compatible with `numpy.fft`.\n If `None`, reverts to `numpy.fft`.\n\n Examples\n --------\n Use `pyfftw`:\n\n >>> import pyfftw\n >>> librosa.set_fftlib(pyfftw.interfaces.numpy_fft)\n\n Reset to default `numpy` implementation\n\n >>> librosa.set_fftlib()"} {"_id": "q_2076", "text": "Beat tracking function\n\n :parameters:\n - input_file : str\n Path to input audio file (wav, mp3, m4a, flac, etc.)\n\n - output_file : str\n Path to save beat event timestamps as a CSV file"} {"_id": "q_2077", "text": "Load audio, estimate tuning, apply pitch correction, and save."} {"_id": "q_2078", "text": "Convert one or more MIDI numbers to note strings.\n\n MIDI numbers will be rounded to the nearest integer.\n\n Notes will be of the format 'C0', 'C#0', 'D0', ...\n\n Examples\n --------\n >>> librosa.midi_to_note(0)\n 'C-1'\n >>> 
librosa.midi_to_note(37)\n 'C#2'\n >>> librosa.midi_to_note(-2)\n 'A#-2'\n >>> librosa.midi_to_note(104.7)\n 'A7'\n >>> librosa.midi_to_note(104.7, cents=True)\n 'A7-30'\n >>> librosa.midi_to_note(list(range(12, 24)))\n ['C0', 'C#0', 'D0', 'D#0', 'E0', 'F0', 'F#0', 'G0', 'G#0', 'A0', 'A#0', 'B0']\n\n Parameters\n ----------\n midi : int or iterable of int\n Midi numbers to convert.\n\n octave: bool\n If True, include the octave number\n\n cents: bool\n If true, cent markers will be appended for fractional notes.\n Eg, `midi_to_note(69.3, cents=True)` == `A4+03`\n\n Returns\n -------\n notes : str or iterable of str\n Strings describing each midi note.\n\n Raises\n ------\n ParameterError\n if `cents` is True and `octave` is False\n\n See Also\n --------\n midi_to_hz\n note_to_midi\n hz_to_note"} {"_id": "q_2079", "text": "Convert Hz to Mels\n\n Examples\n --------\n >>> librosa.hz_to_mel(60)\n 0.9\n >>> librosa.hz_to_mel([110, 220, 440])\n array([ 1.65, 3.3 , 6.6 ])\n\n Parameters\n ----------\n frequencies : number or np.ndarray [shape=(n,)] , float\n scalar or array of frequencies\n htk : bool\n use HTK formula instead of Slaney\n\n Returns\n -------\n mels : number or np.ndarray [shape=(n,)]\n input frequencies in Mels\n\n See Also\n --------\n mel_to_hz"} {"_id": "q_2080", "text": "Convert mel bin numbers to frequencies\n\n Examples\n --------\n >>> librosa.mel_to_hz(3)\n 200.\n\n >>> librosa.mel_to_hz([1,2,3,4,5])\n array([ 66.667, 133.333, 200. 
, 266.667, 333.333])\n\n Parameters\n ----------\n mels : np.ndarray [shape=(n,)], float\n mel bins to convert\n htk : bool\n use HTK formula instead of Slaney\n\n Returns\n -------\n frequencies : np.ndarray [shape=(n,)]\n input mels in Hz\n\n See Also\n --------\n hz_to_mel"} {"_id": "q_2081", "text": "Alternative implementation of `np.fft.fftfreq`\n\n Parameters\n ----------\n sr : number > 0 [scalar]\n Audio sampling rate\n\n n_fft : int > 0 [scalar]\n FFT window size\n\n\n Returns\n -------\n freqs : np.ndarray [shape=(1 + n_fft/2,)]\n Frequencies `(0, sr/n_fft, 2*sr/n_fft, ..., sr/2)`\n\n\n Examples\n --------\n >>> librosa.fft_frequencies(sr=22050, n_fft=16)\n array([ 0. , 1378.125, 2756.25 , 4134.375,\n 5512.5 , 6890.625, 8268.75 , 9646.875, 11025. ])"} {"_id": "q_2082", "text": "Compute the A-weighting of a set of frequencies.\n\n Parameters\n ----------\n frequencies : scalar or np.ndarray [shape=(n,)]\n One or more frequencies (in Hz)\n\n min_db : float [scalar] or None\n Clip weights below this threshold.\n If `None`, no clipping is performed.\n\n Returns\n -------\n A_weighting : scalar or np.ndarray [shape=(n,)]\n `A_weighting[i]` is the A-weighting of `frequencies[i]`\n\n See Also\n --------\n perceptual_weighting\n\n\n Examples\n --------\n\n Get the A-weighting for CQT frequencies\n\n >>> import matplotlib.pyplot as plt\n >>> freqs = librosa.cqt_frequencies(108, librosa.note_to_hz('C1'))\n >>> aw = librosa.A_weighting(freqs)\n >>> plt.plot(freqs, aw)\n >>> plt.xlabel('Frequency (Hz)')\n >>> plt.ylabel('Weighting (log10)')\n >>> plt.title('A-Weighting of CQT frequencies')"} {"_id": "q_2083", "text": "Return an array of time values to match the time axis from a feature matrix.\n\n Parameters\n ----------\n X : np.ndarray or scalar\n - If ndarray, X is a feature matrix, e.g. 
STFT, chromagram, or mel spectrogram.\n - If scalar, X represents the number of frames.\n\n sr : number > 0 [scalar]\n audio sampling rate\n\n hop_length : int > 0 [scalar]\n number of samples between successive frames\n\n n_fft : None or int > 0 [scalar]\n Optional: length of the FFT window.\n If given, time conversion will include an offset of `n_fft / 2`\n to counteract windowing effects when using a non-centered STFT.\n\n axis : int [scalar]\n The axis representing the time axis of X.\n By default, the last axis (-1) is taken.\n\n Returns\n -------\n times : np.ndarray [shape=(n,)]\n ndarray of times (in seconds) corresponding to each frame of X.\n\n See Also\n --------\n samples_like : Return an array of sample indices to match the time axis from a feature matrix.\n\n Examples\n --------\n Provide a feature matrix input:\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> X = librosa.stft(y)\n >>> times = librosa.times_like(X)\n >>> times\n array([ 0.00000000e+00, 2.32199546e-02, 4.64399093e-02, ...,\n 6.13935601e+01, 6.14167800e+01, 6.14400000e+01])\n\n Provide a scalar input:\n\n >>> n_frames = 2647\n >>> times = librosa.times_like(n_frames)\n >>> times\n array([ 0.00000000e+00, 2.32199546e-02, 4.64399093e-02, ...,\n 6.13935601e+01, 6.14167800e+01, 6.14400000e+01])"} {"_id": "q_2084", "text": "Return an array of sample indices to match the time axis from a feature matrix.\n\n Parameters\n ----------\n X : np.ndarray or scalar\n - If ndarray, X is a feature matrix, e.g. 
STFT, chromagram, or mel spectrogram.\n - If scalar, X represents the number of frames.\n\n hop_length : int > 0 [scalar]\n number of samples between successive frames\n\n n_fft : None or int > 0 [scalar]\n Optional: length of the FFT window.\n If given, time conversion will include an offset of `n_fft / 2`\n to counteract windowing effects when using a non-centered STFT.\n\n axis : int [scalar]\n The axis representing the time axis of X.\n By default, the last axis (-1) is taken.\n\n Returns\n -------\n samples : np.ndarray [shape=(n,)]\n ndarray of sample indices corresponding to each frame of X.\n\n See Also\n --------\n times_like : Return an array of time values to match the time axis from a feature matrix.\n\n Examples\n --------\n Provide a feature matrix input:\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> X = librosa.stft(y)\n >>> samples = librosa.samples_like(X)\n >>> samples\n array([ 0, 512, 1024, ..., 1353728, 1354240, 1354752])\n\n Provide a scalar input:\n\n >>> n_frames = 2647\n >>> samples = librosa.samples_like(n_frames)\n >>> samples\n array([ 0, 512, 1024, ..., 1353728, 1354240, 1354752])"} {"_id": "q_2085", "text": "Compute the hybrid constant-Q transform of an audio signal.\n\n Here, the hybrid CQT uses the pseudo CQT for higher frequencies where\n the hop_length is longer than half the filter length and the full CQT\n for lower frequencies.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n hop_length : int > 0 [scalar]\n number of samples between successive CQT columns.\n\n fmin : float > 0 [scalar]\n Minimum frequency. 
Defaults to C1 ~= 32.70 Hz\n\n n_bins : int > 0 [scalar]\n Number of frequency bins, starting at `fmin`\n\n bins_per_octave : int > 0 [scalar]\n Number of bins per octave\n\n tuning : None or float in `[-0.5, 0.5)`\n Tuning offset in fractions of a bin (cents).\n\n If `None`, tuning will be automatically estimated from the signal.\n\n filter_scale : float > 0\n Filter filter_scale factor. Larger values use longer windows.\n\n sparsity : float in [0, 1)\n Sparsify the CQT basis by discarding up to `sparsity`\n fraction of the energy in each basis.\n\n Set `sparsity=0` to disable sparsification.\n\n window : str, tuple, number, or function\n Window specification for the basis filters.\n See `filters.get_window` for details.\n\n pad_mode : string\n Padding mode for centered frame analysis.\n\n See also: `librosa.core.stft` and `np.pad`.\n\n res_type : string\n Resampling mode. See `librosa.core.cqt` for details.\n\n Returns\n -------\n CQT : np.ndarray [shape=(n_bins, t), dtype=np.float]\n Constant-Q energy for each frequency at each time.\n\n Raises\n ------\n ParameterError\n If `hop_length` is not an integer multiple of\n `2**(n_bins / bins_per_octave)`\n\n Or if `y` is too short to support the frequency range of the CQT.\n\n See Also\n --------\n cqt\n pseudo_cqt\n\n Notes\n -----\n This function caches at level 20."} {"_id": "q_2086", "text": "Compute the pseudo constant-Q transform of an audio signal.\n\n This uses a single fft size that is the smallest power of 2 that is greater\n than or equal to the max of:\n\n 1. The longest CQT filter\n 2. 2x the hop_length\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n hop_length : int > 0 [scalar]\n number of samples between successive CQT columns.\n\n fmin : float > 0 [scalar]\n Minimum frequency. 
Defaults to C1 ~= 32.70 Hz\n\n n_bins : int > 0 [scalar]\n Number of frequency bins, starting at `fmin`\n\n bins_per_octave : int > 0 [scalar]\n Number of bins per octave\n\n tuning : None or float in `[-0.5, 0.5)`\n Tuning offset in fractions of a bin (cents).\n\n If `None`, tuning will be automatically estimated from the signal.\n\n filter_scale : float > 0\n Filter filter_scale factor. Larger values use longer windows.\n\n sparsity : float in [0, 1)\n Sparsify the CQT basis by discarding up to `sparsity`\n fraction of the energy in each basis.\n\n Set `sparsity=0` to disable sparsification.\n\n window : str, tuple, number, or function\n Window specification for the basis filters.\n See `filters.get_window` for details.\n\n pad_mode : string\n Padding mode for centered frame analysis.\n\n See also: `librosa.core.stft` and `np.pad`.\n\n Returns\n -------\n CQT : np.ndarray [shape=(n_bins, t), dtype=np.float]\n Pseudo Constant-Q energy for each frequency at each time.\n\n Raises\n ------\n ParameterError\n If `hop_length` is not an integer multiple of\n `2**(n_bins / bins_per_octave)`\n\n Or if `y` is too short to support the frequency range of the CQT.\n\n Notes\n -----\n This function caches at level 20."} {"_id": "q_2087", "text": "Generate the frequency domain constant-Q filter basis."} {"_id": "q_2088", "text": "Helper function to trim and stack a collection of CQT responses"} {"_id": "q_2089", "text": "Compute the filter response with a target STFT hop."} {"_id": "q_2090", "text": "Compute the number of early downsampling operations"} {"_id": "q_2091", "text": "Perform early downsampling on an audio signal, if it applies."} {"_id": "q_2092", "text": "Calculate the accumulated cost matrix D.\n\n Use dynamic programming to calculate the accumulated costs.\n\n Parameters\n ----------\n C : np.ndarray [shape=(N, M)]\n pre-computed cost matrix\n\n D : np.ndarray [shape=(N, M)]\n accumulated cost matrix\n\n D_steps : np.ndarray [shape=(N, M)]\n steps which were 
used for calculating D\n\n step_sizes_sigma : np.ndarray [shape=[n, 2]]\n Specifies allowed step sizes as used by the dtw.\n\n weights_add : np.ndarray [shape=[n, ]]\n Additive weights to penalize certain step sizes.\n\n weights_mul : np.ndarray [shape=[n, ]]\n Multiplicative weights to penalize certain step sizes.\n\n max_0 : int\n maximum number of steps in step_sizes_sigma in dim 0.\n\n max_1 : int\n maximum number of steps in step_sizes_sigma in dim 1.\n\n Returns\n -------\n D : np.ndarray [shape=(N,M)]\n accumulated cost matrix.\n D[N,M] is the total alignment cost.\n When doing subsequence DTW, D[N,:] indicates a matching function.\n\n D_steps : np.ndarray [shape=(N,M)]\n steps which were used for calculating D.\n\n See Also\n --------\n dtw"} {"_id": "q_2093", "text": "Backtrack optimal warping path.\n\n Uses the saved step sizes from the cost accumulation\n step to backtrack the index pairs for an optimal\n warping path.\n\n\n Parameters\n ----------\n D_steps : np.ndarray [shape=(N, M)]\n Saved indices of the used steps used in the calculation of D.\n\n step_sizes_sigma : np.ndarray [shape=[n, 2]]\n Specifies allowed step sizes as used by the dtw.\n\n Returns\n -------\n wp : list [shape=(N,)]\n Warping path with index pairs.\n Each list entry contains an index pair\n (n,m) as a tuple\n\n See Also\n --------\n dtw"} {"_id": "q_2094", "text": "Core Viterbi algorithm.\n\n This is intended for internal use only.\n\n Parameters\n ----------\n log_prob : np.ndarray [shape=(T, m)]\n `log_prob[t, s]` is the conditional log-likelihood\n log P[X = X(t) | State(t) = s]\n\n log_trans : np.ndarray [shape=(m, m)]\n The log transition matrix\n `log_trans[i, j]` = log P[State(t+1) = j | State(t) = i]\n\n log_p_init : np.ndarray [shape=(m,)]\n log of the initial state distribution\n\n state : np.ndarray [shape=(T,), dtype=int]\n Pre-allocated state index array\n\n value : np.ndarray [shape=(T, m)] float\n Pre-allocated value array\n\n ptr : np.ndarray [shape=(T, m), 
dtype=int]\n Pre-allocated pointer array\n\n Returns\n -------\n None\n All computations are performed in-place on `state, value, ptr`."} {"_id": "q_2095", "text": "Construct a self-loop transition matrix over `n_states`.\n\n The transition matrix will have the following properties:\n\n - `transition[i, i] = p` for all i\n - `transition[i, j] = (1 - p) / (n_states - 1)` for all `j != i`\n\n This type of transition matrix is appropriate when states tend to be\n locally stable, and there is no additional structure between different\n states. This is primarily useful for de-noising frame-wise predictions.\n\n Parameters\n ----------\n n_states : int > 1\n The number of states\n\n prob : float in [0, 1] or iterable, length=n_states\n If a scalar, this is the probability of a self-transition.\n\n If a vector of length `n_states`, `p[i]` is the probability of state `i`'s self-transition.\n\n Returns\n -------\n transition : np.ndarray [shape=(n_states, n_states)]\n The transition matrix\n\n Examples\n --------\n >>> librosa.sequence.transition_loop(3, 0.5)\n array([[0.5 , 0.25, 0.25],\n [0.25, 0.5 , 0.25],\n [0.25, 0.25, 0.5 ]])\n\n >>> librosa.sequence.transition_loop(3, [0.8, 0.5, 0.25])\n array([[0.8 , 0.1 , 0.1 ],\n [0.25 , 0.5 , 0.25 ],\n [0.375, 0.375, 0.25 ]])"} {"_id": "q_2096", "text": "Construct a localized transition matrix.\n\n The transition matrix will have the following properties:\n\n - `transition[i, j] = 0` if `|i - j| > width`\n - `transition[i, i]` is maximal\n - `transition[i, i - width//2 : i + width//2]` has shape `window`\n\n This type of transition matrix is appropriate for state spaces\n that discretely approximate continuous variables, such as in fundamental\n frequency estimation.\n\n Parameters\n ----------\n n_states : int > 1\n The number of states\n\n width : int >= 1 or iterable\n The maximum number of states to treat as \"local\".\n If iterable, it should have length equal to `n_states`,\n and specify the width independently for each 
state.\n\n window : str, callable, or window specification\n The window function to determine the shape of the \"local\" distribution.\n\n Any window specification supported by `filters.get_window` will work here.\n\n .. note:: Certain windows (e.g., 'hann') are identically 0 at the boundaries,\n and so effectively have `width-2` non-zero values. You may have to expand\n `width` to get the desired behavior.\n\n\n wrap : bool\n If `True`, then state locality `|i - j|` is computed modulo `n_states`.\n If `False` (default), then locality is absolute.\n\n See Also\n --------\n filters.get_window\n\n Returns\n -------\n transition : np.ndarray [shape=(n_states, n_states)]\n The transition matrix\n\n Examples\n --------\n\n Triangular distributions with and without wrapping\n\n >>> librosa.sequence.transition_local(5, 3, window='triangle', wrap=False)\n array([[0.667, 0.333, 0. , 0. , 0. ],\n [0.25 , 0.5 , 0.25 , 0. , 0. ],\n [0. , 0.25 , 0.5 , 0.25 , 0. ],\n [0. , 0. , 0.25 , 0.5 , 0.25 ],\n [0. , 0. , 0. , 0.333, 0.667]])\n\n >>> librosa.sequence.transition_local(5, 3, window='triangle', wrap=True)\n array([[0.5 , 0.25, 0. , 0. , 0.25],\n [0.25, 0.5 , 0.25, 0. , 0. ],\n [0. , 0.25, 0.5 , 0.25, 0. ],\n [0. , 0. , 0.25, 0.5 , 0.25],\n [0.25, 0. , 0. , 0.25, 0.5 ]])\n\n Uniform local distributions with variable widths and no wrapping\n\n >>> librosa.sequence.transition_local(5, [1, 2, 3, 3, 1], window='ones', wrap=False)\n array([[1. , 0. , 0. , 0. , 0. ],\n [0.5 , 0.5 , 0. , 0. , 0. ],\n [0. , 0.333, 0.333, 0.333, 0. ],\n [0. , 0. , 0.333, 0.333, 0.333],\n [0. , 0. , 0. , 0. , 1. ]])"} {"_id": "q_2097", "text": "Basic onset detector. Locate note onset events by picking peaks in an\n onset strength envelope.\n\n The `peak_pick` parameters were chosen by large-scale hyper-parameter\n optimization over the dataset provided by [1]_.\n\n .. 
[1] https://github.com/CPJKU/onset_db\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n onset_envelope : np.ndarray [shape=(m,)]\n (optional) pre-computed onset strength envelope\n\n hop_length : int > 0 [scalar]\n hop length (in samples)\n\n units : {'frames', 'samples', 'time'}\n The units to encode detected onset events in.\n By default, 'frames' are used.\n\n backtrack : bool\n If `True`, detected onset events are backtracked to the nearest\n preceding minimum of `energy`.\n\n This is primarily useful when using onsets as slice points for segmentation.\n\n energy : np.ndarray [shape=(m,)] (optional)\n An energy function to use for backtracking detected onset events.\n If none is provided, then `onset_envelope` is used.\n\n kwargs : additional keyword arguments\n Additional parameters for peak picking.\n\n See `librosa.util.peak_pick` for details.\n\n\n Returns\n -------\n\n onsets : np.ndarray [shape=(n_onsets,)]\n estimated positions of detected onsets, in whichever units\n are specified. By default, frame indices.\n\n .. note::\n If no onset strength could be detected, onset_detect returns\n an empty list.\n\n\n Raises\n ------\n ParameterError\n if neither `y` nor `onsets` are provided\n\n or if `units` is not one of 'frames', 'samples', or 'time'\n\n See Also\n --------\n onset_strength : compute onset strength per-frame\n onset_backtrack : backtracking onset events\n librosa.util.peak_pick : pick peaks from a time series\n\n\n Examples\n --------\n Get onset times from a signal\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... 
offset=30, duration=2.0)\n >>> onset_frames = librosa.onset.onset_detect(y=y, sr=sr)\n >>> librosa.frames_to_time(onset_frames, sr=sr)\n array([ 0.07 , 0.395, 0.511, 0.627, 0.766, 0.975,\n 1.207, 1.324, 1.44 , 1.788, 1.881])\n\n Or use a pre-computed onset envelope\n\n >>> o_env = librosa.onset.onset_strength(y, sr=sr)\n >>> times = librosa.frames_to_time(np.arange(len(o_env)), sr=sr)\n >>> onset_frames = librosa.onset.onset_detect(onset_envelope=o_env, sr=sr)\n\n\n >>> import matplotlib.pyplot as plt\n >>> D = np.abs(librosa.stft(y))\n >>> plt.figure()\n >>> ax1 = plt.subplot(2, 1, 1)\n >>> librosa.display.specshow(librosa.amplitude_to_db(D, ref=np.max),\n ... x_axis='time', y_axis='log')\n >>> plt.title('Power spectrogram')\n >>> plt.subplot(2, 1, 2, sharex=ax1)\n >>> plt.plot(times, o_env, label='Onset strength')\n >>> plt.vlines(times[onset_frames], 0, o_env.max(), color='r', alpha=0.9,\n ... linestyle='--', label='Onsets')\n >>> plt.axis('tight')\n >>> plt.legend(frameon=True, framealpha=0.75)"} {"_id": "q_2098", "text": "Compute a spectral flux onset strength envelope across multiple channels.\n\n Onset strength for channel `i` at time `t` is determined by:\n\n `mean_{f in channels[i]} max(0, S[f, t+1] - S[f, t])`\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time-series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n S : np.ndarray [shape=(d, m)]\n pre-computed (log-power) spectrogram\n\n lag : int > 0\n time lag for computing differences\n\n max_size : int > 0\n size (in frequency bins) of the local max filter.\n set to `1` to disable filtering.\n\n ref : None or np.ndarray [shape=(d, m)]\n An optional pre-computed reference spectrum, of the same shape as `S`.\n If not provided, it will be computed from `S`.\n If provided, it will override any local max filtering governed by `max_size`.\n\n detrend : bool [scalar]\n Filter the onset strength to remove the DC component\n\n center : bool [scalar]\n Shift the onset function by 
`n_fft / (2 * hop_length)` frames\n\n feature : function\n Function for computing time-series features, eg, scaled spectrograms.\n By default, uses `librosa.feature.melspectrogram` with `fmax=11025.0`\n\n aggregate : function or False\n Aggregation function to use when combining onsets\n at different frequency bins.\n\n If `False`, then no aggregation is performed.\n\n Default: `np.mean`\n\n channels : list or None\n Array of channel boundaries or slice objects.\n If `None`, then a single channel is generated to span all bands.\n\n kwargs : additional keyword arguments\n Additional parameters to `feature()`, if `S` is not provided.\n\n\n Returns\n -------\n onset_envelope : np.ndarray [shape=(n_channels, m)]\n array containing the onset strength envelope for each specified channel\n\n\n Raises\n ------\n ParameterError\n if neither `(y, sr)` nor `S` are provided\n\n\n See Also\n --------\n onset_strength\n\n Notes\n -----\n This function caches at level 30.\n\n Examples\n --------\n First, load some audio and plot the spectrogram\n\n >>> import matplotlib.pyplot as plt\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... duration=10.0)\n >>> D = np.abs(librosa.stft(y))\n >>> plt.figure()\n >>> plt.subplot(2, 1, 1)\n >>> librosa.display.specshow(librosa.amplitude_to_db(D, ref=np.max),\n ... y_axis='log')\n >>> plt.title('Power spectrogram')\n\n Construct a standard onset function over four sub-bands\n\n >>> onset_subbands = librosa.onset.onset_strength_multi(y=y, sr=sr,\n ... channels=[0, 32, 64, 96, 128])\n >>> plt.subplot(2, 1, 2)\n >>> librosa.display.specshow(onset_subbands, x_axis='time')\n >>> plt.ylabel('Sub-bands')\n >>> plt.title('Sub-band onset strength')"} {"_id": "q_2099", "text": "Save time steps in CSV format. 
This can be used to store the output\n of a beat-tracker or segmentation algorithm.\n\n If only `times` are provided, the file will contain each value\n of `times` on a row::\n\n times[0]\\n\n times[1]\\n\n times[2]\\n\n ...\n\n If `annotations` are also provided, the file will contain\n delimiter-separated values::\n\n times[0],annotations[0]\\n\n times[1],annotations[1]\\n\n times[2],annotations[2]\\n\n ...\n\n\n Parameters\n ----------\n path : string\n path to save the output CSV file\n\n times : list-like of floats\n list of frame numbers for beat events\n\n annotations : None or list-like\n optional annotations for each time step\n\n delimiter : str\n character to separate fields\n\n fmt : str\n format-string for rendering time\n\n Raises\n ------\n ParameterError\n if `annotations` is not `None` and length does not\n match `times`\n\n Examples\n --------\n Write beat-tracker time to CSV\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> tempo, beats = librosa.beat.beat_track(y, sr=sr, units='time')\n >>> librosa.output.times_csv('beat_times.csv', beats)"} {"_id": "q_2100", "text": "Output a time series as a .wav file\n\n Note: only mono or stereo, floating-point data is supported.\n For more advanced and flexible output options, refer to\n `soundfile`.\n\n Parameters\n ----------\n path : str\n path to save the output wav file\n\n y : np.ndarray [shape=(n,) or (2,n), dtype=np.float]\n audio time series (mono or stereo).\n\n Note that only floating-point values are supported.\n\n sr : int > 0 [scalar]\n sampling rate of `y`\n\n norm : boolean [scalar]\n enable amplitude normalization.\n For floating point `y`, scale the data to the range [-1, +1].\n\n Examples\n --------\n Trim a signal to 5 seconds and save it back\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... 
duration=5.0)\n >>> librosa.output.write_wav('file_trim_5s.wav', y, sr)\n\n See Also\n --------\n soundfile.write"} {"_id": "q_2101", "text": "Get a default colormap from the given data.\n\n If the data is boolean, use a black and white colormap.\n\n If the data has both positive and negative values,\n use a diverging colormap.\n\n Otherwise, use a sequential colormap.\n\n Parameters\n ----------\n data : np.ndarray\n Input data\n\n robust : bool\n If True, discard the top and bottom 2% of data when calculating\n range.\n\n cmap_seq : str\n The sequential colormap name\n\n cmap_bool : str\n The boolean colormap name\n\n cmap_div : str\n The diverging colormap name\n\n Returns\n -------\n cmap : matplotlib.colors.Colormap\n The colormap to use for `data`\n\n See Also\n --------\n matplotlib.pyplot.colormaps"} {"_id": "q_2102", "text": "Plot the amplitude envelope of a waveform.\n\n If `y` is monophonic, a filled curve is drawn between `[-abs(y), abs(y)]`.\n\n If `y` is stereo, the curve is drawn between `[-abs(y[1]), abs(y[0])]`,\n so that the left and right channels are drawn above and below the axis,\n respectively.\n\n Long signals (`duration >= max_points`) are down-sampled to at\n most `max_sr` before plotting.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,) or (2,n)]\n audio time series (mono or stereo)\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n max_points : positive number or None\n Maximum number of time-points to plot: if `max_points` exceeds\n the duration of `y`, then `y` is downsampled.\n\n If `None`, no downsampling is performed.\n\n x_axis : str {'time', 'off', 'none'} or None\n If 'time', the x-axis is given time tick-marks.\n\n ax : matplotlib.axes.Axes or None\n Axes to plot on instead of the default `plt.gca()`.\n\n offset : float\n Horizontal offset (in seconds) to start the waveform plot\n\n max_sr : number > 0 [scalar]\n Maximum sampling rate for the visualization\n\n kwargs\n Additional keyword arguments to 
`matplotlib.pyplot.fill_between`\n\n Returns\n -------\n pc : matplotlib.collections.PolyCollection\n The PolyCollection created by `fill_between`.\n\n See also\n --------\n librosa.core.resample\n matplotlib.pyplot.fill_between\n\n\n Examples\n --------\n Plot a monophonic waveform\n\n >>> import matplotlib.pyplot as plt\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=10)\n >>> plt.figure()\n >>> plt.subplot(3, 1, 1)\n >>> librosa.display.waveplot(y, sr=sr)\n >>> plt.title('Monophonic')\n\n Or a stereo waveform\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... mono=False, duration=10)\n >>> plt.subplot(3, 1, 2)\n >>> librosa.display.waveplot(y, sr=sr)\n >>> plt.title('Stereo')\n\n Or harmonic and percussive components with transparency\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=10)\n >>> y_harm, y_perc = librosa.effects.hpss(y)\n >>> plt.subplot(3, 1, 3)\n >>> librosa.display.waveplot(y_harm, sr=sr, alpha=0.25)\n >>> librosa.display.waveplot(y_perc, sr=sr, color='r', alpha=0.5)\n >>> plt.title('Harmonic + Percussive')\n >>> plt.tight_layout()"} {"_id": "q_2103", "text": "Helper to set the current image in pyplot mode.\n\n If the provided `ax` is not `None`, then we assume that the user is using the object API.\n In this case, the pyplot current image is not set."} {"_id": "q_2104", "text": "Compute axis coordinates"} {"_id": "q_2105", "text": "Check if \"axes\" is an instance of an axis object. 
If not, use `gca`."} {"_id": "q_2106", "text": "Get CQT bin frequencies"} {"_id": "q_2107", "text": "Get chroma bin numbers"} {"_id": "q_2108", "text": "Get time coordinates from frames"} {"_id": "q_2109", "text": "Estimate the tuning of an audio time series or spectrogram input.\n\n Parameters\n ----------\n y: np.ndarray [shape=(n,)] or None\n audio signal\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n S: np.ndarray [shape=(d, t)] or None\n magnitude or power spectrogram\n\n n_fft : int > 0 [scalar] or None\n number of FFT bins to use, if `y` is provided.\n\n resolution : float in `(0, 1)`\n Resolution of the tuning as a fraction of a bin.\n 0.01 corresponds to measurements in cents.\n\n bins_per_octave : int > 0 [scalar]\n How many frequency bins per octave\n\n kwargs : additional keyword arguments\n Additional arguments passed to `piptrack`\n\n Returns\n -------\n tuning: float in `[-0.5, 0.5)`\n estimated tuning deviation (fractions of a bin)\n\n See Also\n --------\n piptrack\n Pitch tracking by parabolic interpolation\n\n Examples\n --------\n >>> # With time-series input\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.estimate_tuning(y=y, sr=sr)\n 0.089999999999999969\n\n >>> # In tenths of a cent\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.estimate_tuning(y=y, sr=sr, resolution=1e-3)\n 0.093999999999999972\n\n >>> # Using spectrogram input\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> S = np.abs(librosa.stft(y))\n >>> librosa.estimate_tuning(S=S, sr=sr)\n 0.089999999999999969\n\n >>> # Using pass-through arguments to `librosa.piptrack`\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.estimate_tuning(y=y, sr=sr, n_fft=8192,\n ... 
fmax=librosa.note_to_hz('G#9'))\n 0.070000000000000062"} {"_id": "q_2110", "text": "Pitch tracking on thresholded parabolically-interpolated STFT.\n\n This implementation uses the parabolic interpolation method described by [1]_.\n\n .. [1] https://ccrma.stanford.edu/~jos/sasp/Sinusoidal_Peak_Interpolation.html\n\n Parameters\n ----------\n y: np.ndarray [shape=(n,)] or None\n audio signal\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n S: np.ndarray [shape=(d, t)] or None\n magnitude or power spectrogram\n\n n_fft : int > 0 [scalar] or None\n number of FFT bins to use, if `y` is provided.\n\n hop_length : int > 0 [scalar] or None\n number of samples to hop\n\n threshold : float in `(0, 1)`\n A bin in spectrum `S` is considered a pitch when it is greater than\n `threshold*ref(S)`.\n\n By default, `ref(S)` is taken to be `max(S, axis=0)` (the maximum value in\n each column).\n\n fmin : float > 0 [scalar]\n lower frequency cutoff.\n\n fmax : float > 0 [scalar]\n upper frequency cutoff.\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. 
see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n ref : scalar or callable [default=np.max]\n If scalar, the reference value against which `S` is compared for determining\n pitches.\n\n If callable, the reference value is computed as `ref(S, axis=0)`.\n\n .. note::\n One of `S` or `y` must be provided.\n\n If `S` is not given, it is computed from `y` using\n the default parameters of `librosa.core.stft`.\n\n Returns\n -------\n pitches : np.ndarray [shape=(d, t)]\n magnitudes : np.ndarray [shape=(d,t)]\n Where `d` is the subset of FFT bins within `fmin` and `fmax`.\n\n `pitches[f, t]` contains instantaneous frequency at bin\n `f`, time `t`\n\n `magnitudes[f, t]` contains the corresponding magnitudes.\n\n Both `pitches` and `magnitudes` take value 0 at bins\n of non-maximal magnitude.\n\n Notes\n -----\n This function caches at level 30.\n\n Examples\n --------\n Computing pitches from a waveform input\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> pitches, magnitudes = librosa.piptrack(y=y, sr=sr)\n\n Or from a spectrogram input\n\n >>> S = np.abs(librosa.stft(y))\n >>> pitches, magnitudes = librosa.piptrack(S=S, sr=sr)\n\n Or with an alternate reference value for pitch detection, where\n values above the mean spectral energy in each frame are counted as pitches\n\n >>> pitches, magnitudes = librosa.piptrack(S=S, sr=sr, threshold=1,\n ... 
ref=np.mean)"} {"_id": "q_2111", "text": "Decompose an audio time series into harmonic and percussive components.\n\n This function automates the STFT->HPSS->ISTFT pipeline, and ensures that\n the output waveforms have equal length to the input waveform `y`.\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n kwargs : additional keyword arguments.\n See `librosa.decompose.hpss` for details.\n\n\n Returns\n -------\n y_harmonic : np.ndarray [shape=(n,)]\n audio time series of the harmonic elements\n\n y_percussive : np.ndarray [shape=(n,)]\n audio time series of the percussive elements\n\n See Also\n --------\n harmonic : Extract only the harmonic component\n percussive : Extract only the percussive component\n librosa.decompose.hpss : HPSS on spectrograms\n\n\n Examples\n --------\n >>> # Extract harmonic and percussive components\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> y_harmonic, y_percussive = librosa.effects.hpss(y)\n\n >>> # Get a more isolated percussive component by widening its margin\n >>> y_harmonic, y_percussive = librosa.effects.hpss(y, margin=(1.0,5.0))"} {"_id": "q_2112", "text": "Extract percussive elements from an audio time-series.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n kwargs : additional keyword arguments.\n See `librosa.decompose.hpss` for details.\n\n Returns\n -------\n y_percussive : np.ndarray [shape=(n,)]\n audio time series of just the percussive portion\n\n See Also\n --------\n hpss : Separate harmonic and percussive components\n harmonic : Extract only the harmonic component\n librosa.decompose.hpss : HPSS for spectrograms\n\n Examples\n --------\n >>> # Extract percussive component\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> y_percussive = librosa.effects.percussive(y)\n\n >>> # Use a margin > 1.0 for greater percussive separation\n >>> y_percussive = librosa.effects.percussive(y, margin=3.0)"} {"_id": "q_2113", 
"text": "Time-stretch an audio series by a fixed rate.\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n\n rate : float > 0 [scalar]\n Stretch factor. If `rate > 1`, then the signal is sped up.\n\n If `rate < 1`, then the signal is slowed down.\n\n Returns\n -------\n y_stretch : np.ndarray [shape=(round(n / rate),)]\n audio time series stretched by the specified rate\n\n See Also\n --------\n pitch_shift : pitch shifting\n librosa.core.phase_vocoder : spectrogram phase vocoder\n\n\n Examples\n --------\n Compress to be twice as fast\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> y_fast = librosa.effects.time_stretch(y, 2.0)\n\n Or half the original speed\n\n >>> y_slow = librosa.effects.time_stretch(y, 0.5)"} {"_id": "q_2114", "text": "Pitch-shift the waveform by `n_steps` half-steps.\n\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time-series\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n n_steps : float [scalar]\n how many (fractional) half-steps to shift `y`\n\n bins_per_octave : float > 0 [scalar]\n how many steps per octave\n\n res_type : string\n Resample type.\n Possible options: 'kaiser_best', 'kaiser_fast', 'scipy', 'polyphase',\n and 'fft'.\n By default, 'kaiser_best' is used.\n\n See `core.resample` for more information.\n\n Returns\n -------\n y_shift : np.ndarray [shape=(n,)]\n The pitch-shifted audio time-series\n\n\n See Also\n --------\n time_stretch : time stretching\n librosa.core.phase_vocoder : spectrogram phase vocoder\n\n\n Examples\n --------\n Shift up by a major third (four half-steps)\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> y_third = librosa.effects.pitch_shift(y, sr, n_steps=4)\n\n Shift down by a tritone (six half-steps)\n\n >>> y_tritone = librosa.effects.pitch_shift(y, sr, n_steps=-6)\n\n Shift up by 3 quarter-tones\n\n >>> y_three_qt = librosa.effects.pitch_shift(y, sr, n_steps=3,\n ... 
bins_per_octave=24)"} {"_id": "q_2115", "text": "Split an audio signal into non-silent intervals.\n\n Parameters\n ----------\n y : np.ndarray, shape=(n,) or (2, n)\n An audio signal\n\n top_db : number > 0\n The threshold (in decibels) below reference to consider as\n silence\n\n ref : number or callable\n The reference power. By default, it uses `np.max` and compares\n to the peak power in the signal.\n\n frame_length : int > 0\n The number of samples per analysis frame\n\n hop_length : int > 0\n The number of samples between analysis frames\n\n Returns\n -------\n intervals : np.ndarray, shape=(m, 2)\n `intervals[i] == (start_i, end_i)` are the start and end time\n (in samples) of non-silent interval `i`."} {"_id": "q_2116", "text": "Phase vocoder. Given an STFT matrix D, speed up by a factor of `rate`\n\n Based on the implementation provided by [1]_.\n\n .. [1] Ellis, D. P. W. \"A phase vocoder in Matlab.\"\n Columbia University, 2002.\n http://www.ee.columbia.edu/~dpwe/resources/matlab/pvoc/\n\n Examples\n --------\n >>> # Play at double speed\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> D = librosa.stft(y, n_fft=2048, hop_length=512)\n >>> D_fast = librosa.phase_vocoder(D, 2.0, hop_length=512)\n >>> y_fast = librosa.istft(D_fast, hop_length=512)\n\n >>> # Or play at 1/3 speed\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> D = librosa.stft(y, n_fft=2048, hop_length=512)\n >>> D_slow = librosa.phase_vocoder(D, 1./3, hop_length=512)\n >>> y_slow = librosa.istft(D_slow, hop_length=512)\n\n Parameters\n ----------\n D : np.ndarray [shape=(d, t), dtype=complex]\n STFT matrix\n\n rate : float > 0 [scalar]\n Speed-up factor: `rate > 1` is faster, `rate < 1` is slower.\n\n hop_length : int > 0 [scalar] or None\n The number of samples between successive columns of `D`.\n\n If None, defaults to `n_fft/4 = (D.shape[0]-1)/2`\n\n Returns\n -------\n D_stretched : np.ndarray [shape=(d, t / rate), dtype=complex]\n time-stretched 
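The non-silent-interval logic described above (frame the signal, compare per-frame power in dB against `top_db` below the peak, merge consecutive loud frames) can be sketched in plain Python. This is a toy version under simplifying assumptions: mono input as a list, mean power rather than librosa's RMS framing, and no edge padding:

```python
import math

def split_nonsilent(y, top_db=20.0, frame_length=4, hop_length=2):
    """Return (start, end) sample intervals whose frames stay above
    `top_db` dB below the peak frame power. Sketch, not librosa's code."""
    # Per-frame mean power
    powers = []
    for start in range(0, len(y) - frame_length + 1, hop_length):
        frame = y[start:start + frame_length]
        powers.append(sum(v * v for v in frame) / frame_length)
    peak = max(powers)
    # Frame is non-silent if its power is within top_db of the peak
    keep = [p > 0 and 10.0 * math.log10(p / peak) > -top_db for p in powers]
    # Merge runs of non-silent frames into sample intervals
    intervals, start = [], None
    for i, k in enumerate(keep):
        if k and start is None:
            start = i
        elif not k and start is not None:
            intervals.append((start * hop_length, (i - 1) * hop_length + frame_length))
            start = None
    if start is not None:
        intervals.append((start * hop_length, (len(keep) - 1) * hop_length + frame_length))
    return intervals
```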
STFT"} {"_id": "q_2117", "text": "Convert an amplitude spectrogram to dB-scaled spectrogram.\n\n This is equivalent to ``power_to_db(S**2)``, but is provided for convenience.\n\n Parameters\n ----------\n S : np.ndarray\n input amplitude\n\n ref : scalar or callable\n If scalar, the amplitude `abs(S)` is scaled relative to `ref`:\n `20 * log10(S / ref)`.\n Zeros in the output correspond to positions where `S == ref`.\n\n If callable, the reference value is computed as `ref(S)`.\n\n amin : float > 0 [scalar]\n minimum threshold for `S` and `ref`\n\n top_db : float >= 0 [scalar]\n threshold the output at `top_db` below the peak:\n ``max(20 * log10(S)) - top_db``\n\n\n Returns\n -------\n S_db : np.ndarray\n ``S`` measured in dB\n\n See Also\n --------\n power_to_db, db_to_amplitude\n\n Notes\n -----\n This function caches at level 30."} {"_id": "q_2118", "text": "Helper function to retrieve a magnitude spectrogram.\n\n This is primarily used in feature extraction functions that can operate on\n either audio time-series or spectrogram input.\n\n\n Parameters\n ----------\n y : None or np.ndarray [ndim=1]\n If provided, an audio time series\n\n S : None or np.ndarray\n Spectrogram input, optional\n\n n_fft : int > 0\n STFT window size\n\n hop_length : int > 0\n STFT hop length\n\n power : float > 0\n Exponent for the magnitude spectrogram,\n e.g., 1 for energy, 2 for power, etc.\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. 
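The dB conversion described above is `20 * log10(S / ref)`, with an `amin` floor for numerical stability and an optional `top_db` threshold below the peak. A plain-Python sketch of that behavior on a list of magnitudes (librosa's version is vectorized and also floors `ref` at `amin`):

```python
import math

def amplitude_to_db(S, ref=1.0, amin=1e-5, top_db=80.0):
    """dB-scale magnitudes: 20*log10(max(s, amin)/ref), thresholded at
    top_db below the peak. Sketch of the documented behavior."""
    db = [20.0 * math.log10(max(s, amin) / ref) for s in S]
    peak = max(db)
    # Clip everything more than top_db below the loudest value
    return [max(d, peak - top_db) for d in db]
```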
see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n\n Returns\n -------\n S_out : np.ndarray [dtype=np.float32]\n - If `S` is provided as input, then `S_out == S`\n - Else, `S_out = |stft(y, ...)|**power`\n\n n_fft : int > 0\n - If `S` is provided, then `n_fft` is inferred from `S`\n - Else, copied from input"} {"_id": "q_2119", "text": "HPSS beat tracking\n\n :parameters:\n - input_file : str\n Path to input audio file (wav, mp3, m4a, flac, etc.)\n\n - output_file : str\n Path to save beat event timestamps as a CSV file"} {"_id": "q_2120", "text": "Filtering by nearest-neighbors.\n\n Each data point (e.g, spectrogram column) is replaced\n by aggregating its nearest neighbors in feature space.\n\n This can be useful for de-noising a spectrogram or feature matrix.\n\n The non-local means method [1]_ can be recovered by providing a\n weighted recurrence matrix as input and specifying `aggregate=np.average`.\n\n Similarly, setting `aggregate=np.median` produces sparse de-noising\n as in REPET-SIM [2]_.\n\n .. [1] Buades, A., Coll, B., & Morel, J. M.\n (2005, June). A non-local algorithm for image denoising.\n In Computer Vision and Pattern Recognition, 2005.\n CVPR 2005. IEEE Computer Society Conference on (Vol. 2, pp. 60-65). IEEE.\n\n .. [2] Rafii, Z., & Pardo, B.\n (2012, October). 
\"Music/Voice Separation Using the Similarity Matrix.\"\n International Society for Music Information Retrieval Conference, 2012.\n\n Parameters\n ----------\n S : np.ndarray\n The input data (spectrogram) to filter\n\n rec : (optional) scipy.sparse.spmatrix or np.ndarray\n Optionally, a pre-computed nearest-neighbor matrix\n as provided by `librosa.segment.recurrence_matrix`\n\n aggregate : function\n aggregation function (default: `np.mean`)\n\n If `aggregate=np.average`, then a weighted average is\n computed according to the (per-row) weights in `rec`.\n\n For all other aggregation functions, all neighbors\n are treated equally.\n\n\n axis : int\n The axis along which to filter (by default, columns)\n\n kwargs\n Additional keyword arguments provided to\n `librosa.segment.recurrence_matrix` if `rec` is not provided\n\n Returns\n -------\n S_filtered : np.ndarray\n The filtered data\n\n Raises\n ------\n ParameterError\n if `rec` is provided and its shape is incompatible with `S`.\n\n See also\n --------\n decompose\n hpss\n librosa.segment.recurrence_matrix\n\n\n Notes\n -----\n This function caches at level 30.\n\n\n Examples\n --------\n\n De-noise a chromagram by non-local median filtering.\n By default this would use euclidean distance to select neighbors,\n but this can be overridden directly by setting the `metric` parameter.\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... offset=30, duration=10)\n >>> chroma = librosa.feature.chroma_cqt(y=y, sr=sr)\n >>> chroma_med = librosa.decompose.nn_filter(chroma,\n ... aggregate=np.median,\n ... metric='cosine')\n\n To use non-local means, provide an affinity matrix and `aggregate=np.average`.\n\n >>> rec = librosa.segment.recurrence_matrix(chroma, mode='affinity',\n ... metric='cosine', sparse=True)\n >>> chroma_nlm = librosa.decompose.nn_filter(chroma, rec=rec,\n ... 
aggregate=np.average)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(10, 8))\n >>> plt.subplot(5, 1, 1)\n >>> librosa.display.specshow(chroma, y_axis='chroma')\n >>> plt.colorbar()\n >>> plt.title('Unfiltered')\n >>> plt.subplot(5, 1, 2)\n >>> librosa.display.specshow(chroma_med, y_axis='chroma')\n >>> plt.colorbar()\n >>> plt.title('Median-filtered')\n >>> plt.subplot(5, 1, 3)\n >>> librosa.display.specshow(chroma_nlm, y_axis='chroma')\n >>> plt.colorbar()\n >>> plt.title('Non-local means')\n >>> plt.subplot(5, 1, 4)\n >>> librosa.display.specshow(chroma - chroma_med,\n ... y_axis='chroma')\n >>> plt.colorbar()\n >>> plt.title('Original - median')\n >>> plt.subplot(5, 1, 5)\n >>> librosa.display.specshow(chroma - chroma_nlm,\n ... y_axis='chroma', x_axis='time')\n >>> plt.colorbar()\n >>> plt.title('Original - NLM')\n >>> plt.tight_layout()"} {"_id": "q_2121", "text": "Nearest-neighbor filter helper function.\n\n This is an internal function, not for use outside of the decompose module.\n\n It applies the nearest-neighbor filter to S, assuming that the first index\n corresponds to observations.\n\n Parameters\n ----------\n R_data, R_indices, R_ptr : np.ndarrays\n The `data`, `indices`, and `indptr` of a scipy.sparse matrix\n\n S : np.ndarray\n The observation data to filter\n\n aggregate : callable\n The aggregation operator\n\n\n Returns\n -------\n S_out : np.ndarray like S\n The filtered data array"} {"_id": "q_2122", "text": "Create a Filterbank matrix to combine FFT bins into Mel-frequency bins\n\n Parameters\n ----------\n sr : number > 0 [scalar]\n sampling rate of the incoming signal\n\n n_fft : int > 0 [scalar]\n number of FFT components\n\n n_mels : int > 0 [scalar]\n number of Mel bands to generate\n\n fmin : float >= 0 [scalar]\n lowest frequency (in Hz)\n\n fmax : float >= 0 [scalar]\n highest frequency (in Hz).\n If `None`, use `fmax = sr / 2.0`\n\n htk : bool [scalar]\n use HTK formula instead of Slaney\n\n norm : {None, 1, 
np.inf} [scalar]\n if 1, divide the triangular mel weights by the width of the mel band\n (area normalization). Otherwise, leave all the triangles aiming for\n a peak value of 1.0\n\n dtype : np.dtype\n The data type of the output basis.\n By default, uses 32-bit (single-precision) floating point.\n\n Returns\n -------\n M : np.ndarray [shape=(n_mels, 1 + n_fft/2)]\n Mel transform matrix\n\n Notes\n -----\n This function caches at level 10.\n\n Examples\n --------\n >>> melfb = librosa.filters.mel(22050, 2048)\n >>> melfb\n array([[ 0. , 0.016, ..., 0. , 0. ],\n [ 0. , 0. , ..., 0. , 0. ],\n ...,\n [ 0. , 0. , ..., 0. , 0. ],\n [ 0. , 0. , ..., 0. , 0. ]])\n\n\n Clip the maximum frequency to 8KHz\n\n >>> librosa.filters.mel(22050, 2048, fmax=8000)\n array([[ 0. , 0.02, ..., 0. , 0. ],\n [ 0. , 0. , ..., 0. , 0. ],\n ...,\n [ 0. , 0. , ..., 0. , 0. ],\n [ 0. , 0. , ..., 0. , 0. ]])\n\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> librosa.display.specshow(melfb, x_axis='linear')\n >>> plt.ylabel('Mel filter')\n >>> plt.title('Mel filter bank')\n >>> plt.colorbar()\n >>> plt.tight_layout()"} {"_id": "q_2123", "text": "r'''Construct a constant-Q basis.\n\n This uses the filter bank described by [1]_.\n\n .. [1] McVicar, Matthew.\n \"A machine learning approach to automatic chord extraction.\"\n Dissertation, University of Bristol. 2013.\n\n\n Parameters\n ----------\n sr : number > 0 [scalar]\n Audio sampling rate\n\n fmin : float > 0 [scalar]\n Minimum frequency bin. Defaults to `C1 ~= 32.70`\n\n n_bins : int > 0 [scalar]\n Number of frequencies. 
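The Mel filter bank above places triangles on a mel-frequency grid; the `htk` flag selects the HTK mel formula. That formula and its inverse are small enough to state directly (this is the `htk=True` variant; librosa's default Slaney scale is piecewise and differs above 1 kHz):

```python
import math

def hz_to_mel(f):
    # HTK mel formula, as selected by htk=True in librosa.filters.mel
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # Inverse of the HTK formula
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
```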
Defaults to 7 octaves (84 bins).\n\n bins_per_octave : int > 0 [scalar]\n Number of bins per octave\n\n tuning : float in `[-0.5, +0.5)` [scalar]\n Tuning deviation from A440 in fractions of a bin\n\n window : string, tuple, number, or function\n Windowing function to apply to filters.\n\n filter_scale : float > 0 [scalar]\n Scale of filter windows.\n Small values (<1) use shorter windows for higher temporal resolution.\n\n pad_fft : boolean\n Center-pad all filters up to the nearest integral power of 2.\n\n By default, padding is done with zeros, but this can be overridden\n by setting the `mode=` field in *kwargs*.\n\n norm : {inf, -inf, 0, float > 0}\n Type of norm to use for basis function normalization.\n See librosa.util.normalize\n\n dtype : np.dtype\n The data type of the output basis.\n By default, uses 64-bit (single precision) complex floating point.\n\n kwargs : additional keyword arguments\n Arguments to `np.pad()` when `pad_fft=True`.\n\n Returns\n -------\n filters : np.ndarray, `len(filters) == n_bins`\n `filters[i]` is `i`\\ th time-domain CQT basis filter\n\n lengths : np.ndarray, `len(lengths) == n_bins`\n The (fractional) length of each filter\n\n Notes\n -----\n This function caches at level 10.\n\n See Also\n --------\n constant_q_lengths\n librosa.core.cqt\n librosa.util.normalize\n\n\n Examples\n --------\n Use a shorter window for each filter\n\n >>> basis, lengths = librosa.filters.constant_q(22050, filter_scale=0.5)\n\n Plot one octave of filters in time and frequency\n\n >>> import matplotlib.pyplot as plt\n >>> basis, lengths = librosa.filters.constant_q(22050)\n >>> plt.figure(figsize=(10, 6))\n >>> plt.subplot(2, 1, 1)\n >>> notes = librosa.midi_to_note(np.arange(24, 24 + len(basis)))\n >>> for i, (f, n) in enumerate(zip(basis, notes[:12])):\n ... f_scale = librosa.util.normalize(f) / 2\n ... plt.plot(i + f_scale.real)\n ... 
plt.plot(i + f_scale.imag, linestyle=':')\n >>> plt.axis('tight')\n >>> plt.yticks(np.arange(len(notes[:12])), notes[:12])\n >>> plt.ylabel('CQ filters')\n >>> plt.title('CQ filters (one octave, time domain)')\n >>> plt.xlabel('Time (samples at 22050 Hz)')\n >>> plt.legend(['Real', 'Imaginary'], frameon=True, framealpha=0.8)\n >>> plt.subplot(2, 1, 2)\n >>> F = np.abs(np.fft.fftn(basis, axes=[-1]))\n >>> # Keep only the positive frequencies\n >>> F = F[:, :(1 + F.shape[1] // 2)]\n >>> librosa.display.specshow(F, x_axis='linear')\n >>> plt.yticks(np.arange(len(notes))[::12], notes[::12])\n >>> plt.ylabel('CQ filters')\n >>> plt.title('CQ filter magnitudes (frequency domain)')\n >>> plt.tight_layout()"} {"_id": "q_2124", "text": "Convert a Constant-Q basis to Chroma.\n\n\n Parameters\n ----------\n n_input : int > 0 [scalar]\n Number of input components (CQT bins)\n\n bins_per_octave : int > 0 [scalar]\n How many bins per octave in the CQT\n\n n_chroma : int > 0 [scalar]\n Number of output bins (per octave) in the chroma\n\n fmin : None or float > 0\n Center frequency of the first constant-Q channel.\n Default: 'C1' ~= 32.7 Hz\n\n window : None or np.ndarray\n If provided, the cq_to_chroma filter bank will be\n convolved with `window`.\n\n base_c : bool\n If True, the first chroma bin will start at 'C'\n If False, the first chroma bin will start at 'A'\n\n dtype : np.dtype\n The data type of the output basis.\n By default, uses 32-bit (single-precision) floating point.\n\n\n Returns\n -------\n cq_to_chroma : np.ndarray [shape=(n_chroma, n_input)]\n Transformation matrix: `Chroma = np.dot(cq_to_chroma, CQT)`\n\n Raises\n ------\n ParameterError\n If `n_input` is not an integer multiple of `n_chroma`\n\n Notes\n -----\n This function caches at level 10.\n\n Examples\n --------\n Get a CQT, and wrap bins to chroma\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> CQT = np.abs(librosa.cqt(y, sr=sr))\n >>> chroma_map = 
librosa.filters.cq_to_chroma(CQT.shape[0])\n >>> chromagram = chroma_map.dot(CQT)\n >>> # Max-normalize each time step\n >>> chromagram = librosa.util.normalize(chromagram, axis=0)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.subplot(3, 1, 1)\n >>> librosa.display.specshow(librosa.amplitude_to_db(CQT,\n ... ref=np.max),\n ... y_axis='cqt_note')\n >>> plt.title('CQT Power')\n >>> plt.colorbar()\n >>> plt.subplot(3, 1, 2)\n >>> librosa.display.specshow(chromagram, y_axis='chroma')\n >>> plt.title('Chroma (wrapped CQT)')\n >>> plt.colorbar()\n >>> plt.subplot(3, 1, 3)\n >>> chroma = librosa.feature.chroma_stft(y=y, sr=sr)\n >>> librosa.display.specshow(chroma, y_axis='chroma', x_axis='time')\n >>> plt.title('librosa.feature.chroma_stft')\n >>> plt.colorbar()\n >>> plt.tight_layout()"} {"_id": "q_2125", "text": "Get the equivalent noise bandwidth of a window function.\n\n\n Parameters\n ----------\n window : callable or string\n A window function, or the name of a window function.\n Examples:\n - scipy.signal.hann\n - 'boxcar'\n\n n : int > 0\n The number of coefficients to use in estimating the\n window bandwidth\n\n Returns\n -------\n bandwidth : float\n The equivalent noise bandwidth (in FFT bins) of the\n given window function\n\n Notes\n -----\n This function caches at level 10.\n\n See Also\n --------\n get_window"} {"_id": "q_2126", "text": "Compute a window function.\n\n This is a wrapper for `scipy.signal.get_window` that additionally\n supports callable or pre-computed windows.\n\n Parameters\n ----------\n window : string, tuple, number, callable, or list-like\n The window specification:\n\n - If string, it's the name of the window function (e.g., `'hann'`)\n - If tuple, it's the name of the window function and any parameters\n (e.g., `('kaiser', 4.0)`)\n - If numeric, it is treated as the beta parameter of the `'kaiser'`\n window, as in `scipy.signal.get_window`.\n - If callable, it's a function that accepts one integer argument\n (the window 
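The `cq_to_chroma` transformation wraps octave-equivalent CQT bins onto a single octave of chroma bins. In the simplest case, where `bins_per_octave == n_chroma` and `n_input` is an integer multiple of `n_chroma`, the matrix multiplication reduces to summing every `n_chroma`-th bin. A toy per-frame version under those assumptions (the real filter bank also handles fractional bins per chroma, rolling the result to start at C, and optional smoothing windows):

```python
def fold_to_chroma(cqt_column, n_chroma=12):
    """Wrap one CQT frame onto chroma bins by summing octave-equivalent
    bins. Toy sketch of the cq_to_chroma mapping, not librosa's code."""
    if len(cqt_column) % n_chroma != 0:
        # Mirrors the ParameterError condition in the docstring above
        raise ValueError("n_input must be an integer multiple of n_chroma")
    chroma = [0.0] * n_chroma
    for i, v in enumerate(cqt_column):
        chroma[i % n_chroma] += v
    return chroma
```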
length)\n - If list-like, it's a pre-computed window of the correct length `Nx`\n\n Nx : int > 0\n The length of the window\n\n fftbins : bool, optional\n If True (default), create a periodic window for use with FFT\n If False, create a symmetric window for filter design applications.\n\n Returns\n -------\n get_window : np.ndarray\n A window of length `Nx` and type `window`\n\n See Also\n --------\n scipy.signal.get_window\n\n Notes\n -----\n This function caches at level 10.\n\n Raises\n ------\n ParameterError\n If `window` is supplied as a vector of length != `n_fft`,\n or is otherwise mis-specified."} {"_id": "q_2127", "text": "r'''Helper function for generating center frequency and sample rate pairs.\n\n This function will return center frequency and corresponding sample rates\n to obtain similar pitch filterbank settings as described in [1]_.\n Instead of starting with MIDI pitch `A0`, we start with `C0`.\n\n .. [1] M\u00fcller, Meinard.\n \"Information Retrieval for Music and Motion.\"\n Springer Verlag. 
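The dispatch logic `get_window` describes (string name, callable, or pre-computed vector, with a length check) can be sketched without scipy. This toy dispatcher only knows the `'hann'` name and omits the tuple/numeric Kaiser forms; the periodic-vs-symmetric distinction follows the standard Hann definitions:

```python
import math

def get_window(window, Nx, fftbins=True):
    """Resolve a window spec to a list of length Nx. Hypothetical sketch
    of the dispatch described above, not librosa's wrapper."""
    if callable(window):
        return window(Nx)
    if isinstance(window, (list, tuple)):
        if len(window) != Nx:
            raise ValueError("pre-computed window has the wrong length")
        return list(window)
    if window == "hann":
        # Periodic (fftbins=True) windows divide by Nx; symmetric by Nx - 1
        denom = Nx if fftbins else Nx - 1
        return [0.5 - 0.5 * math.cos(2.0 * math.pi * n / denom) for n in range(Nx)]
    raise ValueError("unknown window: %r" % (window,))
```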
2007.\n\n\n Parameters\n ----------\n tuning : float in `[-0.5, +0.5)` [scalar]\n Tuning deviation from A440, measure as a fraction of the equally\n tempered semitone (1/12 of an octave).\n\n Returns\n -------\n center_freqs : np.ndarray [shape=(n,), dtype=float]\n Center frequencies of the filter kernels.\n Also defines the number of filters in the filterbank.\n\n sample_rates : np.ndarray [shape=(n,), dtype=float]\n Sample rate for each filter, used for multirate filterbank.\n\n Notes\n -----\n This function caches at level 10.\n\n\n See Also\n --------\n librosa.filters.semitone_filterbank\n librosa.filters._multirate_fb"} {"_id": "q_2128", "text": "Helper function for window sum-square calculation."} {"_id": "q_2129", "text": "Compute the sum-square envelope of a window function at a given hop length.\n\n This is used to estimate modulation effects induced by windowing observations\n in short-time fourier transforms.\n\n Parameters\n ----------\n window : string, tuple, number, callable, or list-like\n Window specification, as in `get_window`\n\n n_frames : int > 0\n The number of analysis frames\n\n hop_length : int > 0\n The number of samples to advance between frames\n\n win_length : [optional]\n The length of the window function. 
By default, this matches `n_fft`.\n\n n_fft : int > 0\n The length of each analysis frame.\n\n dtype : np.dtype\n The data type of the output\n\n Returns\n -------\n wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))`\n The sum-squared envelope of the window function\n\n Examples\n --------\n For a fixed frame length (2048), compare modulation effects for a Hann window\n at different hop lengths:\n\n >>> n_frames = 50\n >>> wss_256 = librosa.filters.window_sumsquare('hann', n_frames, hop_length=256)\n >>> wss_512 = librosa.filters.window_sumsquare('hann', n_frames, hop_length=512)\n >>> wss_1024 = librosa.filters.window_sumsquare('hann', n_frames, hop_length=1024)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> plt.subplot(3,1,1)\n >>> plt.plot(wss_256)\n >>> plt.title('hop_length=256')\n >>> plt.subplot(3,1,2)\n >>> plt.plot(wss_512)\n >>> plt.title('hop_length=512')\n >>> plt.subplot(3,1,3)\n >>> plt.plot(wss_1024)\n >>> plt.title('hop_length=1024')\n >>> plt.tight_layout()"} {"_id": "q_2130", "text": "Compute the spectral centroid.\n\n Each frame of a magnitude spectrogram is normalized and treated as a\n distribution over frequency bins, from which the mean (centroid) is\n extracted per frame.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n S : np.ndarray [shape=(d, t)] or None\n (optional) spectrogram magnitude\n\n n_fft : int > 0 [scalar]\n FFT window size\n\n hop_length : int > 0 [scalar]\n hop length for STFT. 
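The sum-square envelope above is just the squared window overlap-added at every frame offset, which is why the output length is `n_fft + hop_length * (n_frames - 1)`. A pure-Python sketch of that accumulation (librosa's version additionally normalizes the window and pads it to `n_fft`):

```python
def window_sumsquare(win, n_frames, hop_length):
    """Overlap-add the squared window at each frame offset.
    Sketch of the envelope computation described above."""
    n = len(win) + hop_length * (n_frames - 1)
    out = [0.0] * n
    for t in range(n_frames):
        for i, w in enumerate(win):
            # Each frame contributes its squared window at its offset
            out[t * hop_length + i] += w * w
    return out
```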
See `librosa.core.stft` for details.\n\n freq : None or np.ndarray [shape=(d,) or shape=(d, t)]\n Center frequencies for spectrogram bins.\n If `None`, then FFT bin center frequencies are used.\n Otherwise, it can be a single array of `d` center frequencies,\n or a matrix of center frequencies as constructed by\n `librosa.core.ifgram`\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n\n Returns\n -------\n centroid : np.ndarray [shape=(1, t)]\n centroid frequencies\n\n See Also\n --------\n librosa.core.stft\n Short-time Fourier Transform\n\n librosa.core.ifgram\n Instantaneous-frequency spectrogram\n\n Examples\n --------\n From time-series input:\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> cent = librosa.feature.spectral_centroid(y=y, sr=sr)\n >>> cent\n array([[ 4382.894, 626.588, ..., 5037.07 , 5413.398]])\n\n From spectrogram input:\n\n >>> S, phase = librosa.magphase(librosa.stft(y=y))\n >>> librosa.feature.spectral_centroid(S=S)\n array([[ 4382.894, 626.588, ..., 5037.07 , 5413.398]])\n\n Using variable bin center frequencies:\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> if_gram, D = librosa.ifgram(y)\n >>> 
librosa.feature.spectral_centroid(S=np.abs(D), freq=if_gram)\n array([[ 4420.719, 625.769, ..., 5011.86 , 5221.492]])\n\n Plot the result\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> plt.subplot(2, 1, 1)\n >>> plt.semilogy(cent.T, label='Spectral centroid')\n >>> plt.ylabel('Hz')\n >>> plt.xticks([])\n >>> plt.xlim([0, cent.shape[-1]])\n >>> plt.legend()\n >>> plt.subplot(2, 1, 2)\n >>> librosa.display.specshow(librosa.amplitude_to_db(S, ref=np.max),\n ... y_axis='log', x_axis='time')\n >>> plt.title('log Power spectrogram')\n >>> plt.tight_layout()"} {"_id": "q_2131", "text": "Compute roll-off frequency.\n\n The roll-off frequency is defined for each frame as the center frequency\n for a spectrogram bin such that at least roll_percent (0.85 by default)\n of the energy of the spectrum in this frame is contained in this bin and\n the bins below. This can be used to, e.g., approximate the maximum (or\n minimum) frequency by setting roll_percent to a value close to 1 (or 0).\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n S : np.ndarray [shape=(d, t)] or None\n (optional) spectrogram magnitude\n\n n_fft : int > 0 [scalar]\n FFT window size\n\n hop_length : int > 0 [scalar]\n hop length for STFT. See `librosa.core.stft` for details.\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. 
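Per the description above, each spectrogram frame is normalized into a distribution over frequency bins and the centroid is its mean. For a single frame that is a one-line weighted average; a sketch of the per-frame computation (librosa applies this column-wise over the whole spectrogram):

```python
def spectral_centroid(frame, freqs):
    """Weighted mean of bin center frequencies, with the frame's
    magnitudes as weights. Per-frame sketch of the documented formula."""
    total = sum(frame)
    return sum(f * m for f, m in zip(freqs, frame)) / total
```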
see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n freq : None or np.ndarray [shape=(d,) or shape=(d, t)]\n Center frequencies for spectrogram bins.\n If `None`, then FFT bin center frequencies are used.\n Otherwise, it can be a single array of `d` center frequencies,\n\n .. note:: `freq` is assumed to be sorted in increasing order\n\n roll_percent : float [0 < roll_percent < 1]\n Roll-off percentage.\n\n Returns\n -------\n rolloff : np.ndarray [shape=(1, t)]\n roll-off frequency for each frame\n\n\n Examples\n --------\n From time-series input\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> # Approximate maximum frequencies with roll_percent=0.85 (default)\n >>> rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)\n >>> rolloff\n array([[ 8376.416, 968.994, ..., 8925.513, 9108.545]])\n >>> # Approximate minimum frequencies with roll_percent=0.1\n >>> rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.1)\n >>> rolloff\n array([[ 75.36621094, 64.59960938, 64.59960938, ..., 75.36621094,\n 75.36621094, 64.59960938]])\n\n\n From spectrogram input\n\n >>> S, phase = librosa.magphase(librosa.stft(y))\n >>> librosa.feature.spectral_rolloff(S=S, sr=sr)\n array([[ 8376.416, 968.994, ..., 8925.513, 9108.545]])\n\n >>> # With a higher roll percentage:\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.95)\n array([[ 10012.939, 3003.882, ..., 10034.473, 10077.539]])\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> plt.subplot(2, 1, 1)\n >>> plt.semilogy(rolloff.T, label='Roll-off frequency')\n >>> plt.ylabel('Hz')\n >>> plt.xticks([])\n >>> 
plt.xlim([0, rolloff.shape[-1]])\n >>> plt.legend()\n >>> plt.subplot(2, 1, 2)\n >>> librosa.display.specshow(librosa.amplitude_to_db(S, ref=np.max),\n ... y_axis='log', x_axis='time')\n >>> plt.title('log Power spectrogram')\n >>> plt.tight_layout()"} {"_id": "q_2132", "text": "Compute spectral flatness\n\n Spectral flatness (or tonality coefficient) is a measure to\n quantify how noise-like a sound is, as opposed to being\n tone-like [1]_. A high spectral flatness (closer to 1.0)\n indicates the spectrum is similar to white noise.\n It is often converted to decibels.\n\n .. [1] Dubnov, Shlomo \"Generalization of spectral flatness\n measure for non-gaussian linear processes\"\n IEEE Signal Processing Letters, 2004, Vol. 11.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n S : np.ndarray [shape=(d, t)] or None\n (optional) pre-computed spectrogram magnitude\n\n n_fft : int > 0 [scalar]\n FFT window size\n\n hop_length : int > 0 [scalar]\n hop length for STFT. See `librosa.core.stft` for details.\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. 
see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n amin : float > 0 [scalar]\n minimum threshold for `S` (=added noise floor for numerical stability)\n\n power : float > 0 [scalar]\n Exponent for the magnitude spectrogram.\n e.g., 1 for energy, 2 for power, etc.\n Power spectrogram is usually used for computing spectral flatness.\n\n Returns\n -------\n flatness : np.ndarray [shape=(1, t)]\n spectral flatness for each frame.\n The returned value is in [0, 1] and often converted to dB scale.\n\n\n Examples\n --------\n From time-series input\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> flatness = librosa.feature.spectral_flatness(y=y)\n >>> flatness\n array([[ 1.00000e+00, 5.82299e-03, 5.64624e-04, ..., 9.99063e-01,\n 1.00000e+00, 1.00000e+00]], dtype=float32)\n\n From spectrogram input\n\n >>> S, phase = librosa.magphase(librosa.stft(y))\n >>> librosa.feature.spectral_flatness(S=S)\n array([[ 1.00000e+00, 5.82299e-03, 5.64624e-04, ..., 9.99063e-01,\n 1.00000e+00, 1.00000e+00]], dtype=float32)\n\n From power spectrogram input\n\n >>> S, phase = librosa.magphase(librosa.stft(y))\n >>> S_power = S ** 2\n >>> librosa.feature.spectral_flatness(S=S_power, power=1.0)\n array([[ 1.00000e+00, 5.82299e-03, 5.64624e-04, ..., 9.99063e-01,\n 1.00000e+00, 1.00000e+00]], dtype=float32)"} {"_id": "q_2133", "text": "Get coefficients of fitting an nth-order polynomial to the columns\n of a spectrogram.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n sr : number > 0 [scalar]\n audio sampling rate of `y`\n\n S : np.ndarray [shape=(d, t)] or None\n (optional) spectrogram magnitude\n\n n_fft : int > 0 [scalar]\n FFT window 
size\n\n hop_length : int > 0 [scalar]\n hop length for STFT. See `librosa.core.stft` for details.\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n order : int > 0\n order of the polynomial to fit\n\n freq : None or np.ndarray [shape=(d,) or shape=(d, t)]\n Center frequencies for spectrogram bins.\n If `None`, then FFT bin center frequencies are used.\n Otherwise, it can be a single array of `d` center frequencies,\n or a matrix of center frequencies as constructed by\n `librosa.core.ifgram`\n\n Returns\n -------\n coefficients : np.ndarray [shape=(order+1, t)]\n polynomial coefficients for each frame.\n\n `coefficients[0]` corresponds to the highest degree (`order`),\n\n `coefficients[1]` corresponds to the next highest degree (`order-1`),\n\n down to the constant term `coefficients[order]`.\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> S = np.abs(librosa.stft(y))\n\n Fit a degree-0 polynomial (constant) to each frame\n\n >>> p0 = librosa.feature.poly_features(S=S, order=0)\n\n Fit a linear polynomial to each frame\n\n >>> p1 = librosa.feature.poly_features(S=S, order=1)\n\n Fit a quadratic to each frame\n\n >>> p2 = 
librosa.feature.poly_features(S=S, order=2)\n\n Plot the results for comparison\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(8, 8))\n >>> ax = plt.subplot(4,1,1)\n >>> plt.plot(p2[2], label='order=2', alpha=0.8)\n >>> plt.plot(p1[1], label='order=1', alpha=0.8)\n >>> plt.plot(p0[0], label='order=0', alpha=0.8)\n >>> plt.xticks([])\n >>> plt.ylabel('Constant')\n >>> plt.legend()\n >>> plt.subplot(4,1,2, sharex=ax)\n >>> plt.plot(p2[1], label='order=2', alpha=0.8)\n >>> plt.plot(p1[0], label='order=1', alpha=0.8)\n >>> plt.xticks([])\n >>> plt.ylabel('Linear')\n >>> plt.subplot(4,1,3, sharex=ax)\n >>> plt.plot(p2[0], label='order=2', alpha=0.8)\n >>> plt.xticks([])\n >>> plt.ylabel('Quadratic')\n >>> plt.subplot(4,1,4, sharex=ax)\n >>> librosa.display.specshow(librosa.amplitude_to_db(S, ref=np.max),\n ... y_axis='log')\n >>> plt.tight_layout()"} {"_id": "q_2134", "text": "Compute the zero-crossing rate of an audio time series.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n Audio time series\n\n frame_length : int > 0\n Length of the frame over which to compute zero crossing rates\n\n hop_length : int > 0\n Number of samples to advance for each frame\n\n center : bool\n If `True`, frames are centered by padding the edges of `y`.\n This is similar to the padding in `librosa.core.stft`,\n but uses edge-value copies instead of reflection.\n\n kwargs : additional keyword arguments\n See `librosa.core.zero_crossings`\n\n .. 
note:: By default, the `pad` parameter is set to `False`, which\n differs from the default specified by\n `librosa.core.zero_crossings`.\n\n Returns\n -------\n zcr : np.ndarray [shape=(1, t)]\n `zcr[0, i]` is the fraction of zero crossings in the\n `i` th frame\n\n See Also\n --------\n librosa.core.zero_crossings\n Compute zero-crossings in a time-series\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.feature.zero_crossing_rate(y)\n array([[ 0.134, 0.139, ..., 0.387, 0.322]])"} {"_id": "q_2135", "text": "Compute a chromagram from a waveform or power spectrogram.\n\n This implementation is derived from `chromagram_E` [1]_\n\n .. [1] Ellis, Daniel P.W. \"Chroma feature analysis and synthesis\"\n 2007/04/21\n http://labrosa.ee.columbia.edu/matlab/chroma-ansyn/\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n S : np.ndarray [shape=(d, t)] or None\n power spectrogram\n\n norm : float or None\n Column-wise normalization.\n See `librosa.util.normalize` for details.\n\n If `None`, no normalization is performed.\n\n n_fft : int > 0 [scalar]\n FFT window size if provided `y, sr` instead of `S`\n\n hop_length : int > 0 [scalar]\n hop length if provided `y, sr` instead of `S`\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. 
see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n\n tuning : float in `[-0.5, 0.5)` [scalar] or None.\n Deviation from A440 tuning in fractional bins (cents).\n If `None`, it is automatically estimated.\n\n kwargs : additional keyword arguments\n Arguments to parameterize chroma filters.\n See `librosa.filters.chroma` for details.\n\n Returns\n -------\n chromagram : np.ndarray [shape=(n_chroma, t)]\n Normalized energy for each chroma bin at each frame.\n\n See Also\n --------\n librosa.filters.chroma\n Chroma filter bank construction\n librosa.util.normalize\n Vector normalization\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.feature.chroma_stft(y=y, sr=sr)\n array([[ 0.974, 0.881, ..., 0.925, 1. ],\n [ 1. , 0.841, ..., 0.882, 0.878],\n ...,\n [ 0.658, 0.985, ..., 0.878, 0.764],\n [ 0.969, 0.92 , ..., 0.974, 0.915]])\n\n Use an energy (magnitude) spectrum instead of power spectrogram\n\n >>> S = np.abs(librosa.stft(y))\n >>> chroma = librosa.feature.chroma_stft(S=S, sr=sr)\n >>> chroma\n array([[ 0.884, 0.91 , ..., 0.861, 0.858],\n [ 0.963, 0.785, ..., 0.968, 0.896],\n ...,\n [ 0.871, 1. , ..., 0.928, 0.829],\n [ 1. 
, 0.982, ..., 0.93 , 0.878]])\n\n Use a pre-computed power spectrogram with a larger frame\n\n >>> S = np.abs(librosa.stft(y, n_fft=4096))**2\n >>> chroma = librosa.feature.chroma_stft(S=S, sr=sr)\n >>> chroma\n array([[ 0.685, 0.477, ..., 0.961, 0.986],\n [ 0.674, 0.452, ..., 0.952, 0.926],\n ...,\n [ 0.844, 0.575, ..., 0.934, 0.869],\n [ 0.793, 0.663, ..., 0.964, 0.972]])\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(10, 4))\n >>> librosa.display.specshow(chroma, y_axis='chroma', x_axis='time')\n >>> plt.colorbar()\n >>> plt.title('Chromagram')\n >>> plt.tight_layout()"} {"_id": "q_2136", "text": "Constant-Q chromagram\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n audio time series\n\n sr : number > 0\n sampling rate of `y`\n\n C : np.ndarray [shape=(d, t)] [Optional]\n a pre-computed constant-Q spectrogram\n\n hop_length : int > 0\n number of samples between successive chroma frames\n\n fmin : float > 0\n minimum frequency to analyze in the CQT.\n Default: 'C1' ~= 32.7 Hz\n\n norm : int > 0, +-np.inf, or None\n Column-wise normalization of the chromagram.\n\n threshold : float\n Pre-normalization energy threshold. 
Values below the\n threshold are discarded, resulting in a sparse chromagram.\n\n tuning : float\n Deviation (in cents) from A440 tuning\n\n n_chroma : int > 0\n Number of chroma bins to produce\n\n n_octaves : int > 0\n Number of octaves to analyze above `fmin`\n\n window : None or np.ndarray\n Optional window parameter to `filters.cq_to_chroma`\n\n bins_per_octave : int > 0\n Number of bins per octave in the CQT.\n Default: matches `n_chroma`\n\n cqt_mode : ['full', 'hybrid']\n Constant-Q transform mode\n\n Returns\n -------\n chromagram : np.ndarray [shape=(n_chroma, t)]\n The output chromagram\n\n See Also\n --------\n librosa.util.normalize\n librosa.core.cqt\n librosa.core.hybrid_cqt\n chroma_stft\n\n Examples\n --------\n Compare a long-window STFT chromagram to the CQT chromagram\n\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... offset=10, duration=15)\n >>> chroma_stft = librosa.feature.chroma_stft(y=y, sr=sr,\n ... n_chroma=12, n_fft=4096)\n >>> chroma_cq = librosa.feature.chroma_cqt(y=y, sr=sr)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> plt.subplot(2,1,1)\n >>> librosa.display.specshow(chroma_stft, y_axis='chroma')\n >>> plt.title('chroma_stft')\n >>> plt.colorbar()\n >>> plt.subplot(2,1,2)\n >>> librosa.display.specshow(chroma_cq, y_axis='chroma', x_axis='time')\n >>> plt.title('chroma_cqt')\n >>> plt.colorbar()\n >>> plt.tight_layout()"} {"_id": "q_2137", "text": "Compute a mel-scaled spectrogram.\n\n If a spectrogram input `S` is provided, then it is mapped directly onto\n the mel basis `mel_f` by `mel_f.dot(S)`.\n\n If a time-series input `y, sr` is provided, then its magnitude spectrogram\n `S` is first computed, and then mapped onto the mel scale by\n `mel_f.dot(S**power)`. 
By default, `power=2` operates on a power spectrum.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)] or None\n audio time-series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n S : np.ndarray [shape=(d, t)]\n spectrogram\n\n n_fft : int > 0 [scalar]\n length of the FFT window\n\n hop_length : int > 0 [scalar]\n number of samples between successive frames.\n See `librosa.core.stft`\n\n win_length : int <= n_fft [scalar]\n Each frame of audio is windowed by `window()`.\n The window will be of length `win_length` and then padded\n with zeros to match `n_fft`.\n\n If unspecified, defaults to ``win_length = n_fft``.\n\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n - a window specification (string, tuple, or number);\n see `scipy.signal.get_window`\n - a window function, such as `scipy.signal.hanning`\n - a vector or array of length `n_fft`\n\n .. see also:: `filters.get_window`\n\n center : boolean\n - If `True`, the signal `y` is padded so that frame\n `t` is centered at `y[t * hop_length]`.\n - If `False`, then frame `t` begins at `y[t * hop_length]`\n\n pad_mode : string\n If `center=True`, the padding mode to use at the edges of the signal.\n By default, STFT uses reflection padding.\n\n power : float > 0 [scalar]\n Exponent for the magnitude melspectrogram.\n e.g., 1 for energy, 2 for power, etc.\n\n kwargs : additional keyword arguments\n Mel filter bank parameters.\n See `librosa.filters.mel` for details.\n\n Returns\n -------\n S : np.ndarray [shape=(n_mels, t)]\n Mel spectrogram\n\n See Also\n --------\n librosa.filters.mel\n Mel filter bank construction\n\n librosa.core.stft\n Short-time Fourier Transform\n\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.feature.melspectrogram(y=y, sr=sr)\n array([[ 2.891e-07, 2.548e-03, ..., 8.116e-09, 5.633e-09],\n [ 1.986e-07, 1.162e-02, ..., 9.332e-08, 6.716e-09],\n ...,\n [ 3.668e-09, 2.029e-08, ..., 3.208e-09, 2.864e-09],\n 
[ 2.561e-10, 2.096e-09, ..., 7.543e-10, 6.101e-10]])\n\n Using a pre-computed power spectrogram\n\n >>> D = np.abs(librosa.stft(y))**2\n >>> S = librosa.feature.melspectrogram(S=D)\n\n >>> # Passing through arguments to the Mel filters\n >>> S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128,\n ... fmax=8000)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(10, 4))\n >>> librosa.display.specshow(librosa.power_to_db(S,\n ... ref=np.max),\n ... y_axis='mel', fmax=8000,\n ... x_axis='time')\n >>> plt.colorbar(format='%+2.0f dB')\n >>> plt.title('Mel spectrogram')\n >>> plt.tight_layout()"} {"_id": "q_2138", "text": "Jaccard similarity between two intervals\n\n Parameters\n ----------\n int_a, int_b : np.ndarrays, shape=(2,)\n\n Returns\n -------\n Jaccard similarity between intervals"} {"_id": "q_2139", "text": "Numba-accelerated interval matching algorithm."} {"_id": "q_2140", "text": "Match one set of time intervals to another.\n\n This can be useful for tasks such as mapping beat timings\n to segments.\n\n Each element `[a, b]` of `intervals_from` is matched to the\n element `[c, d]` of `intervals_to` which maximizes the\n Jaccard similarity between the intervals:\n\n `max(0, |min(b, d) - max(a, c)|) / |max(d, b) - min(a, c)|`\n\n In `strict=True` mode, if there is no interval with positive\n intersection with `[a,b]`, an exception is thrown.\n\n In `strict=False` mode, any interval `[a, b]` that has no\n intersection with any element of `intervals_to` is instead\n matched to the interval `[c, d]` which minimizes\n\n `min(|b - c|, |a - d|)`\n\n that is, the disjoint interval `[c, d]` with a boundary closest\n to `[a, b]`.\n\n .. 
note:: An element of `intervals_to` may be matched to multiple\n entries of `intervals_from`.\n\n Parameters\n ----------\n intervals_from : np.ndarray [shape=(n, 2)]\n The time range for source intervals.\n The `i` th interval spans time `intervals_from[i, 0]`\n to `intervals_from[i, 1]`.\n `intervals_from[0, 0]` should be 0, `intervals_from[-1, 1]`\n should be the track duration.\n\n intervals_to : np.ndarray [shape=(m, 2)]\n Analogous to `intervals_from`.\n\n strict : bool\n If `True`, intervals can only match if they intersect.\n If `False`, disjoint intervals can match.\n\n Returns\n -------\n interval_mapping : np.ndarray [shape=(n,)]\n For each interval in `intervals_from`, the\n corresponding interval in `intervals_to`.\n\n See Also\n --------\n match_events\n\n Raises\n ------\n ParameterError\n If either array of input intervals is not the correct shape\n\n If `strict=True` and some element of `intervals_from` is disjoint from\n every element of `intervals_to`.\n\n Examples\n --------\n >>> ints_from = np.array([[3, 5], [1, 4], [4, 5]])\n >>> ints_to = np.array([[0, 2], [1, 3], [4, 5], [6, 7]])\n >>> librosa.util.match_intervals(ints_from, ints_to)\n array([2, 1, 2], dtype=uint32)\n >>> # [3, 5] => [4, 5] (ints_to[2])\n >>> # [1, 4] => [1, 3] (ints_to[1])\n >>> # [4, 5] => [4, 5] (ints_to[2])\n\n The reverse matching of the above is not possible in `strict` mode\n because `[6, 7]` is disjoint from all intervals in `ints_from`.\n With `strict=False`, we get the following:\n >>> librosa.util.match_intervals(ints_to, ints_from, strict=False)\n array([1, 1, 2, 2], dtype=uint32)\n >>> # [0, 2] => [1, 4] (ints_from[1])\n >>> # [1, 3] => [1, 4] (ints_from[1])\n >>> # [4, 5] => [4, 5] (ints_from[2])\n >>> # [6, 7] => [4, 5] (ints_from[2])"} {"_id": "q_2141", "text": "Match one set of events to another.\n\n This is useful for tasks such as matching beats to the nearest\n detected onsets, or frame-aligned events to the nearest zero-crossing.\n\n .. 
note:: A target event may be matched to multiple source events.\n\n Examples\n --------\n >>> # Sources are multiples of 7\n >>> s_from = np.arange(0, 100, 7)\n >>> s_from\n array([ 0, 7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91,\n 98])\n >>> # Targets are multiples of 10\n >>> s_to = np.arange(0, 100, 10)\n >>> s_to\n array([ 0, 10, 20, 30, 40, 50, 60, 70, 80, 90])\n >>> # Find the matching\n >>> idx = librosa.util.match_events(s_from, s_to)\n >>> idx\n array([0, 1, 1, 2, 3, 3, 4, 5, 6, 6, 7, 8, 8, 9, 9])\n >>> # Print each source value with its matching target\n >>> list(zip(s_from, s_to[idx]))\n [(0, 0), (7, 10), (14, 10), (21, 20), (28, 30), (35, 30),\n (42, 40), (49, 50), (56, 60), (63, 60), (70, 70), (77, 80),\n (84, 80), (91, 90), (98, 90)]\n\n Parameters\n ----------\n events_from : ndarray [shape=(n,)]\n Array of events (e.g., times, sample or frame indices) to match from.\n\n events_to : ndarray [shape=(m,)]\n Array of events (e.g., times, sample or frame indices) to\n match against.\n\n left : bool\n right : bool\n If `False`, then matched events cannot be to the left (or right)\n of source events.\n\n Returns\n -------\n event_mapping : np.ndarray [shape=(n,)]\n For each event in `events_from`, the corresponding event\n index in `events_to`.\n\n `event_mapping[i] == arg min |events_from[i] - events_to[:]|`\n\n See Also\n --------\n match_intervals\n\n Raises\n ------\n ParameterError\n If either array of input events is not the correct shape"} {"_id": "q_2142", "text": "Populate a harmonic tensor from a time-frequency representation.\n\n Parameters\n ----------\n harmonic_out : np.ndarray, shape=(len(h_range), X.shape)\n The output array to store harmonics\n\n X : np.ndarray\n The input energy\n\n freqs : np.ndarray, shape=(x.shape[axis])\n The frequency values corresponding to x's elements along the\n chosen axis.\n\n h_range : list-like, non-negative\n Harmonics to compute. 
The first harmonic (1) corresponds to `x`\n itself.\n Values less than one (e.g., 1/2) correspond to sub-harmonics.\n\n kind : str\n Interpolation type. See `scipy.interpolate.interp1d`.\n\n fill_value : float\n The value to fill when extrapolating beyond the observed\n frequency range.\n\n axis : int\n The axis along which to compute harmonics\n\n See Also\n --------\n harmonics\n scipy.interpolate.interp1d\n\n\n Examples\n --------\n Estimate the harmonics of a time-averaged tempogram\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(),\n ... duration=15, offset=30)\n >>> # Compute the time-varying tempogram and average over time\n >>> tempi = np.mean(librosa.feature.tempogram(y=y, sr=sr), axis=1)\n >>> # We'll measure the first five harmonics\n >>> h_range = [1, 2, 3, 4, 5]\n >>> f_tempo = librosa.tempo_frequencies(len(tempi), sr=sr)\n >>> # Build the harmonic tensor\n >>> t_harmonics = librosa.interp_harmonics(tempi, f_tempo, h_range)\n >>> print(t_harmonics.shape)\n (5, 384)\n\n >>> # And plot the results\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> librosa.display.specshow(t_harmonics, x_axis='tempo', sr=sr)\n >>> plt.yticks(0.5 + np.arange(len(h_range)),\n ... ['{:.3g}'.format(_) for _ in h_range])\n >>> plt.ylabel('Harmonic')\n >>> plt.xlabel('Tempo (BPM)')\n >>> plt.tight_layout()\n\n We can also compute frequency harmonics for spectrograms.\n To calculate subharmonic energy, use values < 1.\n\n >>> h_range = [1./3, 1./2, 1, 2, 3, 4]\n >>> S = np.abs(librosa.stft(y))\n >>> fft_freqs = librosa.fft_frequencies(sr=sr)\n >>> S_harm = librosa.interp_harmonics(S, fft_freqs, h_range, axis=0)\n >>> print(S_harm.shape)\n (6, 1025, 646)\n\n >>> plt.figure()\n >>> for i, _sh in enumerate(S_harm, 1):\n ... plt.subplot(3,2,i)\n ... librosa.display.specshow(librosa.amplitude_to_db(_sh,\n ... ref=S.max()),\n ... sr=sr, y_axis='log')\n ... plt.title('h={:.3g}'.format(h_range[i-1]))\n ... 
plt.yticks([])\n >>> plt.tight_layout()"} {"_id": "q_2143", "text": "Load an audio file as a floating point time series.\n\n Audio will be automatically resampled to the given rate\n (default `sr=22050`).\n\n To preserve the native sampling rate of the file, use `sr=None`.\n\n Parameters\n ----------\n path : string, int, or file-like object\n path to the input file.\n\n Any codec supported by `soundfile` or `audioread` will work.\n\n If the codec is supported by `soundfile`, then `path` can also be\n an open file descriptor (int), or any object implementing Python's\n file interface.\n\n If the codec is not supported by `soundfile` (e.g., MP3), then only\n string file paths are supported.\n\n sr : number > 0 [scalar]\n target sampling rate\n\n 'None' uses the native sampling rate\n\n mono : bool\n convert signal to mono\n\n offset : float\n start reading after this time (in seconds)\n\n duration : float\n only load up to this much audio (in seconds)\n\n dtype : numeric type\n data type of `y`\n\n res_type : str\n resample type (see note)\n\n .. note::\n By default, this uses `resampy`'s high-quality mode ('kaiser_best').\n\n For alternative resampling modes, see `resample`\n\n .. 
note::\n `audioread` may truncate the precision of the audio data to 16 bits.\n\n See https://librosa.github.io/librosa/ioformats.html for alternate\n loading methods.\n\n\n Returns\n -------\n y : np.ndarray [shape=(n,) or (2, n)]\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n\n Examples\n --------\n >>> # Load an ogg vorbis file\n >>> filename = librosa.util.example_audio_file()\n >>> y, sr = librosa.load(filename)\n >>> y\n array([ -4.756e-06, -6.020e-06, ..., -1.040e-06, 0.000e+00], dtype=float32)\n >>> sr\n 22050\n\n >>> # Load a file and resample to 11 KHz\n >>> filename = librosa.util.example_audio_file()\n >>> y, sr = librosa.load(filename, sr=11025)\n >>> y\n array([ -2.077e-06, -2.928e-06, ..., -4.395e-06, 0.000e+00], dtype=float32)\n >>> sr\n 11025\n\n >>> # Load 5 seconds of a file, starting 15 seconds in\n >>> filename = librosa.util.example_audio_file()\n >>> y, sr = librosa.load(filename, offset=15.0, duration=5.0)\n >>> y\n array([ 0.069, 0.1 , ..., -0.101, 0. ], dtype=float32)\n >>> sr\n 22050"} {"_id": "q_2144", "text": "Resample a time series from orig_sr to target_sr\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,) or shape=(2, n)]\n audio time series. Can be mono or stereo.\n\n orig_sr : number > 0 [scalar]\n original sampling rate of `y`\n\n target_sr : number > 0 [scalar]\n target sampling rate\n\n res_type : str\n resample type (see note)\n\n .. note::\n By default, this uses `resampy`'s high-quality mode ('kaiser_best').\n\n To use a faster method, set `res_type='kaiser_fast'`.\n\n To use `scipy.signal.resample`, set `res_type='fft'` or `res_type='scipy'`.\n\n To use `scipy.signal.resample_poly`, set `res_type='polyphase'`.\n\n .. 
note::\n When using `res_type='polyphase'`, only integer sampling rates are\n supported.\n\n fix : bool\n adjust the length of the resampled signal to be of size exactly\n `ceil(target_sr * len(y) / orig_sr)`\n\n scale : bool\n Scale the resampled signal so that `y` and `y_hat` have approximately\n equal total energy.\n\n kwargs : additional keyword arguments\n If `fix==True`, additional keyword arguments to pass to\n `librosa.util.fix_length`.\n\n Returns\n -------\n y_hat : np.ndarray [shape=(n * target_sr / orig_sr,)]\n `y` resampled from `orig_sr` to `target_sr`\n\n Raises\n ------\n ParameterError\n If `res_type='polyphase'` and `orig_sr` or `target_sr` are not both\n integer-valued.\n\n See Also\n --------\n librosa.util.fix_length\n scipy.signal.resample\n resampy.resample\n\n Notes\n -----\n This function caches at level 20.\n\n Examples\n --------\n Downsample from 22 KHz to 8 KHz\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), sr=22050)\n >>> y_8k = librosa.resample(y, sr, 8000)\n >>> y.shape, y_8k.shape\n ((1355168,), (491671,))"} {"_id": "q_2145", "text": "Returns a signal with the signal `click` placed at each specified time\n\n Parameters\n ----------\n times : np.ndarray or None\n times to place clicks, in seconds\n\n frames : np.ndarray or None\n frame indices to place clicks\n\n sr : number > 0\n desired sampling rate of the output signal\n\n hop_length : int > 0\n if positions are specified by `frames`, the number of samples between frames.\n\n click_freq : float > 0\n frequency (in Hz) of the default click signal. Default is 1KHz.\n\n click_duration : float > 0\n duration (in seconds) of the default click signal. 
Default is 100ms.\n\n click : np.ndarray or None\n optional click signal sample to use instead of the default blip.\n\n length : int > 0\n desired number of samples in the output signal\n\n\n Returns\n -------\n click_signal : np.ndarray\n Synthesized click signal\n\n\n Raises\n ------\n ParameterError\n - If neither `times` nor `frames` are provided.\n - If any of `click_freq`, `click_duration`, or `length` are out of range.\n\n\n Examples\n --------\n >>> # Sonify detected beat events\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> tempo, beats = librosa.beat.beat_track(y=y, sr=sr)\n >>> y_beats = librosa.clicks(frames=beats, sr=sr)\n\n >>> # Or generate a signal of the same length as y\n >>> y_beats = librosa.clicks(frames=beats, sr=sr, length=len(y))\n\n >>> # Or use timing instead of frame indices\n >>> times = librosa.frames_to_time(beats, sr=sr)\n >>> y_beat_times = librosa.clicks(times=times, sr=sr)\n\n >>> # Or with a click frequency of 880Hz and a 500ms sample\n >>> y_beat_times880 = librosa.clicks(times=times, sr=sr,\n ... click_freq=880, click_duration=0.5)\n\n Display click waveform next to the spectrogram\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> S = librosa.feature.melspectrogram(y=y, sr=sr)\n >>> ax = plt.subplot(2,1,2)\n >>> librosa.display.specshow(librosa.power_to_db(S, ref=np.max),\n ... x_axis='time', y_axis='mel')\n >>> plt.subplot(2,1,1, sharex=ax)\n >>> librosa.display.waveplot(y_beat_times, sr=sr, label='Beat clicks')\n >>> plt.legend()\n >>> plt.xlim(15, 30)\n >>> plt.tight_layout()"} {"_id": "q_2146", "text": "Returns a pure tone signal. The signal generated is a cosine wave.\n\n Parameters\n ----------\n frequency : float > 0\n frequency\n\n sr : number > 0\n desired sampling rate of the output signal\n\n length : int > 0\n desired number of samples in the output signal. 
When both `duration` and `length` are defined,\n `length` takes priority.\n\n duration : float > 0\n desired duration in seconds. When both `duration` and `length` are defined, `length` takes priority.\n\n phi : float or None\n phase offset, in radians. If unspecified, defaults to `-np.pi * 0.5`.\n\n\n Returns\n -------\n tone_signal : np.ndarray [shape=(length,), dtype=float64]\n Synthesized pure sine tone signal\n\n\n Raises\n ------\n ParameterError\n - If `frequency` is not provided.\n - If neither `length` nor `duration` are provided.\n\n\n Examples\n --------\n >>> # Generate a pure sine tone A4\n >>> tone = librosa.tone(440, duration=1)\n\n >>> # Or generate the same signal using `length`\n >>> tone = librosa.tone(440, sr=22050, length=22050)\n\n Display spectrogram\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> S = librosa.feature.melspectrogram(y=tone)\n >>> librosa.display.specshow(librosa.power_to_db(S, ref=np.max),\n ... x_axis='time', y_axis='mel')"} {"_id": "q_2147", "text": "Returns a chirp signal that goes from frequency `fmin` to frequency `fmax`\n\n Parameters\n ----------\n fmin : float > 0\n initial frequency\n\n fmax : float > 0\n final frequency\n\n sr : number > 0\n desired sampling rate of the output signal\n\n length : int > 0\n desired number of samples in the output signal.\n When both `duration` and `length` are defined, `length` takes priority.\n\n duration : float > 0\n desired duration in seconds.\n When both `duration` and `length` are defined, `length` takes priority.\n\n linear : boolean\n - If `True`, use a linear sweep, i.e., frequency changes linearly with time\n - If `False`, use an exponential sweep.\n Default is `False`.\n\n phi : float or None\n phase offset, in radians.\n If unspecified, defaults to `-np.pi * 0.5`.\n\n\n Returns\n -------\n chirp_signal : np.ndarray [shape=(length,), dtype=float64]\n Synthesized chirp signal\n\n\n Raises\n ------\n ParameterError\n - If either `fmin` or 
`fmax` is not provided.\n - If neither `length` nor `duration` are provided.\n\n\n See Also\n --------\n scipy.signal.chirp\n\n\n Examples\n --------\n >>> # Generate an exponential chirp from A4 to A5\n >>> exponential_chirp = librosa.chirp(440, 880, duration=1)\n\n >>> # Or generate the same signal using `length`\n >>> exponential_chirp = librosa.chirp(440, 880, sr=22050, length=22050)\n\n >>> # Or generate a linear chirp instead\n >>> linear_chirp = librosa.chirp(440, 880, duration=1, linear=True)\n\n Display spectrogram for both exponential and linear chirps\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> S_exponential = librosa.feature.melspectrogram(y=exponential_chirp)\n >>> ax = plt.subplot(2,1,1)\n >>> librosa.display.specshow(librosa.power_to_db(S_exponential, ref=np.max),\n ... x_axis='time', y_axis='mel')\n >>> plt.subplot(2,1,2, sharex=ax)\n >>> S_linear = librosa.feature.melspectrogram(y=linear_chirp)\n >>> librosa.display.specshow(librosa.power_to_db(S_linear, ref=np.max),\n ... x_axis='time', y_axis='mel')\n >>> plt.tight_layout()"} {"_id": "q_2148", "text": "Phase-vocoder time stretch demo function.\n\n :parameters:\n - input_file : str\n path to input audio\n - output_file : str\n path to save output (wav)\n - speed : float > 0\n speed up by this factor"} {"_id": "q_2149", "text": "Argparse function to get the program parameters"} {"_id": "q_2150", "text": "HPSS demo function.\n\n :parameters:\n - input_file : str\n path to input audio\n - output_harmonic : str\n path to save output harmonic (wav)\n - output_percussive : str\n path to save output percussive (wav)"} {"_id": "q_2151", "text": "Dynamic programming beat tracker.\n\n Beats are detected in three stages, following the method of [1]_:\n 1. Measure onset strength\n 2. Estimate tempo from onset correlation\n 3. Pick peaks in onset strength approximately consistent with estimated\n tempo\n\n .. [1] Ellis, Daniel PW. 
\"Beat tracking by dynamic programming.\"\n Journal of New Music Research 36.1 (2007): 51-60.\n http://labrosa.ee.columbia.edu/projects/beattrack/\n\n\n Parameters\n ----------\n\n y : np.ndarray [shape=(n,)] or None\n audio time series\n\n sr : number > 0 [scalar]\n sampling rate of `y`\n\n onset_envelope : np.ndarray [shape=(n,)] or None\n (optional) pre-computed onset strength envelope.\n\n hop_length : int > 0 [scalar]\n number of audio samples between successive `onset_envelope` values\n\n start_bpm : float > 0 [scalar]\n initial guess for the tempo estimator (in beats per minute)\n\n tightness : float [scalar]\n tightness of beat distribution around tempo\n\n trim : bool [scalar]\n trim leading/trailing beats with weak onsets\n\n bpm : float [scalar]\n (optional) If provided, use `bpm` as the tempo instead of\n estimating it from `onsets`.\n\n units : {'frames', 'samples', 'time'}\n The units to encode detected beat events in.\n By default, 'frames' are used.\n\n\n Returns\n -------\n\n tempo : float [scalar, non-negative]\n estimated global tempo (in beats per minute)\n\n beats : np.ndarray [shape=(m,)]\n estimated beat event locations in the specified units\n (default is frame indices)\n\n .. 
note::\n If no onset strength could be detected, beat_tracker estimates 0 BPM\n and returns an empty list.\n\n\n Raises\n ------\n ParameterError\n if neither `y` nor `onset_envelope` are provided\n\n or if `units` is not one of 'frames', 'samples', or 'time'\n\n See Also\n --------\n librosa.onset.onset_strength\n\n\n Examples\n --------\n Track beats using time series input\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n\n >>> tempo, beats = librosa.beat.beat_track(y=y, sr=sr)\n >>> tempo\n 64.599609375\n\n\n Print the first 20 beat frames\n\n >>> beats[:20]\n array([ 320, 357, 397, 436, 480, 525, 569, 609, 658,\n 698, 737, 777, 817, 857, 896, 936, 976, 1016,\n 1055, 1095])\n\n\n Or print them as timestamps\n\n >>> librosa.frames_to_time(beats[:20], sr=sr)\n array([ 7.43 , 8.29 , 9.218, 10.124, 11.146, 12.19 ,\n 13.212, 14.141, 15.279, 16.208, 17.113, 18.042,\n 18.971, 19.9 , 20.805, 21.734, 22.663, 23.591,\n 24.497, 25.426])\n\n\n Track beats using a pre-computed onset envelope\n\n >>> onset_env = librosa.onset.onset_strength(y, sr=sr,\n ... aggregate=np.median)\n >>> tempo, beats = librosa.beat.beat_track(onset_envelope=onset_env,\n ... sr=sr)\n >>> tempo\n 64.599609375\n >>> beats[:20]\n array([ 320, 357, 397, 436, 480, 525, 569, 609, 658,\n 698, 737, 777, 817, 857, 896, 936, 976, 1016,\n 1055, 1095])\n\n\n Plot the beat events against the onset strength envelope\n\n >>> import matplotlib.pyplot as plt\n >>> hop_length = 512\n >>> plt.figure(figsize=(8, 4))\n >>> times = librosa.frames_to_time(np.arange(len(onset_env)),\n ... sr=sr, hop_length=hop_length)\n >>> plt.plot(times, librosa.util.normalize(onset_env),\n ... label='Onset strength')\n >>> plt.vlines(times[beats], 0, 1, alpha=0.5, color='r',\n ... 
linestyle='--', label='Beats')\n >>> plt.legend(frameon=True, framealpha=0.75)\n >>> # Limit the plot to a 15-second window\n >>> plt.xlim(15, 30)\n >>> plt.gca().xaxis.set_major_formatter(librosa.display.TimeFormatter())\n >>> plt.tight_layout()"} {"_id": "q_2152", "text": "Construct the local score for an onset envelope and given period"} {"_id": "q_2153", "text": "Get the last beat from the cumulative score array"} {"_id": "q_2154", "text": "Convert a recurrence matrix into a lag matrix.\n\n `lag[i, j] == rec[i+j, j]`\n\n Parameters\n ----------\n rec : np.ndarray, or scipy.sparse.spmatrix [shape=(n, n)]\n A (binary) recurrence matrix, as returned by `recurrence_matrix`\n\n pad : bool\n If False, `lag` matrix is square, which is equivalent to\n assuming that the signal repeats itself indefinitely.\n\n If True, `lag` is padded with `n` zeros, which eliminates\n the assumption of repetition.\n\n axis : int\n The axis to keep as the `time` axis.\n The alternate axis will be converted to lag coordinates.\n\n Returns\n -------\n lag : np.ndarray\n The recurrence matrix in (lag, time) (if `axis=1`)\n or (time, lag) (if `axis=0`) coordinates\n\n Raises\n ------\n ParameterError : if `rec` is non-square\n\n See Also\n --------\n recurrence_matrix\n lag_to_recurrence\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> mfccs = librosa.feature.mfcc(y=y, sr=sr)\n >>> recurrence = librosa.segment.recurrence_matrix(mfccs)\n >>> lag_pad = librosa.segment.recurrence_to_lag(recurrence, pad=True)\n >>> lag_nopad = librosa.segment.recurrence_to_lag(recurrence, pad=False)\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(8, 4))\n >>> plt.subplot(1, 2, 1)\n >>> librosa.display.specshow(lag_pad, x_axis='time', y_axis='lag')\n >>> plt.title('Lag (zero-padded)')\n >>> plt.subplot(1, 2, 2)\n >>> librosa.display.specshow(lag_nopad, x_axis='time')\n >>> plt.title('Lag (no padding)')\n >>> plt.tight_layout()"} {"_id": "q_2155", "text": 
"Filtering in the time-lag domain.\n\n This is primarily useful for adapting image filters to operate on\n `recurrence_to_lag` output.\n\n Using `timelag_filter` is equivalent to the following sequence of\n operations:\n\n >>> data_tl = librosa.segment.recurrence_to_lag(data)\n >>> data_filtered_tl = function(data_tl)\n >>> data_filtered = librosa.segment.lag_to_recurrence(data_filtered_tl)\n\n Parameters\n ----------\n function : callable\n The filtering function to wrap, e.g., `scipy.ndimage.median_filter`\n\n pad : bool\n Whether to zero-pad the structure feature matrix\n\n index : int >= 0\n If `function` accepts input data as a positional argument, it should be\n indexed by `index`\n\n\n Returns\n -------\n wrapped_function : callable\n A new filter function which applies in time-lag space rather than\n time-time space.\n\n\n Examples\n --------\n\n Apply a 5-bin median filter to the diagonal of a recurrence matrix\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> chroma = librosa.feature.chroma_cqt(y=y, sr=sr)\n >>> rec = librosa.segment.recurrence_matrix(chroma)\n >>> from scipy.ndimage import median_filter\n >>> diagonal_median = librosa.segment.timelag_filter(median_filter)\n >>> rec_filtered = diagonal_median(rec, size=(1, 3), mode='mirror')\n\n Or with affinity weights\n\n >>> rec_aff = librosa.segment.recurrence_matrix(chroma, mode='affinity')\n >>> rec_aff_fil = diagonal_median(rec_aff, size=(1, 3), mode='mirror')\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(8,8))\n >>> plt.subplot(2, 2, 1)\n >>> librosa.display.specshow(rec, y_axis='time')\n >>> plt.title('Raw recurrence matrix')\n >>> plt.subplot(2, 2, 2)\n >>> librosa.display.specshow(rec_filtered)\n >>> plt.title('Filtered recurrence matrix')\n >>> plt.subplot(2, 2, 3)\n >>> librosa.display.specshow(rec_aff, x_axis='time', y_axis='time',\n ... 
cmap='magma_r')\n >>> plt.title('Raw affinity matrix')\n >>> plt.subplot(2, 2, 4)\n >>> librosa.display.specshow(rec_aff_fil, x_axis='time',\n ... cmap='magma_r')\n >>> plt.title('Filtered affinity matrix')\n >>> plt.tight_layout()"} {"_id": "q_2156", "text": "Sub-divide a segmentation by feature clustering.\n\n Given a set of frame boundaries (`frames`), and a data matrix (`data`),\n each successive interval defined by `frames` is partitioned into\n `n_segments` by constrained agglomerative clustering.\n\n .. note::\n If an interval spans fewer than `n_segments` frames, then each\n frame becomes a sub-segment.\n\n Parameters\n ----------\n data : np.ndarray\n Data matrix to use in clustering\n\n frames : np.ndarray [shape=(n_boundaries,), dtype=int, non-negative]\n Array of beat or segment boundaries, as provided by\n `librosa.beat.beat_track`,\n `librosa.onset.onset_detect`,\n or `agglomerative`.\n\n n_segments : int > 0\n Maximum number of frames to sub-divide each interval.\n\n axis : int\n Axis along which to apply the segmentation.\n By default, the last index (-1) is taken.\n\n Returns\n -------\n boundaries : np.ndarray [shape=(n_subboundaries,)]\n List of sub-divided segment boundaries\n\n See Also\n --------\n agglomerative : Temporal segmentation\n librosa.onset.onset_detect : Onset detection\n librosa.beat.beat_track : Beat tracking\n\n Notes\n -----\n This function caches at level 30.\n\n Examples\n --------\n Load audio, detect beat frames, and subdivide in twos by CQT\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=8)\n >>> tempo, beats = librosa.beat.beat_track(y=y, sr=sr, hop_length=512)\n >>> beat_times = librosa.frames_to_time(beats, sr=sr, hop_length=512)\n >>> cqt = np.abs(librosa.cqt(y, sr=sr, hop_length=512))\n >>> subseg = librosa.segment.subsegment(cqt, beats, n_segments=2)\n >>> subseg_t = librosa.frames_to_time(subseg, sr=sr, hop_length=512)\n >>> subseg\n array([ 0, 2, 4, 21, 23, 26, 43, 55, 63, 72, 83,\n 97, 
102, 111, 122, 137, 142, 153, 162, 180, 182, 185,\n 202, 210, 221, 231, 241, 256, 261, 271, 281, 296, 301,\n 310, 320, 339, 341, 344, 361, 368, 382, 389, 401, 416,\n 420, 430, 436, 451, 456, 465, 476, 489, 496, 503, 515,\n 527, 535, 544, 553, 558, 571, 578, 590, 607, 609, 638])\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> librosa.display.specshow(librosa.amplitude_to_db(cqt,\n ... ref=np.max),\n ... y_axis='cqt_hz', x_axis='time')\n >>> lims = plt.gca().get_ylim()\n >>> plt.vlines(beat_times, lims[0], lims[1], color='lime', alpha=0.9,\n ... linewidth=2, label='Beats')\n >>> plt.vlines(subseg_t, lims[0], lims[1], color='linen', linestyle='--',\n ... linewidth=1.5, alpha=0.5, label='Sub-beats')\n >>> plt.legend(frameon=True, shadow=True)\n >>> plt.title('CQT + Beat and sub-beat markers')\n >>> plt.tight_layout()"} {"_id": "q_2157", "text": "Bottom-up temporal segmentation.\n\n Use a temporally-constrained agglomerative clustering routine to partition\n `data` into `k` contiguous segments.\n\n Parameters\n ----------\n data : np.ndarray\n data to cluster\n\n k : int > 0 [scalar]\n number of segments to produce\n\n clusterer : sklearn.cluster.AgglomerativeClustering, optional\n An optional AgglomerativeClustering object.\n If `None`, a constrained Ward object is instantiated.\n\n axis : int\n axis along which to cluster.\n By default, the last axis (-1) is chosen.\n\n Returns\n -------\n boundaries : np.ndarray [shape=(k,)]\n left-boundaries (frame numbers) of detected segments. 
This\n will always include `0` as the first left-boundary.\n\n See Also\n --------\n sklearn.cluster.AgglomerativeClustering\n\n Examples\n --------\n Cluster by chroma similarity, break into 20 segments\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=15)\n >>> chroma = librosa.feature.chroma_cqt(y=y, sr=sr)\n >>> bounds = librosa.segment.agglomerative(chroma, 20)\n >>> bound_times = librosa.frames_to_time(bounds, sr=sr)\n >>> bound_times\n array([ 0. , 1.672, 2.322, 2.624, 3.251, 3.506,\n 4.18 , 5.387, 6.014, 6.293, 6.943, 7.198,\n 7.848, 9.033, 9.706, 9.961, 10.635, 10.89 ,\n 11.54 , 12.539])\n\n Plot the segmentation over the chromagram\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure()\n >>> librosa.display.specshow(chroma, y_axis='chroma', x_axis='time')\n >>> plt.vlines(bound_times, 0, chroma.shape[0], color='linen', linestyle='--',\n ... linewidth=2, alpha=0.9, label='Segment boundaries')\n >>> plt.axis('tight')\n >>> plt.legend(frameon=True, shadow=True)\n >>> plt.title('Chromagram')\n >>> plt.tight_layout()"} {"_id": "q_2158", "text": "Multi-angle path enhancement for self- and cross-similarity matrices.\n\n This function convolves multiple diagonal smoothing filters with a self-similarity (or\n recurrence) matrix R, and aggregates the result by an element-wise maximum.\n\n Technically, the output is a matrix R_smooth such that\n\n `R_smooth[i, j] = max_theta (R * filter_theta)[i, j]`\n\n where `*` denotes 2-dimensional convolution, and `filter_theta` is a smoothing filter at\n orientation theta.\n\n This is intended to provide coherent temporal smoothing of self-similarity matrices\n when there are changes in tempo.\n\n Smoothing filters are generated at evenly spaced orientations between min_ratio and\n max_ratio.\n\n This function is inspired by the multi-angle path enhancement of [1]_, but differs by\n modeling tempo differences in the space of similarity matrices rather than re-sampling\n the underlying features 
prior to generating the self-similarity matrix.\n\n .. [1] M\u00fcller, Meinard and Frank Kurth.\n \"Enhancing similarity matrices for music audio analysis.\"\n 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings.\n Vol. 5. IEEE, 2006.\n\n .. note:: if using recurrence_matrix to construct the input similarity matrix, be sure to include the main\n diagonal by setting `self=True`. Otherwise, the diagonal will be suppressed, and this is likely to\n produce discontinuities which will pollute the smoothing filter response.\n\n Parameters\n ----------\n R : np.ndarray\n The self- or cross-similarity matrix to be smoothed.\n Note: sparse inputs are not supported.\n\n n : int > 0\n The length of the smoothing filter\n\n window : window specification\n The type of smoothing filter to use. See `filters.get_window` for more information\n on window specification formats.\n\n max_ratio : float > 0\n The maximum tempo ratio to support\n\n min_ratio : float > 0\n The minimum tempo ratio to support.\n If not provided, it will default to `1/max_ratio`\n\n n_filters : int >= 1\n The number of different smoothing filters to use, evenly spaced\n between `min_ratio` and `max_ratio`.\n\n If `min_ratio = 1/max_ratio` (the default), using an odd number\n of filters will ensure that the main diagonal (ratio=1) is included.\n\n zero_mean : bool\n By default, the smoothing filters are non-negative and sum to one (i.e. are averaging\n filters).\n\n If `zero_mean=True`, then the smoothing filters are made to sum to zero by subtracting\n a constant value from the non-diagonal coordinates of the filter. 
This is primarily\n useful for suppressing blocks while enhancing diagonals.\n\n clip : bool\n If True, the smoothed similarity matrix will be thresholded at 0, and will not contain\n negative entries.\n\n kwargs : additional keyword arguments\n Additional arguments to pass to `scipy.ndimage.convolve`\n\n\n Returns\n -------\n R_smooth : np.ndarray, shape=R.shape\n The smoothed self- or cross-similarity matrix\n\n See Also\n --------\n filters.diagonal_filter\n recurrence_matrix\n\n\n Examples\n --------\n Use a 51-frame diagonal smoothing filter to enhance paths in a recurrence matrix\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=30)\n >>> chroma = librosa.feature.chroma_cqt(y=y, sr=sr)\n >>> rec = librosa.segment.recurrence_matrix(chroma, mode='affinity', self=True)\n >>> rec_smooth = librosa.segment.path_enhance(rec, 51, window='hann', n_filters=7)\n\n Plot the recurrence matrix before and after smoothing\n\n >>> import matplotlib.pyplot as plt\n >>> plt.figure(figsize=(8, 4))\n >>> plt.subplot(1,2,1)\n >>> librosa.display.specshow(rec, x_axis='time', y_axis='time')\n >>> plt.title('Unfiltered recurrence')\n >>> plt.subplot(1,2,2)\n >>> librosa.display.specshow(rec_smooth, x_axis='time', y_axis='time')\n >>> plt.title('Multi-angle enhanced recurrence')\n >>> plt.tight_layout()"} {"_id": "q_2159", "text": "Slice a time series into overlapping frames.\n\n This implementation uses low-level stride manipulation to avoid\n redundant copies of the time series data.\n\n Parameters\n ----------\n y : np.ndarray [shape=(n,)]\n Time series to frame. 
Must be one-dimensional and contiguous\n in memory.\n\n frame_length : int > 0 [scalar]\n Length of the frame in samples\n\n hop_length : int > 0 [scalar]\n Number of samples to hop between frames\n\n Returns\n -------\n y_frames : np.ndarray [shape=(frame_length, N_FRAMES)]\n An array of frames sampled from `y`:\n `y_frames[i, j] == y[j * hop_length + i]`\n\n Raises\n ------\n ParameterError\n If `y` is not contiguous in memory, not an `np.ndarray`, or\n not one-dimensional. See `np.ascontiguousarray()` for details.\n\n If `hop_length < 1`, frames cannot advance.\n\n If `len(y) < frame_length`.\n\n Examples\n --------\n Extract 2048-sample frames from `y` with a hop of 64 samples per frame\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> librosa.util.frame(y, frame_length=2048, hop_length=64)\n array([[ -9.216e-06, 7.710e-06, ..., -2.117e-06, -4.362e-07],\n [ 2.518e-06, -6.294e-06, ..., -1.775e-05, -6.365e-06],\n ...,\n [ -7.429e-04, 5.173e-03, ..., 1.105e-05, -5.074e-06],\n [ 2.169e-03, 4.867e-03, ..., 3.666e-06, -5.571e-06]], dtype=float32)"} {"_id": "q_2160", "text": "Ensure that an input value is integer-typed.\n This is primarily useful for ensuring integer-valued\n array indices.\n\n Parameters\n ----------\n x : number\n A scalar value to be cast to int\n\n cast : function [optional]\n A function to modify `x` before casting.\n Default: `np.floor`\n\n Returns\n -------\n x_int : int\n `x_int = int(cast(x))`\n\n Raises\n ------\n ParameterError\n If `cast` is provided and is not callable."} {"_id": "q_2161", "text": "Normalize an array along a chosen axis.\n\n Given a norm (described below) and a target axis, the input\n array is scaled so that\n\n `norm(S, axis=axis) == 1`\n\n For example, `axis=0` normalizes each column of a 2-d array\n by aggregating over the rows (0-axis).\n Similarly, `axis=1` normalizes each row of a 2-d array.\n\n This function also supports thresholding small-norm slices:\n any slice (i.e., row or column) with norm 
below a specified\n `threshold` can be left un-normalized, set to all-zeros, or\n filled with uniform non-zero values that normalize to 1.\n\n Note: the semantics of this function differ from\n `scipy.linalg.norm` in two ways: multi-dimensional arrays\n are supported, but matrix-norms are not.\n\n\n Parameters\n ----------\n S : np.ndarray\n The matrix to normalize\n\n norm : {np.inf, -np.inf, 0, float > 0, None}\n - `np.inf` : maximum absolute value\n - `-np.inf` : minimum absolute value\n - `0` : number of non-zeros (the support)\n - float : corresponding l_p norm\n See `scipy.linalg.norm` for details.\n - None : no normalization is performed\n\n axis : int [scalar]\n Axis along which to compute the norm.\n\n threshold : number > 0 [optional]\n Only the columns (or rows) with norm at least `threshold` are\n normalized.\n\n By default, the threshold is determined from\n the numerical precision of `S.dtype`.\n\n fill : None or bool\n If None, then columns (or rows) with norm below `threshold`\n are left as is.\n\n If False, then columns (rows) with norm below `threshold`\n are set to 0.\n\n If True, then columns (rows) with norm below `threshold`\n are filled uniformly such that the corresponding norm is 1.\n\n .. note:: `fill=True` is incompatible with `norm=0` because\n no uniform vector exists with l0 \"norm\" equal to 1.\n\n Returns\n -------\n S_norm : np.ndarray [shape=S.shape]\n Normalized array\n\n Raises\n ------\n ParameterError\n If `norm` is not among the valid types defined above\n\n If `S` is not finite\n\n If `fill=True` and `norm=0`\n\n See Also\n --------\n scipy.linalg.norm\n\n Notes\n -----\n This function caches at level 40.\n\n Examples\n --------\n >>> # Construct an example matrix\n >>> S = np.vander(np.arange(-2.0, 2.0))\n >>> S\n array([[-8., 4., -2., 1.],\n [-1., 1., -1., 1.],\n [ 0., 0., 0., 1.],\n [ 1., 1., 1., 1.]])\n >>> # Max (l-infinity)-normalize the columns\n >>> librosa.util.normalize(S)\n array([[-1. , 1. , -1. , 1. 
],\n [-0.125, 0.25 , -0.5 , 1. ],\n [ 0. , 0. , 0. , 1. ],\n [ 0.125, 0.25 , 0.5 , 1. ]])\n >>> # Max (l-infinity)-normalize the rows\n >>> librosa.util.normalize(S, axis=1)\n array([[-1. , 0.5 , -0.25 , 0.125],\n [-1. , 1. , -1. , 1. ],\n [ 0. , 0. , 0. , 1. ],\n [ 1. , 1. , 1. , 1. ]])\n >>> # l1-normalize the columns\n >>> librosa.util.normalize(S, norm=1)\n array([[-0.8 , 0.667, -0.5 , 0.25 ],\n [-0.1 , 0.167, -0.25 , 0.25 ],\n [ 0. , 0. , 0. , 0.25 ],\n [ 0.1 , 0.167, 0.25 , 0.25 ]])\n >>> # l2-normalize the columns\n >>> librosa.util.normalize(S, norm=2)\n array([[-0.985, 0.943, -0.816, 0.5 ],\n [-0.123, 0.236, -0.408, 0.5 ],\n [ 0. , 0. , 0. , 0.5 ],\n [ 0.123, 0.236, 0.408, 0.5 ]])\n\n >>> # Thresholding and filling\n >>> S[:, -1] = 1e-308\n >>> S\n array([[ -8.000e+000, 4.000e+000, -2.000e+000,\n 1.000e-308],\n [ -1.000e+000, 1.000e+000, -1.000e+000,\n 1.000e-308],\n [ 0.000e+000, 0.000e+000, 0.000e+000,\n 1.000e-308],\n [ 1.000e+000, 1.000e+000, 1.000e+000,\n 1.000e-308]])\n\n >>> # By default, small-norm columns are left untouched\n >>> librosa.util.normalize(S)\n array([[ -1.000e+000, 1.000e+000, -1.000e+000,\n 1.000e-308],\n [ -1.250e-001, 2.500e-001, -5.000e-001,\n 1.000e-308],\n [ 0.000e+000, 0.000e+000, 0.000e+000,\n 1.000e-308],\n [ 1.250e-001, 2.500e-001, 5.000e-001,\n 1.000e-308]])\n >>> # Small-norm columns can be zeroed out\n >>> librosa.util.normalize(S, fill=False)\n array([[-1. , 1. , -1. , 0. ],\n [-0.125, 0.25 , -0.5 , 0. ],\n [ 0. , 0. , 0. , 0. ],\n [ 0.125, 0.25 , 0.5 , 0. ]])\n >>> # Or set to constant with unit-norm\n >>> librosa.util.normalize(S, fill=True)\n array([[-1. , 1. , -1. , 1. ],\n [-0.125, 0.25 , -0.5 , 1. ],\n [ 0. , 0. , 0. , 1. ],\n [ 0.125, 0.25 , 0.5 , 1. ]])\n >>> # With an l1 norm instead of max-norm\n >>> librosa.util.normalize(S, norm=1, fill=True)\n array([[-0.8 , 0.667, -0.5 , 0.25 ],\n [-0.1 , 0.167, -0.25 , 0.25 ],\n [ 0. , 0. , 0. 
, 0.25 ],\n [ 0.1 , 0.167, 0.25 , 0.25 ]])"} {"_id": "q_2162", "text": "Uses a flexible heuristic to pick peaks in a signal.\n\n A sample n is selected as a peak if the corresponding x[n]\n fulfills the following three conditions:\n\n 1. `x[n] == max(x[n - pre_max:n + post_max])`\n 2. `x[n] >= mean(x[n - pre_avg:n + post_avg]) + delta`\n 3. `n - previous_n > wait`\n\n where `previous_n` is the last sample picked as a peak (greedily).\n\n This implementation is based on [1]_ and [2]_.\n\n .. [1] Boeck, Sebastian, Florian Krebs, and Markus Schedl.\n \"Evaluating the Online Capabilities of Onset Detection Methods.\" ISMIR.\n 2012.\n\n .. [2] https://github.com/CPJKU/onset_detection/blob/master/onset_program.py\n\n\n Parameters\n ----------\n x : np.ndarray [shape=(n,)]\n input signal to pick peaks from\n\n pre_max : int >= 0 [scalar]\n number of samples before `n` over which max is computed\n\n post_max : int >= 1 [scalar]\n number of samples after `n` over which max is computed\n\n pre_avg : int >= 0 [scalar]\n number of samples before `n` over which mean is computed\n\n post_avg : int >= 1 [scalar]\n number of samples after `n` over which mean is computed\n\n delta : float >= 0 [scalar]\n threshold offset for mean\n\n wait : int >= 0 [scalar]\n number of samples to wait after picking a peak\n\n Returns\n -------\n peaks : np.ndarray [shape=(n_peaks,), dtype=int]\n indices of peaks in `x`\n\n Raises\n ------\n ParameterError\n If any input lies outside its defined range\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.util.example_audio_file(), duration=15)\n >>> onset_env = librosa.onset.onset_strength(y=y, sr=sr,\n ... hop_length=512,\n ... 
aggregate=np.median)\n >>> peaks = librosa.util.peak_pick(onset_env, 3, 3, 3, 5, 0.5, 10)\n >>> peaks\n array([ 4, 23, 73, 102, 142, 162, 182, 211, 261, 301, 320,\n 331, 348, 368, 382, 396, 411, 431, 446, 461, 476, 491,\n 510, 525, 536, 555, 570, 590, 609, 625, 639])\n\n >>> import matplotlib.pyplot as plt\n >>> times = librosa.frames_to_time(np.arange(len(onset_env)),\n ... sr=sr, hop_length=512)\n >>> plt.figure()\n >>> ax = plt.subplot(2, 1, 2)\n >>> D = librosa.stft(y)\n >>> librosa.display.specshow(librosa.amplitude_to_db(D, ref=np.max),\n ... y_axis='log', x_axis='time')\n >>> plt.subplot(2, 1, 1, sharex=ax)\n >>> plt.plot(times, onset_env, alpha=0.8, label='Onset strength')\n >>> plt.vlines(times[peaks], 0,\n ... onset_env.max(), color='r', alpha=0.8,\n ... label='Selected peaks')\n >>> plt.legend(frameon=True, framealpha=0.8)\n >>> plt.axis('tight')\n >>> plt.tight_layout()"} {"_id": "q_2163", "text": "Convert an integer buffer to floating point values.\n This is primarily useful when loading integer-valued wav data\n into numpy arrays.\n\n See Also\n --------\n buf_to_float\n\n Parameters\n ----------\n x : np.ndarray [dtype=int]\n The integer-valued data buffer\n\n n_bytes : int [1, 2, 4]\n The number of bytes per sample in `x`\n\n dtype : numeric type\n The target output type (default: 32-bit float)\n\n Returns\n -------\n x_float : np.ndarray [dtype=float]\n The input data buffer cast to floating point"} {"_id": "q_2164", "text": "Synchronous aggregation of a multi-dimensional array between boundaries\n\n .. 
note::\n In order to ensure total coverage, boundary points may be added\n to `idx`.\n\n If synchronizing a feature matrix against beat tracker output, ensure\n that frame index numbers are properly aligned and use the same hop length.\n\n Parameters\n ----------\n data : np.ndarray\n multi-dimensional array of features\n\n idx : iterable of ints or slices\n Either an ordered array of boundary indices, or\n an iterable collection of slice objects.\n\n\n aggregate : function\n aggregation function (default: `np.mean`)\n\n pad : boolean\n If `True`, `idx` is padded to span the full range `[0, data.shape[axis]]`\n\n axis : int\n The axis along which to aggregate data\n\n Returns\n -------\n data_sync : ndarray\n `data_sync` will have the same dimension as `data`, except that the `axis`\n coordinate will be reduced according to `idx`.\n\n For example, a 2-dimensional `data` with `axis=-1` should satisfy\n\n `data_sync[:, i] = aggregate(data[:, idx[i-1]:idx[i]], axis=-1)`\n\n Raises\n ------\n ParameterError\n If the index set is not of consistent type (all slices or all integers)\n\n Notes\n -----\n This function caches at level 40.\n\n Examples\n --------\n Beat-synchronous CQT spectra\n\n >>> y, sr = librosa.load(librosa.util.example_audio_file())\n >>> tempo, beats = librosa.beat.beat_track(y=y, sr=sr, trim=False)\n >>> C = np.abs(librosa.cqt(y=y, sr=sr))\n >>> beats = librosa.util.fix_frames(beats, x_max=C.shape[1])\n\n By default, use mean aggregation\n\n >>> C_avg = librosa.util.sync(C, beats)\n\n Use median-aggregation instead of mean\n\n >>> C_med = librosa.util.sync(C, beats,\n ... 
aggregate=np.median)\n\n Or sub-beat synchronization\n\n >>> sub_beats = librosa.segment.subsegment(C, beats)\n >>> sub_beats = librosa.util.fix_frames(sub_beats, x_max=C.shape[1])\n >>> C_med_sub = librosa.util.sync(C, sub_beats, aggregate=np.median)\n\n\n Plot the results\n\n >>> import matplotlib.pyplot as plt\n >>> beat_t = librosa.frames_to_time(beats, sr=sr)\n >>> subbeat_t = librosa.frames_to_time(sub_beats, sr=sr)\n >>> plt.figure()\n >>> plt.subplot(3, 1, 1)\n >>> librosa.display.specshow(librosa.amplitude_to_db(C,\n ... ref=np.max),\n ... x_axis='time')\n >>> plt.title('CQT power, shape={}'.format(C.shape))\n >>> plt.subplot(3, 1, 2)\n >>> librosa.display.specshow(librosa.amplitude_to_db(C_med,\n ... ref=np.max),\n ... x_coords=beat_t, x_axis='time')\n >>> plt.title('Beat synchronous CQT power, '\n ... 'shape={}'.format(C_med.shape))\n >>> plt.subplot(3, 1, 3)\n >>> librosa.display.specshow(librosa.amplitude_to_db(C_med_sub,\n ... ref=np.max),\n ... x_coords=subbeat_t, x_axis='time')\n >>> plt.title('Sub-beat synchronous CQT power, '\n ... 
'shape={}'.format(C_med_sub.shape))\n >>> plt.tight_layout()"} {"_id": "q_2165", "text": "Robustly compute a softmask operation.\n\n `M = X**power / (X**power + X_ref**power)`\n\n\n Parameters\n ----------\n X : np.ndarray\n The (non-negative) input array corresponding to the positive mask elements\n\n X_ref : np.ndarray\n The (non-negative) array of reference or background elements.\n Must have the same shape as `X`.\n\n power : number > 0 or np.inf\n If finite, returns the soft mask computed in a numerically stable way\n\n If infinite, returns a hard (binary) mask equivalent to `X > X_ref`.\n Note: for hard masks, ties are always broken in favor of `X_ref` (`mask=0`).\n\n\n split_zeros : bool\n If `True`, entries where `X` and `X_ref` are both small (close to 0)\n will receive mask values of 0.5.\n\n Otherwise, the mask is set to 0 for these entries.\n\n\n Returns\n -------\n mask : np.ndarray, shape=`X.shape`\n The output mask array\n\n Raises\n ------\n ParameterError\n If `X` and `X_ref` have different shapes.\n\n If `X` or `X_ref` are negative anywhere\n\n If `power <= 0`\n\n Examples\n --------\n\n >>> X = 2 * np.ones((3, 3))\n >>> X_ref = np.vander(np.arange(3.0))\n >>> X\n array([[ 2., 2., 2.],\n [ 2., 2., 2.],\n [ 2., 2., 2.]])\n >>> X_ref\n array([[ 0., 0., 1.],\n [ 1., 1., 1.],\n [ 4., 2., 1.]])\n >>> librosa.util.softmask(X, X_ref, power=1)\n array([[ 1. , 1. , 0.667],\n [ 0.667, 0.667, 0.667],\n [ 0.333, 0.5 , 0.667]])\n >>> librosa.util.softmask(X_ref, X, power=1)\n array([[ 0. , 0. , 0.333],\n [ 0.333, 0.333, 0.333],\n [ 0.667, 0.5 , 0.333]])\n >>> librosa.util.softmask(X, X_ref, power=2)\n array([[ 1. , 1. , 0.8],\n [ 0.8, 0.8, 0.8],\n [ 0.2, 0.5, 0.8]])\n >>> librosa.util.softmask(X, X_ref, power=4)\n array([[ 1. , 1. 
, 0.941],\n [ 0.941, 0.941, 0.941],\n [ 0.059, 0.5 , 0.941]])\n >>> librosa.util.softmask(X, X_ref, power=100)\n array([[ 1.000e+00, 1.000e+00, 1.000e+00],\n [ 1.000e+00, 1.000e+00, 1.000e+00],\n [ 7.889e-31, 5.000e-01, 1.000e+00]])\n >>> librosa.util.softmask(X, X_ref, power=np.inf)\n array([[ True, True, True],\n [ True, True, True],\n [False, False, True]], dtype=bool)"} {"_id": "q_2166", "text": "Compute the tiny-value corresponding to an input's data type.\n\n This is the smallest \"usable\" number representable in `x`'s\n data type (e.g., float32).\n\n This is primarily useful for determining a threshold for\n numerical underflow in division or multiplication operations.\n\n Parameters\n ----------\n x : number or np.ndarray\n The array to compute the tiny-value for.\n All that matters here is `x.dtype`.\n\n Returns\n -------\n tiny_value : float\n The smallest positive usable number for the type of `x`.\n If `x` is integer-typed, then the tiny value for `np.float32`\n is returned instead.\n\n See Also\n --------\n numpy.finfo\n\n Examples\n --------\n\n For a standard double-precision floating point number:\n\n >>> librosa.util.tiny(1.0)\n 2.2250738585072014e-308\n\n Or explicitly as double-precision\n\n >>> librosa.util.tiny(np.asarray(1e-5, dtype=np.float64))\n 2.2250738585072014e-308\n\n Or complex numbers\n\n >>> librosa.util.tiny(1j)\n 2.2250738585072014e-308\n\n Single-precision floating point:\n\n >>> librosa.util.tiny(np.asarray(1e-5, dtype=np.float32))\n 1.1754944e-38\n\n Integer\n\n >>> librosa.util.tiny(5)\n 1.1754944e-38"} {"_id": "q_2167", "text": "Read the frame images from a directory and join them as a video\n\n Args:\n frame_dir (str): The directory containing video frames.\n video_file (str): Output filename.\n fps (float): FPS of the output video.\n fourcc (str): Fourcc of the output video, this should be compatible\n with the output file type.\n filename_tmpl (str): Filename template with the index as the variable.\n start (int): Starting 
frame index.\n end (int): Ending frame index.\n show_progress (bool): Whether to show a progress bar."} {"_id": "q_2168", "text": "Get frame by index.\n\n Args:\n frame_id (int): Index of the expected frame, 0-based.\n\n Returns:\n ndarray or None: Return the frame if successful, otherwise None."} {"_id": "q_2169", "text": "Track the progress of tasks execution with a progress bar.\n\n Tasks are done with a simple for-loop.\n\n Args:\n func (callable): The function to be applied to each task.\n tasks (list or tuple[Iterable, int]): A list of tasks or\n (tasks, total num).\n bar_width (int): Width of progress bar.\n\n Returns:\n list: The task results."} {"_id": "q_2170", "text": "Track the progress of parallel task execution with a progress bar.\n\n The built-in :mod:`multiprocessing` module is used for process pools and\n tasks are done with :func:`Pool.map` or :func:`Pool.imap_unordered`.\n\n Args:\n func (callable): The function to be applied to each task.\n tasks (list or tuple[Iterable, int]): A list of tasks or\n (tasks, total num).\n nproc (int): Process (worker) number.\n initializer (None or callable): Refer to :class:`multiprocessing.Pool`\n for details.\n initargs (None or tuple): Refer to :class:`multiprocessing.Pool` for\n details.\n chunksize (int): Refer to :class:`multiprocessing.Pool` for details.\n bar_width (int): Width of progress bar.\n skip_first (bool): Whether to skip the first sample for each worker\n when estimating fps, since the initialization step may take\n longer.\n keep_order (bool): If True, :func:`Pool.imap` is used, otherwise\n :func:`Pool.imap_unordered` is used.\n\n Returns:\n list: The task results."} {"_id": "q_2171", "text": "Clip bboxes to fit the image shape.\n\n Args:\n bboxes (ndarray): Shape (..., 4*k)\n img_shape (tuple): (height, width) of the image.\n\n Returns:\n ndarray: Clipped bboxes."} {"_id": "q_2172", "text": "Crop image patches.\n\n 3 steps: scale the bboxes -> clip bboxes -> crop and pad.\n\n Args:\n img 
(ndarray): Image to be cropped.\n bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes.\n scale (float, optional): Scale ratio of bboxes, the default value\n 1.0 means no scaling.\n pad_fill (number or list): Value to be filled for padding, None for\n no padding.\n\n Returns:\n list or ndarray: The cropped image patches."} {"_id": "q_2173", "text": "Pad an image to a certain shape.\n\n Args:\n img (ndarray): Image to be padded.\n shape (tuple): Expected padding shape.\n pad_val (number or sequence): Values to be filled in padding areas.\n\n Returns:\n ndarray: The padded image."} {"_id": "q_2174", "text": "Pad an image to ensure that each edge is a multiple of some number.\n\n Args:\n img (ndarray): Image to be padded.\n divisor (int): Padded image edges will be multiples of the divisor.\n pad_val (number or sequence): Same as :func:`impad`.\n\n Returns:\n ndarray: The padded image."} {"_id": "q_2175", "text": "Rescale a size by a ratio.\n\n Args:\n size (tuple): w, h.\n scale (float): Scaling factor.\n\n Returns:\n tuple[int]: scaled size."} {"_id": "q_2176", "text": "Resize image to a given size.\n\n Args:\n img (ndarray): The input image.\n size (tuple): Target (w, h).\n return_scale (bool): Whether to return `w_scale` and `h_scale`.\n interpolation (str): Interpolation method, accepted values are\n \"nearest\", \"bilinear\", \"bicubic\", \"area\", \"lanczos\".\n\n Returns:\n tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or\n `resized_img`."} {"_id": "q_2177", "text": "Resize image to the same size as a given image.\n\n Args:\n img (ndarray): The input image.\n dst_img (ndarray): The target image.\n return_scale (bool): Whether to return `w_scale` and `h_scale`.\n interpolation (str): Same as :func:`resize`.\n\n Returns:\n tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or\n `resized_img`."} {"_id": "q_2178", "text": "Resize image while keeping the aspect ratio.\n\n Args:\n img (ndarray): The input image.\n scale (float or tuple[int]): 
The scaling factor or maximum size.\n If it is a float number, then the image will be rescaled by this\n factor, else if it is a tuple of 2 integers, then the image will\n be rescaled as large as possible within the scale.\n return_scale (bool): Whether to return the scaling factor besides the\n rescaled image.\n interpolation (str): Same as :func:`resize`.\n\n Returns:\n ndarray: The rescaled image."} {"_id": "q_2179", "text": "Register a handler for some file extensions.\n\n Args:\n handler (:obj:`BaseFileHandler`): Handler to be registered.\n file_formats (str or list[str]): File formats to be handled by this\n handler."} {"_id": "q_2180", "text": "Get priority value.\n\n Args:\n priority (int or str or :obj:`Priority`): Priority.\n\n Returns:\n int: The priority value."} {"_id": "q_2181", "text": "Draw bboxes on an image.\n\n Args:\n img (str or ndarray): The image to be displayed.\n bboxes (list or ndarray): A list of ndarray of shape (k, 4).\n colors (list[str or tuple or Color]): A list of colors.\n top_k (int): Plot the first k bboxes only if set positive.\n thickness (int): Thickness of lines.\n show (bool): Whether to show the image.\n win_name (str): The window name.\n wait_time (int): Value of waitKey param.\n out_file (str, optional): The filename to write the image."} {"_id": "q_2182", "text": "Read an optical flow map.\n\n Args:\n flow_or_path (ndarray or str): A flow map or filepath.\n quantize (bool): whether to read quantized pair, if set to True,\n remaining args will be passed to :func:`dequantize_flow`.\n concat_axis (int): The axis that dx and dy are concatenated,\n can be either 0 or 1. 
Ignored if quantize is False.\n\n Returns:\n ndarray: Optical flow represented as a (h, w, 2) numpy array"} {"_id": "q_2183", "text": "Recover from quantized flow.\n\n Args:\n dx (ndarray): Quantized dx.\n dy (ndarray): Quantized dy.\n max_val (float): Maximum value used when quantizing.\n denorm (bool): Whether to multiply flow values with width/height.\n\n Returns:\n ndarray: Dequantized flow."} {"_id": "q_2184", "text": "Load state_dict to a module.\n\n This method is modified from :meth:`torch.nn.Module.load_state_dict`.\n Default value for ``strict`` is set to ``False`` and the message for\n param mismatch will be shown even if strict is False.\n\n Args:\n module (Module): Module that receives the state_dict.\n state_dict (OrderedDict): Weights.\n strict (bool): whether to strictly enforce that the keys\n in :attr:`state_dict` match the keys returned by this module's\n :meth:`~torch.nn.Module.state_dict` function. Default: ``False``.\n logger (:obj:`logging.Logger`, optional): Logger to log the error\n message. 
If not specified, print function will be used."} {"_id": "q_2185", "text": "Copy a model state_dict to cpu.\n\n Args:\n state_dict (OrderedDict): Model weights on GPU.\n\n Returns:\n OrderedDict: Model weights on CPU."} {"_id": "q_2186", "text": "Init the optimizer.\n\n Args:\n optimizer (dict or :obj:`~torch.optim.Optimizer`): Either an\n optimizer object or a dict used for constructing the optimizer.\n\n Returns:\n :obj:`~torch.optim.Optimizer`: An optimizer object.\n\n Examples:\n >>> optimizer = dict(type='SGD', lr=0.01, momentum=0.9)\n >>> type(runner.init_optimizer(optimizer))\n <class 'torch.optim.sgd.SGD'>"} {"_id": "q_2187", "text": "Get current learning rates.\n\n Returns:\n list: Current learning rate of all param groups."} {"_id": "q_2188", "text": "Register a hook into the hook list.\n\n Args:\n hook (:obj:`Hook`): The hook to be registered.\n priority (int or str or :obj:`Priority`): Hook priority.\n Lower value means higher priority."} {"_id": "q_2189", "text": "Register default hooks for training.\n\n Default hooks include:\n\n - LrUpdaterHook\n - OptimizerStepperHook\n - CheckpointSaverHook\n - IterTimerHook\n - LoggerHook(s)"} {"_id": "q_2190", "text": "Convert a video with ffmpeg.\n\n This provides a general api to ffmpeg, the executed command is::\n\n `ffmpeg -y <pre_options> -i <in_file> <options> <out_file>`\n\n Options(kwargs) are mapped to ffmpeg commands with the following rules:\n\n - key=val: \"-key val\"\n - key=True: \"-key\"\n - key=False: \"\"\n\n Args:\n in_file (str): Input video filename.\n out_file (str): Output video filename.\n pre_options (str): Options that appear before \"-i <in_file>\".\n print_cmd (bool): Whether to print the final ffmpeg command."} {"_id": "q_2191", "text": "Resize a video.\n\n Args:\n in_file (str): Input video filename.\n out_file (str): Output video filename.\n size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1).\n ratio (tuple or float): Expected resize ratio, (2, 0.5) means\n (w*2, h*0.5).\n keep_ar (bool): Whether to keep original aspect ratio.\n log_level (str): Logging 
level of ffmpeg.\n print_cmd (bool): Whether to print the final ffmpeg command."} {"_id": "q_2192", "text": "Load a text file and parse the content as a dict.\n\n Each line of the text file will be two or more columns split by\n whitespace or tabs. The first column will be parsed as dict keys, and\n the following columns will be parsed as dict values.\n\n Args:\n filename(str): Filename.\n key_type(type): Type of the dict's keys. str is used by default and\n type conversion will be performed if specified.\n\n Returns:\n dict: The parsed contents."} {"_id": "q_2193", "text": "3x3 convolution with padding"} {"_id": "q_2194", "text": "Read an image.\n\n Args:\n img_or_path (ndarray or str): Either a numpy array or image path.\n If it is a numpy array (loaded image), then it will be returned\n as is.\n flag (str): Flags specifying the color type of a loaded image,\n candidates are `color`, `grayscale` and `unchanged`.\n\n Returns:\n ndarray: Loaded image array."} {"_id": "q_2195", "text": "Read an image from bytes.\n\n Args:\n content (bytes): Image bytes got from files or other streams.\n flag (str): Same as :func:`imread`.\n\n Returns:\n ndarray: Loaded image array."} {"_id": "q_2196", "text": "Write image to file\n\n Args:\n img (ndarray): Image array to be written.\n file_path (str): Image file path.\n params (None or list): Same as opencv's :func:`imwrite` interface.\n auto_mkdir (bool): If the parent folder of `file_path` does not exist,\n whether to create it automatically.\n\n Returns:\n bool: Successful or not."} {"_id": "q_2197", "text": "Convert a BGR image to grayscale image.\n\n Args:\n img (ndarray): The input image.\n keepdim (bool): If False (by default), then return the grayscale image\n with 2 dims, otherwise 3 dims.\n\n Returns:\n ndarray: The converted grayscale image."} {"_id": "q_2198", "text": "Convert a grayscale image to BGR image.\n\n Args:\n img (ndarray or str): The input image.\n\n Returns:\n ndarray: The converted BGR image."} {"_id": 
"q_2199", "text": "Cast elements of an iterable object into some type.\n\n Args:\n inputs (Iterable): The input object.\n dst_type (type): Destination type.\n return_type (type, optional): If specified, the output object will be\n converted to this type, otherwise an iterator.\n\n Returns:\n iterator or specified type: The converted object."} {"_id": "q_2200", "text": "Check whether it is a sequence of some type.\n\n Args:\n seq (Sequence): The sequence to be checked.\n expected_type (type): Expected type of sequence items.\n seq_type (type, optional): Expected sequence type.\n\n Returns:\n bool: Whether the sequence is valid."} {"_id": "q_2201", "text": "Slice a list into several sub lists by a list of given length.\n\n Args:\n in_list (list): The list to be sliced.\n lens(int or list): The expected length of each out list.\n\n Returns:\n list: A list of sliced list."} {"_id": "q_2202", "text": "Average latest n values or all values"} {"_id": "q_2203", "text": "Scatters tensor across multiple GPUs."} {"_id": "q_2204", "text": "Add check points in a single line.\n\n This method is suitable for running a task on a list of items. 
A timer will\n be registered when the method is called for the first time.\n\n :Example:\n\n >>> import time\n >>> import mmcv\n >>> for i in range(1, 6):\n >>> # simulate a code block\n >>> time.sleep(i)\n >>> mmcv.check_time('task1')\n 2.000\n 3.000\n 4.000\n 5.000\n\n Args:\n timer_id (str): Timer identifier."} {"_id": "q_2205", "text": "Time since the last checking.\n\n Either :func:`since_start` or :func:`since_last_check` is a checking\n operation.\n\n Returns (float): Time in seconds."} {"_id": "q_2206", "text": "Show optical flow.\n\n Args:\n flow (ndarray or str): The optical flow to be displayed.\n win_name (str): The window name.\n wait_time (int): Value of waitKey param."} {"_id": "q_2207", "text": "Convert flow map to RGB image.\n\n Args:\n flow (ndarray): Array of optical flow.\n color_wheel (ndarray or None): Color wheel used to map flow field to\n RGB colorspace. Default color wheel will be used if not specified.\n unknown_thr (str): Values above this threshold will be marked as\n unknown and thus ignored.\n\n Returns:\n ndarray: RGB image that can be visualized."} {"_id": "q_2208", "text": "Build a color wheel.\n\n Args:\n bins(list or tuple, optional): Specify the number of bins for each\n color range, corresponding to six ranges: red -> yellow,\n yellow -> green, green -> cyan, cyan -> blue, blue -> magenta,\n magenta -> red. [15, 6, 4, 11, 13, 6] is used for default\n (see Middlebury).\n\n Returns:\n ndarray: Color wheel of shape (total_bins, 3)."} {"_id": "q_2209", "text": "Scatter inputs to target gpus.\n\n The only difference from original :func:`scatter` is to add support for\n :type:`~mmcv.parallel.DataContainer`."} {"_id": "q_2210", "text": "Scatter with support for kwargs dictionary"} {"_id": "q_2211", "text": "Decide whether a particular character needs to be quoted.\n\n The 'quotetabs' flag indicates whether embedded tabs and spaces should be\n quoted. 
Note that line-ending tabs and spaces are always encoded, as per\n RFC 1521."} {"_id": "q_2212", "text": "Quote a single character."} {"_id": "q_2213", "text": "Read 'input', apply quoted-printable encoding, and write to 'output'.\n\n 'input' and 'output' are files with readline() and write() methods.\n The 'quotetabs' flag indicates whether embedded tabs and spaces should be\n quoted. Note that line-ending tabs and spaces are always encoded, as per\n RFC 1521.\n The 'header' flag indicates whether we are encoding spaces as _ as per\n RFC 1522."} {"_id": "q_2214", "text": "Get the integer value of a hexadecimal number."} {"_id": "q_2215", "text": "Encode a string using Base64.\n\n s is the string to encode. Optional altchars must be a string of at least\n length 2 (additional characters are ignored) which specifies an\n alternative alphabet for the '+' and '/' characters. This allows an\n application to e.g. generate url or filesystem safe Base64 strings.\n\n The encoded string is returned."} {"_id": "q_2216", "text": "Decode a Base64 encoded string.\n\n s is the string to decode. Optional altchars must be a string of at least\n length 2 (additional characters are ignored) which specifies the\n alternative alphabet used instead of the '+' and '/' characters.\n\n The decoded string is returned. A TypeError is raised if s is\n incorrectly padded. Characters that are neither in the normal base-64\n alphabet nor the alternative alphabet are discarded prior to the padding\n check."} {"_id": "q_2217", "text": "Encode a string using Base32.\n\n s is the string to encode. The encoded string is returned."} {"_id": "q_2218", "text": "Decode a Base32 encoded string.\n\n s is the string to decode. Optional casefold is a flag specifying whether\n a lowercase alphabet is acceptable as input. 
For security purposes, the\n default is False.\n\n RFC 3548 allows for optional mapping of the digit 0 (zero) to the letter O\n (oh), and for optional mapping of the digit 1 (one) to either the letter I\n (eye) or letter L (el). The optional argument map01 when not None,\n specifies which letter the digit 1 should be mapped to (when map01 is not\n None, the digit 0 is always mapped to the letter O). For security\n purposes the default is None, so that 0 and 1 are not allowed in the\n input.\n\n The decoded string is returned. A TypeError is raised if s were\n incorrectly padded or if there are non-alphabet characters present in the\n string."} {"_id": "q_2219", "text": "Decode a Base16 encoded string.\n\n s is the string to decode. Optional casefold is a flag specifying whether\n a lowercase alphabet is acceptable as input. For security purposes, the\n default is False.\n\n The decoded string is returned. A TypeError is raised if s is\n incorrectly padded or if there are non-alphabet characters present in the\n string."} {"_id": "q_2220", "text": "Encode a file."} {"_id": "q_2221", "text": "Returns a zero-length range located just after the end of this range."} {"_id": "q_2222", "text": "Returns a zero-based column number of the beginning of this range."} {"_id": "q_2223", "text": "Returns the line number of the beginning of this range."} {"_id": "q_2224", "text": "Returns the lines of source code containing the entirety of this range."} {"_id": "q_2225", "text": "An AST comparison function. 
Returns ``True`` if all fields in\n ``left`` are equal to fields in ``right``; if ``compare_locs`` is\n true, all locations should match as well."} {"_id": "q_2226", "text": "Convert a 32-bit or 64-bit integer created\n by float_pack into a Python float."} {"_id": "q_2227", "text": "Convert a Python float x into a 64-bit unsigned integer\n with the same byte representation."} {"_id": "q_2228", "text": "A context manager that appends ``note`` to every diagnostic processed by\n this engine."} {"_id": "q_2229", "text": "Format a list of traceback entry tuples for printing.\n\n Given a list of tuples as returned by extract_tb() or\n extract_stack(), return a list of strings ready for printing.\n Each string in the resulting list corresponds to the item with the\n same index in the argument list. Each string ends in a newline;\n the strings may contain internal newlines as well, for those items\n whose source text line is not None."} {"_id": "q_2230", "text": "Print up to 'limit' stack trace entries from the traceback 'tb'.\n\n If 'limit' is omitted or None, all entries are printed. If 'file'\n is omitted or None, the output goes to sys.stderr; otherwise\n 'file' should be an open file or file-like object with a write()\n method."} {"_id": "q_2231", "text": "Print exception up to 'limit' stack trace entries from 'tb' to 'file'.\n\n This differs from print_tb() in the following ways: (1) if\n traceback is not None, it prints a header \"Traceback (most recent\n call last):\"; (2) it prints the exception type and value after the\n stack trace; (3) if type is SyntaxError and value has the\n appropriate format, it prints the line where the syntax error\n occurred with a caret on the next line indicating the approximate\n position of the error."} {"_id": "q_2232", "text": "Format a stack trace and the exception information.\n\n The arguments have the same meaning as the corresponding arguments\n to print_exception(). 
The return value is a list of strings, each\n ending in a newline and some containing internal newlines. When\n these lines are concatenated and printed, exactly the same text is\n printed as does print_exception()."} {"_id": "q_2233", "text": "x, random=random.random -> shuffle list x in place; return None.\n\n Optional arg random is a 0-argument function returning a random\n float in [0.0, 1.0); by default, the standard random.random."} {"_id": "q_2234", "text": "Return a list of slot names for a given class.\n\n This needs to find slots defined by the class and its bases, so we\n can't simply return the __slots__ attribute. We must walk down\n the Method Resolution Order and concatenate the __slots__ of each\n class found there. (This assumes classes don't modify their\n __slots__ attribute to misrepresent their slots after the class is\n defined.)"} {"_id": "q_2235", "text": "Convert a cmp= function into a key= function"} {"_id": "q_2236", "text": "Read header lines.\n\n Read header lines up to the entirely blank line that terminates them.\n The (normally blank) line that ends the headers is skipped, but not\n included in the returned list. If a non-header line ends the headers,\n (which is an error), an attempt is made to backspace over it; it is\n never included in the returned list.\n\n The variable self.status is set to the empty string if all went well,\n otherwise it is an error message. 
The variable self.headers is a\n completely uninterpreted list of lines contained in the header (so\n printing them will reproduce the header exactly as it appears in the\n file)."} {"_id": "q_2237", "text": "Determine whether a given line is a legal header.\n\n This method should return the header name, suitably canonicalized.\n You may override this method in order to use Message parsing on tagged\n data in RFC 2822-like formats with special header formats."} {"_id": "q_2238", "text": "Get the first header line matching name.\n\n This is similar to getallmatchingheaders, but it returns only the\n first matching header (and its continuation lines)."} {"_id": "q_2239", "text": "Get all values for a header.\n\n This returns a list of values for headers given more than once; each\n value in the result list is stripped in the same way as the result of\n getheader(). If the header is not given, return an empty list."} {"_id": "q_2240", "text": "Get a list of addresses from a header.\n\n Retrieves a list of addresses from a header, where each address is a\n tuple as returned by getaddr(). Scans all named headers, so it works\n properly with multiple To: or Cc: headers for example."} {"_id": "q_2241", "text": "Parse up to the start of the next address."} {"_id": "q_2242", "text": "Get the complete domain name from an address."} {"_id": "q_2243", "text": "Parse a sequence of RFC 2822 phrases.\n\n A phrase is a sequence of words, which are in turn either RFC 2822\n atoms or quoted-strings. 
Phrases are canonicalized by squeezing all\n runs of continuous whitespace into one space."} {"_id": "q_2244", "text": "year, month -> number of days in that month in that year."} {"_id": "q_2245", "text": "year, month, day -> ordinal, considering 01-Jan-0001 as day 1."} {"_id": "q_2246", "text": "Return a new date with new values for the specified fields."} {"_id": "q_2247", "text": "Return a 3-tuple containing ISO year, week number, and weekday.\n\n The first ISO week of the year is the (Mon-Sun) week\n containing the year's first Thursday; everything else derives\n from that.\n\n The first week is 1; Monday is 1 ... Sunday is 7.\n\n ISO calendar algorithm taken from\n http://www.phys.uu.nl/~vgent/calendar/isocalendar.htm"} {"_id": "q_2248", "text": "Return the timezone name.\n\n Note that the name is 100% informational -- there's no requirement that\n it mean anything in particular. For example, \"GMT\", \"UTC\", \"-500\",\n \"-5:00\", \"EDT\", \"US/Eastern\", \"America/New York\" are all valid replies."} {"_id": "q_2249", "text": "Return a new time with new values for the specified fields."} {"_id": "q_2250", "text": "Return the time part, with same tzinfo."} {"_id": "q_2251", "text": "Return a new datetime with new values for the specified fields."} {"_id": "q_2252", "text": "Same as a + b, for a and b sequences."} {"_id": "q_2253", "text": "Return the first index of b in a."} {"_id": "q_2254", "text": "Same as a += b, for a and b sequences."} {"_id": "q_2255", "text": "Return the string obtained by replacing the leftmost\n non-overlapping occurrences of the pattern in string by the\n replacement repl. repl can be either a string or a callable;\n if a string, backslash escapes in it are processed. 
If it is\n a callable, it's passed the match object and must return\n a replacement string to be used."} {"_id": "q_2256", "text": "Split the source string by the occurrences of the pattern,\n returning a list containing the resulting substrings."} {"_id": "q_2257", "text": "Return a list of all non-overlapping matches in the string.\n\n If one or more groups are present in the pattern, return a\n list of groups; this will be a list of tuples if the pattern\n has more than one group.\n\n Empty matches are included in the result."} {"_id": "q_2258", "text": "Decode uuencoded file"} {"_id": "q_2259", "text": "Return number of `ch` characters at the start of `line`.\n\n Example:\n\n >>> _count_leading(' abc', ' ')\n 3"} {"_id": "q_2260", "text": "r\"\"\"\n Compare two sequences of lines; generate the delta as a unified diff.\n\n Unified diffs are a compact way of showing line changes and a few\n lines of context. The number of context lines is set by 'n' which\n defaults to three.\n\n By default, the diff control lines (those with ---, +++, or @@) are\n created with a trailing newline. This is helpful so that inputs\n created from file.readlines() result in diffs that are suitable for\n file.writelines() since both the inputs and outputs have trailing\n newlines.\n\n For inputs that do not have trailing newlines, set the lineterm\n argument to \"\" so that the output will be uniformly newline free.\n\n The unidiff format normally has a header for filenames and modification\n times. Any or all of these may be specified using strings for\n 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.\n The modification times are normally expressed in the ISO 8601 format.\n\n Example:\n\n >>> for line in unified_diff('one two three four'.split(),\n ... 'zero one tree four'.split(), 'Original', 'Current',\n ... '2005-01-26 23:30:50', '2010-04-02 10:20:52',\n ... lineterm=''):\n ... 
print line # doctest: +NORMALIZE_WHITESPACE\n --- Original 2005-01-26 23:30:50\n +++ Current 2010-04-02 10:20:52\n @@ -1,4 +1,4 @@\n +zero\n one\n -two\n -three\n +tree\n four"} {"_id": "q_2261", "text": "r\"\"\"\n Compare two sequences of lines; generate the delta as a context diff.\n\n Context diffs are a compact way of showing line changes and a few\n lines of context. The number of context lines is set by 'n' which\n defaults to three.\n\n By default, the diff control lines (those with *** or ---) are\n created with a trailing newline. This is helpful so that inputs\n created from file.readlines() result in diffs that are suitable for\n file.writelines() since both the inputs and outputs have trailing\n newlines.\n\n For inputs that do not have trailing newlines, set the lineterm\n argument to \"\" so that the output will be uniformly newline free.\n\n The context diff format normally has a header for filenames and\n modification times. Any or all of these may be specified using\n strings for 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.\n The modification times are normally expressed in the ISO 8601 format.\n If not specified, the strings default to blanks.\n\n Example:\n\n >>> print ''.join(context_diff('one\\ntwo\\nthree\\nfour\\n'.splitlines(1),\n ... 'zero\\none\\ntree\\nfour\\n'.splitlines(1), 'Original', 'Current')),\n *** Original\n --- Current\n ***************\n *** 1,4 ****\n one\n ! two\n ! three\n four\n --- 1,4 ----\n + zero\n one\n ! tree\n four"} {"_id": "q_2262", "text": "Make a new Match object from a sequence or iterable"} {"_id": "q_2263", "text": "Return list of triples describing matching subsequences.\n\n Each triple is of the form (i, j, n), and means that\n a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in\n i and in j. New in Python 2.5, it's also guaranteed that if\n (i, j, n) and (i', j', n') are adjacent triples in the list, and\n the second is not the last triple in the list, then i+n != i' or\n j+n != j'. 
IOW, adjacent triples never describe adjacent equal\n blocks.\n\n The last triple is a dummy, (len(a), len(b), 0), and is the only\n triple with n==0.\n\n >>> s = SequenceMatcher(None, \"abxcd\", \"abcd\")\n >>> s.get_matching_blocks()\n [Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)]"} {"_id": "q_2264", "text": "r\"\"\"\n Compare two sequences of lines; generate the resulting delta.\n\n Each sequence must contain individual single-line strings ending with\n newlines. Such sequences can be obtained from the `readlines()` method\n of file-like objects. The delta generated also consists of newline-\n terminated strings, ready to be printed as-is via the writeline()\n method of a file-like object.\n\n Example:\n\n >>> print ''.join(Differ().compare('one\\ntwo\\nthree\\n'.splitlines(1),\n ... 'ore\\ntree\\nemu\\n'.splitlines(1))),\n - one\n ? ^\n + ore\n ? ^\n - two\n - three\n ? -\n + tree\n + emu"} {"_id": "q_2265", "text": "r\"\"\"\n Format \"?\" output and deal with leading tabs.\n\n Example:\n\n >>> d = Differ()\n >>> results = d._qformat('\\tabcDefghiJkl\\n', '\\tabcdefGhijkl\\n',\n ... ' ^ ^ ^ ', ' ^ ^ ^ ')\n >>> for line in results: print repr(line)\n ...\n '- \\tabcDefghiJkl\\n'\n '? \\t ^ ^ ^\\n'\n '+ \\tabcdefGhijkl\\n'\n '? \\t ^ ^ ^\\n'"} {"_id": "q_2266", "text": "Returns HTML file of side by side comparison with change highlights\n\n Arguments:\n fromlines -- list of \"from\" lines\n tolines -- list of \"to\" lines\n fromdesc -- \"from\" file column header string\n todesc -- \"to\" file column header string\n context -- set to True for contextual differences (defaults to False\n which shows full differences).\n numlines -- number of context lines. 
When context is set True,\n controls number of lines displayed before and after the change.\n When context is False, controls the number of lines to place\n the \"next\" link anchors before the next change (so click of\n \"next\" link jumps to just before the change)."} {"_id": "q_2267", "text": "Builds list of text lines by splitting text lines at wrap point\n\n This function will determine if the input text line needs to be\n wrapped (split) into separate lines. If so, the first wrap point\n will be determined and the first line appended to the output\n text line list. This function is used recursively to handle\n the second part of the split line to further split it."} {"_id": "q_2268", "text": "Collects mdiff output into separate lists\n\n Before storing the mdiff from/to data into a list, it is converted\n into a single line of text with HTML markup."} {"_id": "q_2269", "text": "Create unique anchor prefixes"} {"_id": "q_2270", "text": "Returns HTML table of side by side comparison with change highlights\n\n Arguments:\n fromlines -- list of \"from\" lines\n tolines -- list of \"to\" lines\n fromdesc -- \"from\" file column header string\n todesc -- \"to\" file column header string\n context -- set to True for contextual differences (defaults to False\n which shows full differences).\n numlines -- number of context lines. 
When context is set True,\n controls number of lines displayed before and after the change.\n When context is False, controls the number of lines to place\n the \"next\" link anchors before the next change (so click of\n \"next\" link jumps to just before the change)."} {"_id": "q_2271", "text": "Create and return a benchmark that runs work_func p times in parallel."} {"_id": "q_2272", "text": "List directory contents, using cache."} {"_id": "q_2273", "text": "Format a Python object into a pretty-printed representation."} {"_id": "q_2274", "text": "A decorator returning a function that first runs ``inner_rule`` and then, if its\n return value is not None, maps that value using ``mapper``.\n\n If the value being mapped is a tuple, it is expanded into multiple arguments.\n\n Similar to attaching semantic actions to rules in traditional parser generators."} {"_id": "q_2275", "text": "A rule that accepts a sequence of tokens satisfying ``rules`` and returns a tuple\n containing their return values, or None if the first rule was not satisfied."} {"_id": "q_2276", "text": "A rule that accepts token of kind ``newline`` and returns an empty list."} {"_id": "q_2277", "text": "Join a base URL and a possibly relative URL to form an absolute\n interpretation of the latter."} {"_id": "q_2278", "text": "Removes any existing fragment from URL.\n\n Returns a tuple of the defragmented URL and the fragment. 
If\n the URL contained no fragments, the second element is the\n empty string."} {"_id": "q_2279", "text": "Return a new SplitResult object replacing specified fields with new values"} {"_id": "q_2280", "text": "Test whether a path is a regular file"} {"_id": "q_2281", "text": "Return true if the pathname refers to an existing directory."} {"_id": "q_2282", "text": "Given a list of pathnames, returns the longest common leading component"} {"_id": "q_2283", "text": "Wrap a single paragraph of text, returning a list of wrapped lines.\n\n Reformat the single paragraph in 'text' so it fits in lines of no\n more than 'width' columns, and return a list of wrapped lines. By\n default, tabs in 'text' are expanded with string.expandtabs(), and\n all other whitespace characters (including newline) are converted to\n space. See TextWrapper class for available keyword args to customize\n wrapping behaviour."} {"_id": "q_2284", "text": "Fill a single paragraph of text, returning a new string.\n\n Reformat the single paragraph in 'text' to fit in lines of no more\n than 'width' columns, and return a new string containing the entire\n wrapped paragraph. As with wrap(), tabs are expanded and other\n whitespace characters converted to space. See TextWrapper class for\n available keyword args to customize wrapping behaviour."} {"_id": "q_2285", "text": "Remove any common leading whitespace from every line in `text`.\n\n This can be used to make triple-quoted strings line up with the left\n edge of the display, while still presenting them in the source code\n in indented form.\n\n Note that tabs and spaces are both treated as whitespace, but they\n are not equal: the lines \" hello\" and \"\\\\thello\" are\n considered to have no common leading whitespace. 
(This behaviour is\n new in Python 2.5; older versions of this module incorrectly\n expanded tabs before searching for common leading whitespace.)"} {"_id": "q_2286", "text": "Transform a list of characters into a list of longs."} {"_id": "q_2287", "text": "Initialize the message-digest and set all fields to zero."} {"_id": "q_2288", "text": "Shallow copy operation on arbitrary Python objects.\n\n See the module's __doc__ string for more info."} {"_id": "q_2289", "text": "Deep copy operation on arbitrary Python objects.\n\n See the module's __doc__ string for more info."} {"_id": "q_2290", "text": "Keeps a reference to the object x in the memo.\n\n Because we remember objects by their id, we have\n to assure that possibly temporary objects are kept\n alive by referencing them.\n We store a reference at the id of the memo, which should\n normally not be used unless someone tries to deepcopy\n the memo itself..."} {"_id": "q_2291", "text": "Issue a deprecation warning for Python 3.x related changes.\n\n Warnings are omitted unless Python is started with the -3 option."} {"_id": "q_2292", "text": "Hook to write a warning to a file; replace if you like."} {"_id": "q_2293", "text": "Issue a warning, or maybe ignore it or raise an exception."} {"_id": "q_2294", "text": "Compute the hash value of a set.\n\n Note that we don't define __hash__: not all sets are hashable.\n But if you define a hashable set type, its __hash__ should\n call this function.\n\n This must be compatible __eq__.\n\n All sets ought to compare equal if they contain the same\n elements, regardless of how they are implemented, and\n regardless of the order of the elements; so there's not much\n freedom for __eq__ or __hash__. We match the algorithm used\n by the built-in frozenset type."} {"_id": "q_2295", "text": "Remove an element. If not a member, raise a KeyError."} {"_id": "q_2296", "text": "Return the popped value. 
Raise KeyError if empty."} {"_id": "q_2297", "text": "Release a lock, decrementing the recursion level.\n\n If after the decrement it is zero, reset the lock to unlocked (not owned\n by any thread), and if any other threads are blocked waiting for the\n lock to become unlocked, allow exactly one of them to proceed. If after\n the decrement the recursion level is still nonzero, the lock remains\n locked and owned by the calling thread.\n\n Only call this method when the calling thread owns the lock. A\n RuntimeError is raised if this method is called when the lock is\n unlocked.\n\n There is no return value."} {"_id": "q_2298", "text": "Wait until notified or until a timeout occurs.\n\n If the calling thread has not acquired the lock when this method is\n called, a RuntimeError is raised.\n\n This method releases the underlying lock, and then blocks until it is\n awakened by a notify() or notifyAll() call for the same condition\n variable in another thread, or until the optional timeout occurs. Once\n awakened or timed out, it re-acquires the lock and returns.\n\n When the timeout argument is present and not None, it should be a\n floating point number specifying a timeout for the operation in seconds\n (or fractions thereof).\n\n When the underlying lock is an RLock, it is not released using its\n release() method, since this may not actually unlock the lock when it\n was acquired multiple times recursively. Instead, an internal interface\n of the RLock class is used, which really unlocks it even when it has\n been recursively acquired several times. 
Another internal interface is\n then used to restore the recursion level when the lock is reacquired."} {"_id": "q_2299", "text": "Wake up one or more threads waiting on this condition, if any.\n\n If the calling thread has not acquired the lock when this method is\n called, a RuntimeError is raised.\n\n This method wakes up at most n of the threads waiting for the condition\n variable; it is a no-op if no threads are waiting."} {"_id": "q_2300", "text": "Acquire a semaphore, decrementing the internal counter by one.\n\n When invoked without arguments: if the internal counter is larger than\n zero on entry, decrement it by one and return immediately. If it is zero\n on entry, block, waiting until some other thread has called release() to\n make it larger than zero. This is done with proper interlocking so that\n if multiple acquire() calls are blocked, release() will wake exactly one\n of them up. The implementation may pick one at random, so the order in\n which blocked threads are awakened should not be relied on. There is no\n return value in this case.\n\n When invoked with blocking set to true, do the same thing as when called\n without arguments, and return true.\n\n When invoked with blocking set to false, do not block. If a call without\n an argument would block, return false immediately; otherwise, do the\n same thing as when called without arguments, and return true."} {"_id": "q_2301", "text": "Block until the internal flag is true.\n\n If the internal flag is true on entry, return immediately. 
Otherwise,\n block until another thread calls set() to set the flag to true, or until\n the optional timeout occurs.\n\n When the timeout argument is present and not None, it should be a\n floating point number specifying a timeout for the operation in seconds\n (or fractions thereof).\n\n This method returns the internal flag on exit, so it will always return\n True except if a timeout is given and the operation times out."} {"_id": "q_2302", "text": "Start the thread's activity.\n\n It must be called at most once per thread object. It arranges for the\n object's run() method to be invoked in a separate thread of control.\n\n This method will raise a RuntimeError if called more than once on the\n same thread object."} {"_id": "q_2303", "text": "Method representing the thread's activity.\n\n You may override this method in a subclass. The standard run() method\n invokes the callable object passed to the object's constructor as the\n target argument, if any, with sequential and keyword arguments taken\n from the args and kwargs arguments, respectively."} {"_id": "q_2304", "text": "Wait until the thread terminates.\n\n This blocks the calling thread until the thread whose join() method is\n called terminates -- either normally or through an unhandled exception\n or until the optional timeout occurs.\n\n When the timeout argument is present and not None, it should be a\n floating point number specifying a timeout for the operation in seconds\n (or fractions thereof). As join() always returns None, you must call\n isAlive() after join() to decide whether a timeout happened -- if the\n thread is still alive, the join() call timed out.\n\n When the timeout argument is not present or None, the operation will\n block until the thread terminates.\n\n A thread can be join()ed many times.\n\n join() raises a RuntimeError if an attempt is made to join the current\n thread as that would cause a deadlock. 
It is also an error to join() a\n thread before it has been started and attempts to do so raises the same\n exception."} {"_id": "q_2305", "text": "quotetabs=True means that tab and space characters are always\n quoted.\n istext=False means that \\r and \\n are treated as regular characters\n header=True encodes space characters with '_' and requires\n real '_' characters to be quoted."} {"_id": "q_2306", "text": "Return a comma-separated list of option strings & metavariables."} {"_id": "q_2307", "text": "Update the option values from an arbitrary dictionary, but only\n use keys from dict that already have a corresponding attribute\n in self. Any keys in dict without a corresponding attribute\n are silently ignored."} {"_id": "q_2308", "text": "Insert item x in list a, and keep it sorted assuming a is sorted.\n\n If x is already in a, insert it to the right of the rightmost x.\n\n Optional args lo (default 0) and hi (default len(a)) bound the\n slice of a to be searched."} {"_id": "q_2309", "text": "Lock a mutex, call the function with supplied argument\n when it is acquired. If the mutex is already locked, place\n function and argument in the queue."} {"_id": "q_2310", "text": "Unlock a mutex. If the queue is not empty, call the next\n function with its argument."} {"_id": "q_2311", "text": "Return a clone object.\n\n Return a copy ('clone') of the md5 object. 
This can be used\n to efficiently compute the digests of strings that share\n a common initial substring."} {"_id": "q_2312", "text": "Return the string obtained by replacing the leftmost non-overlapping\n occurrences of pattern in string by the replacement repl."} {"_id": "q_2313", "text": "Split string by the occurrences of pattern."} {"_id": "q_2314", "text": "Creates a tuple of index pairs representing matched groups."} {"_id": "q_2315", "text": "Skips forward in a string as fast as possible using information from\n an optimization info block."} {"_id": "q_2316", "text": "Creates a new child context of this context and pushes it on the\n stack. pattern_offset is the offset off the current code position to\n start interpreting from."} {"_id": "q_2317", "text": "Checks whether a character matches set of arbitrary length. Assumes\n the code pointer is at the first member of the set."} {"_id": "q_2318", "text": "Remove the exponent by changing intpart and fraction."} {"_id": "q_2319", "text": "Return the subset of the list NAMES that match PAT"} {"_id": "q_2320", "text": "Put an item into the queue.\n\n If optional args 'block' is true and 'timeout' is None (the default),\n block if necessary until a free slot is available. 
If 'timeout' is\n a non-negative number, it blocks at most 'timeout' seconds and raises\n the Full exception if no free slot was available within that time.\n Otherwise ('block' is false), put an item on the queue if a free slot\n is immediately available, else raise the Full exception ('timeout'\n is ignored in that case)."} {"_id": "q_2321", "text": "Combine multiple context managers into a single nested context manager.\n\n This function has been deprecated in favour of the multiple manager form\n of the with statement.\n\n The one advantage of this function over the multiple manager form of the\n with statement is that argument unpacking allows it to be\n used with a variable number of context managers as follows:\n\n with nested(*managers):\n do_something()"} {"_id": "q_2322", "text": "Read and decodes JSON response."} {"_id": "q_2323", "text": "Process coroutine callback function"} {"_id": "q_2324", "text": "For crawling multiple urls"} {"_id": "q_2325", "text": "Init a Request class for crawling html"} {"_id": "q_2326", "text": "Actually start crawling."} {"_id": "q_2327", "text": "Returns the TensorFlow variables used by the baseline.\n\n Returns:\n List of variables"} {"_id": "q_2328", "text": "Creates a baseline from a specification dict."} {"_id": "q_2329", "text": "Iteration loop body of the conjugate gradient algorithm.\n\n Args:\n x: Current solution estimate $x_t$.\n iteration: Current iteration counter $t$.\n conjugate: Current conjugate $c_t$.\n residual: Current residual $r_t$.\n squared_residual: Current squared residual $r_t^2$.\n\n Returns:\n Updated arguments for next iteration."} {"_id": "q_2330", "text": "Returns the target optimizer arguments including the time, the list of variables to \n optimize, and various functions which the optimizer might require to perform an update \n step.\n\n Returns:\n Target optimizer arguments as dict."} {"_id": "q_2331", "text": "Creates an environment from a specification dict."} {"_id": "q_2332", "text": 
"Pass through rest role."} {"_id": "q_2333", "text": "Rendering table element. Wrap header and body in it.\n\n :param header: header part of the table.\n :param body: body part of the table."} {"_id": "q_2334", "text": "Worker Agent generator, receives an Agent class and creates a Worker Agent class that inherits from that Agent."} {"_id": "q_2335", "text": "Returns x, y from flat_position integer.\n\n Args:\n flat_position: flattened position integer\n\n Returns: x, y"} {"_id": "q_2336", "text": "Wait until there is a state."} {"_id": "q_2337", "text": "Creates an optimizer from a specification dict."} {"_id": "q_2338", "text": "Registers the saver operations to the graph in context."} {"_id": "q_2339", "text": "Saves this component's managed variables.\n\n Args:\n sess: The session for which to save the managed variables.\n save_path: The path to save data to.\n timestep: Optional, the timestep to append to the file name.\n\n Returns:\n Checkpoint path where the model was saved."} {"_id": "q_2340", "text": "Restores the values of the managed variables from disk location.\n\n Args:\n sess: The session for which to save the managed variables.\n save_path: The path used to save the data to."} {"_id": "q_2341", "text": "Process state.\n\n Args:\n tensor: tensor to process\n\n Returns: processed state"} {"_id": "q_2342", "text": "Shape of preprocessed state given original shape.\n\n Args:\n shape: original state shape\n\n Returns: processed state shape"} {"_id": "q_2343", "text": "Makes sure our optimizer is wrapped into the global_optimizer meta. 
This is only relevant for distributed RL."} {"_id": "q_2344", "text": "Creates and returns the TensorFlow operations for calculating the sequence of discounted cumulative rewards\n for a given sequence of single rewards.\n\n Example:\n single rewards = 2.0 1.0 0.0 0.5 1.0 -1.0\n terminal = False, False, False, False True False\n gamma = 0.95\n final_reward = 100.0 (only matters for last episode (r=-1.0) as this episode has no terminal signal)\n horizon=3\n output = 2.95 1.45 1.38 1.45 1.0 94.0\n\n Args:\n terminal: Tensor (bool) holding the is-terminal sequence. This sequence may contain more than one\n True value. If its very last element is False (not terminating), the given `final_reward` value\n is assumed to follow the last value in the single rewards sequence (see below).\n reward: Tensor (float) holding the sequence of single rewards. If the last element of `terminal` is False,\n an assumed last reward of the value of `final_reward` will be used.\n discount (float): The discount factor (gamma). By default, take the Model's discount factor.\n final_reward (float): Reward value to use if last episode in sequence does not terminate (terminal sequence\n ends with False). This value will be ignored if horizon == 1 or discount == 0.0.\n horizon (int): The length of the horizon (e.g. for n-step cumulative rewards in continuous tasks\n without terminal signals). Use 0 (default) for an infinite horizon. 
Note that horizon=1 leads to the\n exact same results as a discount factor of 0.0.\n\n Returns:\n Discounted cumulative reward tensor with the same shape as `reward`."} {"_id": "q_2345", "text": "Creates the TensorFlow operations for calculating the loss per batch instance.\n\n Args:\n states: Dict of state tensors.\n internals: Dict of prior internal state tensors.\n actions: Dict of action tensors.\n terminal: Terminal boolean tensor.\n reward: Reward tensor.\n next_states: Dict of successor state tensors.\n next_internals: List of posterior internal state tensors.\n update: Boolean tensor indicating whether this call happens during an update.\n reference: Optional reference tensor(s), in case of a comparative loss.\n\n Returns:\n Loss per instance tensor."} {"_id": "q_2346", "text": "Creates the TensorFlow operations for calculating the full loss of a batch.\n\n Args:\n states: Dict of state tensors.\n internals: List of prior internal state tensors.\n actions: Dict of action tensors.\n terminal: Terminal boolean tensor.\n reward: Reward tensor.\n next_states: Dict of successor state tensors.\n next_internals: List of posterior internal state tensors.\n update: Boolean tensor indicating whether this call happens during an update.\n reference: Optional reference tensor(s), in case of a comparative loss.\n\n Returns:\n Loss tensor."} {"_id": "q_2347", "text": "Creates the TensorFlow operations for performing an optimization update step based\n on the given input states and actions batch.\n\n Args:\n states: Dict of state tensors.\n internals: List of prior internal state tensors.\n actions: Dict of action tensors.\n terminal: Terminal boolean tensor.\n reward: Reward tensor.\n next_states: Dict of successor state tensors.\n next_internals: List of posterior internal state tensors.\n\n Returns:\n The optimization operation."} {"_id": "q_2348", "text": "Creates a distribution from a specification dict."} {"_id": "q_2349", "text": "Utility method for unbuffered 
observing where each tuple is inserted into TensorFlow via\n a single session call, thus avoiding race conditions in multi-threaded mode.\n\n Observe full experience tuple from the environment to learn from. Optionally pre-processes rewards.\n Child classes should call super to get the processed reward\n EX: terminal, reward = super()...\n\n Args:\n states (any): One state (usually a value tuple) or dict of states if multiple states are expected.\n actions (any): One action (usually a value tuple) or dict of states if multiple actions are expected.\n internals (any): Internal list.\n terminal (bool): boolean indicating if the episode terminated after the observation.\n reward (float): scalar reward that resulted from executing the action."} {"_id": "q_2350", "text": "Returns a named tensor if available.\n\n Returns:\n valid: True if named tensor found, False otherwise\n tensor: If valid, will be a tensor, otherwise None"} {"_id": "q_2351", "text": "Stores a transition in replay memory.\n\n If the memory is full, the oldest entry is replaced."} {"_id": "q_2352", "text": "Change the priority of a leaf node"} {"_id": "q_2353", "text": "Change the priority of a leaf node."} {"_id": "q_2354", "text": "Similar to position++."} {"_id": "q_2355", "text": "Sample random element with priority greater than p."} {"_id": "q_2356", "text": "Sample minibatch of size batch_size."} {"_id": "q_2357", "text": "Computes priorities according to loss.\n\n Args:\n loss_per_instance:"} {"_id": "q_2358", "text": "Ends our server tcp connection."} {"_id": "q_2359", "text": "Determines whether action is available.\n That is, executing it would change the state."} {"_id": "q_2360", "text": "Determines whether action 'Left' is available."} {"_id": "q_2361", "text": "Execute action, add a new tile, update the score & return the reward."} {"_id": "q_2362", "text": "Executes action 'Left'."} {"_id": "q_2363", "text": "Adds a random tile to the grid. 
Assumes that it has empty fields."} {"_id": "q_2364", "text": "Creates the tf.train.Saver object and stores it in self.saver."} {"_id": "q_2365", "text": "Creates and returns a list of hooks to use in a session. Populates self.saver_directory.\n\n Returns: List of hooks to use in a session."} {"_id": "q_2366", "text": "Returns the tf op to fetch when unbuffered observations are passed in.\n\n Args:\n states (any): One state (usually a value tuple) or dict of states if multiple states are expected.\n actions (any): One action (usually a value tuple) or dict of states if multiple actions are expected.\n internals (any): Internal list.\n terminal (bool): boolean indicating if the episode terminated after the observation.\n reward (float): scalar reward that resulted from executing the action.\n\n Returns: Tf op to fetch when `observe()` is called."} {"_id": "q_2367", "text": "Returns the list of all of the components this model consists of that can be individually saved and restored.\n For instance the network or distribution.\n\n Returns:\n List of util.SavableComponent"} {"_id": "q_2368", "text": "Saves a component of this model to the designated location.\n\n Args:\n component_name: The component to save.\n save_path: The location to save to.\n Returns:\n Checkpoint path where the component was saved."} {"_id": "q_2369", "text": "Restores a component's parameters from a save location.\n\n Args:\n component_name: The component to restore.\n save_path: The save location."} {"_id": "q_2370", "text": "Return the state space. 
Might include subdicts if multiple states are\n available simultaneously.\n\n Returns: dict of state properties (shape and type)."} {"_id": "q_2371", "text": "Sanity checks an actions dict, used to define the action space for an MDP.\n Throws an error or warns if mismatches are found.\n\n Args:\n actions_spec (Union[None,dict]): The spec-dict to check (or None).\n\n Returns: Tuple of 1) the action space desc and 2) whether there is only one component in the action space."} {"_id": "q_2372", "text": "Handles the behaviour of visible bolts flying toward Marauders."} {"_id": "q_2373", "text": "Handles the behaviour of visible bolts flying toward the player."} {"_id": "q_2374", "text": "Launches a new bolt from a random Marauder."} {"_id": "q_2375", "text": "Creates and stores Network and Distribution objects.\n Generates and stores all template functions."} {"_id": "q_2376", "text": "Creates and returns the Distribution objects based on self.distributions_spec.\n\n Returns: Dict of distributions according to self.distributions_spec."} {"_id": "q_2377", "text": "Creates a memory from a specification dict."} {"_id": "q_2378", "text": "Initialization step preparing the arguments for the first iteration of the loop body.\n\n Args:\n x_init: Initial solution guess $x_0$.\n base_value: Value $f(x')$ at $x = x'$.\n target_value: Value $f(x_0)$ at $x = x_0$.\n estimated_improvement: Estimated value at $x = x_0$, $f(x')$ if None.\n\n Returns:\n Initial arguments for tf_step."} {"_id": "q_2379", "text": "Iteration loop body of the line search algorithm.\n\n Args:\n x: Current solution estimate $x_t$.\n iteration: Current iteration counter $t$.\n deltas: Current difference $x_t - x'$.\n improvement: Current improvement $(f(x_t) - f(x')) / v'$.\n last_improvement: Last improvement $(f(x_{t-1}) - f(x')) / v'$.\n estimated_improvement: Current estimated value $v'$.\n\n Returns:\n Updated arguments for next iteration."} {"_id": "q_2380", "text": "Render markdown formatted text to 
html.\n\n :param text: markdown formatted text content.\n :param escape: if set to False, all html tags will not be escaped.\n :param use_xhtml: output with xhtml tags.\n :param hard_wrap: if set to True, it will use the GFM line breaks feature.\n :param parse_block_html: parse text only in block level html.\n :param parse_inline_html: parse text only in inline level html."} {"_id": "q_2381", "text": "Parse setext heading."} {"_id": "q_2382", "text": "Grammar for hard wrap linebreak. You don't need to add two\n spaces at the end of a line."} {"_id": "q_2383", "text": "Rendering block level code. ``pre > code``.\n\n :param code: text content of the code block.\n :param lang: language of the given code."} {"_id": "q_2384", "text": "Rendering block level pure html content.\n\n :param html: text content of the html snippet."} {"_id": "q_2385", "text": "Rendering the ref anchor of a footnote.\n\n :param key: identity key for the footnote.\n :param index: the index count of current footnote."} {"_id": "q_2386", "text": "Rendering a footnote item.\n\n :param key: identity key for the footnote.\n :param text: text content of the footnote."} {"_id": "q_2387", "text": "Convert MetaParams into TF Summary Format and create summary_op.\n\n Returns:\n Merged TF Op for TEXT summary elements, should only be executed once to reduce data duplication."} {"_id": "q_2388", "text": "Creates the TensorFlow operations for calculating the baseline loss of a batch.\n\n Args:\n states: Dict of state tensors.\n internals: List of prior internal state tensors.\n reward: Reward tensor.\n update: Boolean tensor indicating whether this call happens during an update.\n reference: Optional reference tensor(s), in case of a comparative loss.\n\n Returns:\n Loss tensor."} {"_id": "q_2389", "text": "Creates the TensorFlow operations for performing an optimization step on the given variables, including\n actually changing the values of the variables.\n\n Args:\n time: Time tensor. 
Not used for this optimizer.\n variables: List of variables to optimize.\n **kwargs: \n fn_loss : loss function tensor to differentiate.\n\n Returns:\n List of delta tensors corresponding to the updates for each optimized variable."} {"_id": "q_2390", "text": "Constructs the extra Replay memory."} {"_id": "q_2391", "text": "Extends the q-model loss via the dqfd large-margin loss."} {"_id": "q_2392", "text": "Combines Q-loss and demo loss."} {"_id": "q_2393", "text": "Stores demonstrations in the demo memory."} {"_id": "q_2394", "text": "Performs a demonstration update by calling the demo optimization operation.\n Note that the batch data does not have to be fetched from the demo memory as this is now part of\n the TensorFlow operation of the demo update."} {"_id": "q_2395", "text": "Ensures tasks have an action key and strings are converted to python objects"} {"_id": "q_2396", "text": "Parses yaml as ansible.utils.parse_yaml but with linenumbers.\n\n The line numbers are stored in each node's LINE_NUMBER_KEY key."} {"_id": "q_2397", "text": "Add additional requirements from setup.cfg to file metadata_path"} {"_id": "q_2398", "text": "Convert an .egg-info directory into a .dist-info directory"} {"_id": "q_2399", "text": "Returns a message that includes a set of suggested actions and optional text.\n\n :Example:\n message = MessageFactory.suggested_actions([CardAction(title='a', type=ActionTypes.im_back, value='a'),\n CardAction(title='b', type=ActionTypes.im_back, value='b'),\n CardAction(title='c', type=ActionTypes.im_back, value='c')], 'Choose a color')\n await context.send_activity(message)\n\n :param actions:\n :param text:\n :param speak:\n :param input_hint:\n :return:"} {"_id": "q_2400", "text": "Returns a message that will display a single image or video to a user.\n\n :Example:\n message = MessageFactory.content_url('https://example.com/hawaii.jpg', 'image/jpeg',\n 'Hawaii Trip', 'A photo from our family vacation.')\n await 
context.send_activity(message)\n\n :param url:\n :param content_type:\n :param name:\n :param text:\n :param speak:\n :param input_hint:\n :return:"} {"_id": "q_2401", "text": "Read storeitems from storage.\n\n :param keys:\n :return dict:"} {"_id": "q_2402", "text": "Save storeitems to storage.\n\n :param changes:\n :return:"} {"_id": "q_2403", "text": "Return the sanitized key.\n\n Replace characters that are not allowed in keys in Cosmos.\n\n :param key:\n :return str:"} {"_id": "q_2404", "text": "Call the get or create methods."} {"_id": "q_2405", "text": "Return the database link.\n\n Check if the database exists or create the db.\n\n :param doc_client:\n :param id:\n :return str:"} {"_id": "q_2406", "text": "Fills the event properties and metrics for the QnaMessage event for telemetry.\n\n :return: A tuple of event data properties and metrics that will be sent to the BotTelemetryClient.track_event() method for the QnAMessage event. The properties and metrics returned the standard properties logged with any properties passed from the get_answers() method.\n\n :rtype: EventData"} {"_id": "q_2407", "text": "Returns the conversation reference for an activity. This can be saved as a plain old JSON\n object and then later used to message the user proactively.\n\n Usage Example:\n reference = TurnContext.get_conversation_reference(context.request)\n :param activity:\n :return:"} {"_id": "q_2408", "text": "Give the waterfall step a unique name"} {"_id": "q_2409", "text": "Determine if a number of Suggested Actions are supported by a Channel.\n\n Args:\n channel_id (str): The Channel to check the if Suggested Actions are supported in.\n button_cnt (int, optional): Defaults to 100. 
The number of Suggested Actions to check for the Channel.\n\n Returns:\n bool: True if the Channel supports the button_cnt total Suggested Actions, False if the Channel does not support that number of Suggested Actions."} {"_id": "q_2410", "text": "Determines if a given Auth header is from the Bot Framework Emulator\n\n :param auth_header: Bearer Token, in the 'Bearer [Long String]' Format.\n :type auth_header: str\n\n :return: True, if the token was issued by the Emulator. Otherwise, false."} {"_id": "q_2411", "text": "Returns an attachment for a hero card. Will raise a TypeError if 'card' argument is not a HeroCard.\n\n Hero cards tend to have one dominant full width image and the cards text & buttons can\n usually be found below the image.\n :return:"} {"_id": "q_2412", "text": "Return bool, True if succeed otherwise False."} {"_id": "q_2413", "text": "Reset to the default text color on console window.\n Return bool, True if succeed otherwise False."} {"_id": "q_2414", "text": "WindowFromPoint from Win32.\n Return int, a native window handle."} {"_id": "q_2415", "text": "keybd_event from Win32."} {"_id": "q_2416", "text": "PostMessage from Win32.\n Return bool, True if succeed otherwise False."} {"_id": "q_2417", "text": "SendMessage from Win32.\n Return int, the return value specifies the result of the message processing;\n it depends on the message sent."} {"_id": "q_2418", "text": "GetConsoleTitle from Win32.\n Return str."} {"_id": "q_2419", "text": "Check if desktop is locked.\n Return bool.\n Desktop is locked if press Win+L, Ctrl+Alt+Del or in remote desktop mode."} {"_id": "q_2420", "text": "Create Win32 struct `INPUT` for `SendInput`.\n Return `INPUT`."} {"_id": "q_2421", "text": "Create Win32 struct `KEYBDINPUT` for `SendInput`."} {"_id": "q_2422", "text": "Create Win32 struct `HARDWAREINPUT` for `SendInput`."} {"_id": "q_2423", "text": "Call IUIAutomation ElementFromPoint x,y. 
May return None if mouse is over cmd's title bar icon.\n Return `Control` subclass or None."} {"_id": "q_2424", "text": "Get a native handle from point x,y and call IUIAutomation.ElementFromHandle.\n Return `Control` subclass."} {"_id": "q_2425", "text": "Delete log file."} {"_id": "q_2426", "text": "Return `ctypes.Array`, an iterable array of int values in argb."} {"_id": "q_2427", "text": "Return list, a list of `Control` subclasses."} {"_id": "q_2428", "text": "Call native SetWindowText if control has a valid native handle."} {"_id": "q_2429", "text": "Determine whether current control is top level."} {"_id": "q_2430", "text": "Get the top level control which current control lays.\n If current control is top level, return self.\n If current control is root control, return None.\n Return `PaneControl` or `WindowControl` or None."} {"_id": "q_2431", "text": "Set top level window maximize."} {"_id": "q_2432", "text": "Move window to screen center."} {"_id": "q_2433", "text": "Set top level window active."} {"_id": "q_2434", "text": "For a composite instruction, reverse the order of sub-gates.\n\n This is done by recursively mirroring all sub-instructions.\n It does not invert any gate.\n\n Returns:\n Instruction: a fresh gate with sub-gates reversed"} {"_id": "q_2435", "text": "Invert this instruction.\n\n If the instruction is composite (i.e. has a definition),\n then its definition will be recursively inverted.\n\n Special instructions inheriting from Instruction can\n implement their own inverse (e.g. 
T and Tdg, Barrier, etc.)\n\n Returns:\n Instruction: a fresh instruction for the inverse\n\n Raises:\n QiskitError: if the instruction is not composite\n and an inverse has not been implemented for it."} {"_id": "q_2436", "text": "Add classical control on register classical and value val."} {"_id": "q_2437", "text": "Run all the passes on a QuantumCircuit\n\n Args:\n circuit (QuantumCircuit): circuit to transform via all the registered passes\n\n Returns:\n QuantumCircuit: Transformed circuit."} {"_id": "q_2438", "text": "Do a pass and its \"requires\".\n\n Args:\n pass_ (BasePass): Pass to do.\n dag (DAGCircuit): The dag on which the pass is ran.\n options (dict): PassManager options.\n Returns:\n DAGCircuit: The transformed dag in case of a transformation pass.\n The same input dag in case of an analysis pass.\n Raises:\n TranspilerError: If the pass is not a proper pass instance."} {"_id": "q_2439", "text": "Returns a list structure of the appended passes and its options.\n\n Returns (list): The appended passes."} {"_id": "q_2440", "text": "Fetches the passes added to this flow controller.\n\n Returns (dict): {'options': self.options, 'passes': [passes], 'type': type(self)}"} {"_id": "q_2441", "text": "Constructs a flow controller based on the partially evaluated controller arguments.\n\n Args:\n passes (list[BasePass]): passes to add to the flow controller.\n options (dict): PassManager options.\n **partial_controller (dict): Partially evaluated controller arguments in the form\n `{name:partial}`\n\n Raises:\n TranspilerError: When partial_controller is not well-formed.\n\n Returns:\n FlowController: A FlowController instance."} {"_id": "q_2442", "text": "Apply a single qubit gate to the qubit.\n\n Args:\n gate(str): the single qubit gate name\n params(list): the operation parameters op['params']\n Returns:\n tuple: a tuple of U gate parameters (theta, phi, lam)\n Raises:\n QiskitError: if the gate name is not valid"} {"_id": "q_2443", "text": "Get the matrix 
for a single qubit.\n\n Args:\n gate(str): the single qubit gate name\n params(list): the operation parameters op['params']\n Returns:\n array: A numpy array representing the matrix"} {"_id": "q_2444", "text": "Return the index string for Numpy.einsum matrix-vector multiplication.\n\n The returned indices are to perform a matrix multiplication A.v where\n the matrix A is an M-qubit matrix, vector v is an N-qubit vector, and\n M <= N, and identity matrices are implied on the subsystems where A has no\n support on v.\n\n Args:\n gate_indices (list[int]): the indices of the right matrix subsystems\n to contract with the left matrix.\n number_of_qubits (int): the total number of qubits for the right matrix.\n\n Returns:\n str: An indices string for the Numpy.einsum function."} {"_id": "q_2445", "text": "Return the index string for Numpy.einsum matrix multiplication.\n\n The returned indices are to perform a matrix multiplication A.v where\n the matrix A is an M-qubit matrix, matrix v is an N-qubit vector, and\n M <= N, and identity matrices are implied on the subsystems where A has no\n support on v.\n\n Args:\n gate_indices (list[int]): the indices of the right matrix subsystems\n to contract with the left matrix.\n number_of_qubits (int): the total number of qubits for the right matrix.\n\n Returns:\n tuple: (mat_left, mat_right, tens_in, tens_out) of index strings for\n that may be combined into a Numpy.einsum function string.\n\n Raises:\n QiskitError: if the total number of qubits plus the number of\n contracted indices is greater than 26."} {"_id": "q_2446", "text": "Build a ``DAGCircuit`` object from a ``QuantumCircuit``.\n\n Args:\n circuit (QuantumCircuit): the input circuit.\n\n Return:\n DAGCircuit: the DAG representing the input circuit."} {"_id": "q_2447", "text": "Function used to fit the decay cosine."} {"_id": "q_2448", "text": "Plot coherence data.\n\n Args:\n xdata\n ydata\n std_error\n fit\n fit_function\n xunit\n exp_str\n qubit_label\n Raises:\n 
ImportError: If matplotlib is not installed."} {"_id": "q_2449", "text": "Take the raw rb data and convert it into averages and std dev\n\n Args:\n raw_rb (numpy.array): m x n x l list where m is the number of seeds, n\n is the number of Clifford sequences and l is the number of qubits\n\n Return:\n numpy_array: 2 x n x l list where index 0 is the mean over seeds, 1 is\n the std dev over seeds"} {"_id": "q_2450", "text": "Plot randomized benchmarking data.\n\n Args:\n xdata (list): list of subsequence lengths\n ydatas (list): list of lists of survival probabilities for each\n sequence\n yavg (list): mean of the survival probabilities at each sequence\n length\n yerr (list): error of the survival\n fit (list): fit parameters\n survival_prob (callable): function that computes survival probability\n ax (Axes or None): plot axis (if passed in)\n show_plt (bool): display the plot.\n\n Raises:\n ImportError: If matplotlib is not installed."} {"_id": "q_2451", "text": "Validates the input to state visualization functions.\n\n Args:\n quantum_state (ndarray): Input state / density matrix.\n Returns:\n rho: A 2d numpy array for the density matrix.\n Raises:\n VisualizationError: Invalid input."} {"_id": "q_2452", "text": "Trim a PIL image and remove white space."} {"_id": "q_2453", "text": "Get the list of qubits drawing this gate would cover"} {"_id": "q_2454", "text": "Build an ``Instruction`` object from a ``QuantumCircuit``.\n\n The instruction is anonymous (not tied to a named quantum register),\n and so can be inserted into another circuit. The instruction will\n have the same string name as the circuit.\n\n Args:\n circuit (QuantumCircuit): the input circuit.\n\n Return:\n Instruction: an instruction equivalent to the action of the\n input circuit. 
Upon decomposition, this instruction will\n yield the components comprising the original circuit."} {"_id": "q_2455", "text": "Pick a convenient layout depending on the best matching\n qubit connectivity, and set the property `layout`.\n\n Args:\n dag (DAGCircuit): DAG to find layout for.\n\n Raises:\n TranspilerError: if dag wider than self.coupling_map"} {"_id": "q_2456", "text": "Computes the qubit mapping with the best connectivity.\n\n Args:\n n_qubits (int): Number of subset qubits to consider.\n\n Returns:\n ndarray: Array of qubits to use for best connectivity mapping."} {"_id": "q_2457", "text": "Apply barrier to circuit.\n If qargs is None, applies to all the qubits.\n Args is a list of QuantumRegister or single qubits.\n For QuantumRegister, applies barrier to all the qubits in that register."} {"_id": "q_2458", "text": "Process an Id or IndexedId node as a bit or register type.\n\n Return a list of tuples (Register,index)."} {"_id": "q_2459", "text": "Process a gate node.\n\n If opaque is True, process the node as an opaque gate node."} {"_id": "q_2460", "text": "Process a CNOT gate node."} {"_id": "q_2461", "text": "Process a measurement node."} {"_id": "q_2462", "text": "Process an if node."} {"_id": "q_2463", "text": "Create a DAG node out of a parsed AST op node.\n\n Args:\n name (str): operation name to apply to the dag.\n params (list): op parameters\n qargs (list(QuantumRegister, int)): qubits to attach to\n\n Raises:\n QiskitError: if encountering a non-basis opaque gate"} {"_id": "q_2464", "text": "Return duration of supplied channels.\n\n Args:\n *channels: Supplied channels"} {"_id": "q_2465", "text": "Return minimum start time for supplied channels.\n\n Args:\n *channels: Supplied channels"} {"_id": "q_2466", "text": "Return maximum start time for supplied channels.\n\n Args:\n *channels: Supplied channels"} {"_id": "q_2467", "text": "Iterable for flattening Schedule tree.\n\n Args:\n time: Shifted time due to parent\n\n Yields:\n Tuple[int, 
ScheduleComponent]: Tuple containing time `ScheduleComponent` starts\n at and the flattened `ScheduleComponent`."} {"_id": "q_2468", "text": "Include unknown fields after load.\n\n Unknown fields are added with no processing at all.\n\n Args:\n valid_data (dict or list): validated data returned by ``load()``.\n many (bool): if True, data and original_data are a list.\n original_data (dict or list): data passed to ``load()`` in the\n first place.\n\n Returns:\n dict: the same ``valid_data`` extended with the unknown attributes.\n\n Inspired by https://github.com/marshmallow-code/marshmallow/pull/595."} {"_id": "q_2469", "text": "Add validation after instantiation."} {"_id": "q_2470", "text": "Serialize the model into a Python dict of simple types.\n\n Note that this method requires that the model is bound with\n ``@bind_schema``."} {"_id": "q_2471", "text": "n-qubit QFT on q in circ."} {"_id": "q_2472", "text": "Partial trace over subsystems of multi-partite vector.\n\n Args:\n vec (vector_like): complex vector N\n trace_systems (list(int)): a list of subsystems (starting from 0) to\n trace over.\n dimensions (list(int)): a list of the dimensions of the subsystems.\n If this is not set it will assume all\n subsystems are qubits.\n reverse (bool): ordering of systems in operator.\n If True system-0 is the right most system in tensor product.\n If False system-0 is the left most system in tensor product.\n\n Returns:\n ndarray: A density matrix with the appropriate subsystems traced over."} {"_id": "q_2473", "text": "Devectorize a vectorized square matrix.\n\n Args:\n vectorized_mat (ndarray): a vectorized density matrix.\n method (str): the method of devectorization. 
Allowed values are\n - 'col' (default): flattens to column-major vector.\n - 'row': flattens to row-major vector.\n - 'pauli': flattens in the n-qubit Pauli basis.\n - 'pauli-weights': flattens in the n-qubit Pauli basis ordered by\n weight.\n\n Returns:\n ndarray: the resulting matrix.\n Raises:\n Exception: if input state is not an n-qubit state"} {"_id": "q_2474", "text": "Convert a Choi-matrix to a Pauli-basis superoperator.\n\n Note that this function assumes that the Choi-matrix\n is defined in the standard column-stacking convention\n and is normalized to have trace 1. For a channel E this\n is defined as: choi = (I \\\\otimes E)(bell_state).\n\n The resulting 'rauli' R acts on input states as\n |rho_out>_p = R.|rho_in>_p\n where |rho> = vectorize(rho, method='pauli') for order=1\n and |rho> = vectorize(rho, method='pauli_weights') for order=0.\n\n Args:\n choi (matrix): the input Choi-matrix.\n order (int): ordering of the Pauli group vector.\n order=1 (default) is standard lexicographic ordering.\n Eg: [II, IX, IY, IZ, XI, XX, XY,...]\n order=0 is ordered by weights.\n Eg. 
[II, IX, IY, IZ, XI, XY, XZ, XX, XY,...]\n\n Returns:\n np.array: A superoperator in the Pauli basis."} {"_id": "q_2475", "text": "Construct the outer product of two vectors.\n\n The second vector argument is optional, if absent the projector\n of the first vector will be returned.\n\n Args:\n vector1 (ndarray): the first vector.\n vector2 (ndarray): the (optional) second vector.\n\n Returns:\n np.array: The matrix |v1>..\"\n callback (callable): The callback that will be executed when an event is\n emitted."} {"_id": "q_2555", "text": "Emits an event if there are any subscribers.\n\n Args\n event (String): The event to be emitted\n args: Arguments linked with the event\n kwargs: Named arguments linked with the event"} {"_id": "q_2556", "text": "Unsubscribe the specific callback to the event.\n\n Args\n event (String): The event to unsubscribe\n callback (callable): The callback that won't be executed anymore\n\n Returns\n True: if we have successfully unsubscribed to the event\n False: if there's no callback previously registered"} {"_id": "q_2557", "text": "Call to create a circuit with gates that take the\n desired vector to zero.\n\n Returns:\n QuantumCircuit: circuit to take self.params vector to |00..0>"} {"_id": "q_2558", "text": "Checks if value has the format of a virtual qubit"} {"_id": "q_2559", "text": "Returns a copy of a Layout instance."} {"_id": "q_2560", "text": "Checks if the attribute name is in the list of attributes to protect. 
If so, raises\n TranspilerAccessError.\n\n Args:\n name (string): the attribute name to check\n\n Raises:\n TranspilerAccessError: when name is the list of attributes to protect."} {"_id": "q_2561", "text": "Run the StochasticSwap pass on `dag`.\n\n Args:\n dag (DAGCircuit): DAG to map.\n\n Returns:\n DAGCircuit: A mapped DAG.\n\n Raises:\n TranspilerError: if the coupling map or the layout are not\n compatible with the DAG"} {"_id": "q_2562", "text": "Provide a DAGCircuit for a new mapped layer.\n\n i (int) = layer number\n first_layer (bool) = True if this is the first layer in the\n circuit with any multi-qubit gates\n best_layout (Layout) = layout returned from _layer_permutation\n best_depth (int) = depth returned from _layer_permutation\n best_circuit (DAGCircuit) = swap circuit returned\n from _layer_permutation\n layer_list (list) = list of DAGCircuit objects for each layer,\n output of DAGCircuit layers() method\n\n Return a DAGCircuit object to append to the output DAGCircuit\n that the _mapper method is building."} {"_id": "q_2563", "text": "Return the Pauli group with 4^n elements.\n\n The phases have been removed.\n case 'weight' is ordered by Pauli weights and\n case 'tensor' is ordered by I,X,Y,Z counting lowest qubit fastest.\n\n Args:\n number_of_qubits (int): number of qubits\n case (str): determines ordering of group elements ('weight' or 'tensor')\n\n Returns:\n list: list of Pauli objects\n\n Raises:\n QiskitError: case is not 'weight' or 'tensor'\n QiskitError: number_of_qubits is larger than 4"} {"_id": "q_2564", "text": "Construct pauli from boolean array.\n\n Args:\n z (numpy.ndarray): boolean, z vector\n x (numpy.ndarray): boolean, x vector\n\n Returns:\n Pauli: self\n\n Raises:\n QiskitError: if z or x are None or the length of z and x are different."} {"_id": "q_2565", "text": "Convert to Operator object."} {"_id": "q_2566", "text": "Convert to Pauli circuit instruction."} {"_id": "q_2567", "text": "Update partial or entire x.\n\n 
Args:\n x (numpy.ndarray or list): to-be-updated x\n indices (numpy.ndarray or list or optional): to-be-updated qubit indices\n\n Returns:\n Pauli: self\n\n Raises:\n QiskitError: when updating whole x, the number of qubits must be the same."} {"_id": "q_2568", "text": "Append pauli at the end.\n\n Args:\n paulis (Pauli): the to-be-inserted or appended pauli\n pauli_labels (list[str]): the to-be-inserted or appended pauli label\n\n Returns:\n Pauli: self"} {"_id": "q_2569", "text": "Delete pauli at the indices.\n\n Args:\n indices(list[int]): the indices of to-be-deleted paulis\n\n Returns:\n Pauli: self"} {"_id": "q_2570", "text": "Generate single qubit pauli at index with pauli_label with length num_qubits.\n\n Args:\n num_qubits (int): the length of pauli\n index (int): the qubit index to insert the single qubit\n pauli_label (str): pauli\n\n Returns:\n Pauli: single qubit pauli"} {"_id": "q_2571", "text": "Simulate the outcome of measurement of a qubit.\n\n Args:\n qubit (int): the qubit to measure\n\n Return:\n tuple: pair (outcome, probability) where outcome is '0' or '1' and\n probability is the probability of the returned outcome."} {"_id": "q_2572", "text": "Generate memory samples from current statevector.\n\n Args:\n measure_params (list): List of (qubit, cmembit) values for\n measure instructions to sample.\n num_samples (int): The number of memory samples to generate.\n\n Returns:\n list: A list of memory values in hex format."} {"_id": "q_2573", "text": "Apply a measure instruction to a qubit.\n\n Args:\n qubit (int): qubit is the qubit measured.\n cmembit (int): is the classical memory bit to store outcome in.\n cregbit (int, optional): is the classical register bit to store outcome in."} {"_id": "q_2574", "text": "Apply a reset instruction to a qubit.\n\n Args:\n qubit (int): the qubit being reset\n\n This is done by simulating a measurement\n outcome and projecting onto the outcome state while\n renormalizing."} {"_id": "q_2575", "text": 
"Validate an initial statevector"} {"_id": "q_2576", "text": "Set the initial statevector for simulation"} {"_id": "q_2577", "text": "Determine if measure sampling is allowed for an experiment\n\n Args:\n experiment (QobjExperiment): a qobj experiment."} {"_id": "q_2578", "text": "Run qobj asynchronously.\n\n Args:\n qobj (Qobj): payload of the experiment\n backend_options (dict): backend options\n\n Returns:\n BasicAerJob: derived from BaseJob\n\n Additional Information:\n backend_options: Is a dict of options for the backend. It may contain\n * \"initial_statevector\": vector_like\n\n The \"initial_statevector\" option specifies a custom initial\n statevector for the simulator to be used instead of the all\n zero state. The size of this vector must be correct for the number\n of qubits in all experiments in the qobj.\n\n Example::\n\n backend_options = {\n \"initial_statevector\": np.array([1, 0, 0, 1j]) / np.sqrt(2),\n }"} {"_id": "q_2579", "text": "Run experiments in qobj\n\n Args:\n job_id (str): unique id for the job.\n qobj (Qobj): job description\n\n Returns:\n Result: Result object"} {"_id": "q_2580", "text": "Semantic validations of the qobj which cannot be done via schemas."} {"_id": "q_2581", "text": "Validate an initial unitary matrix"} {"_id": "q_2582", "text": "Set the initial unitary for simulation"} {"_id": "q_2583", "text": "Return the current unitary in JSON Result spec format"} {"_id": "q_2584", "text": "Semantic validations of the qobj which cannot be done via schemas.\n Some of these may later move to backend schemas.\n 1. No shots\n 2. 
No measurements in the middle"} {"_id": "q_2585", "text": "Determine if obj is a bit"} {"_id": "q_2586", "text": "Pick a layout by assigning n circuit qubits to device qubits 0, .., n-1.\n\n Args:\n dag (DAGCircuit): DAG to find layout for.\n\n Raises:\n TranspilerError: if dag wider than self.coupling_map"} {"_id": "q_2587", "text": "Return maximum time of timeslots over all channels.\n\n Args:\n *channels: Channels over which to obtain stop time."} {"_id": "q_2588", "text": "Return a new TimeslotCollection shifted by `time`.\n\n Args:\n time: time to be shifted by"} {"_id": "q_2589", "text": "Report on GitHub that the specified branch is failing to build at\n the specified commit. The method will open an issue indicating that\n the branch is failing. If there is an issue already open, it will add a\n comment to avoid reporting the same failure twice.\n\n Args:\n branch (str): branch name to report about.\n commit (str): commit hash at which the build fails.\n infourl (str): URL with extra info about the failure such as the\n build logs."} {"_id": "q_2590", "text": "Sort rho data"} {"_id": "q_2591", "text": "Create a paulivec representation.\n\n Graphical representation of the input array.\n\n Args:\n rho (array): State vector or density matrix.\n figsize (tuple): Figure size in pixels.\n slider (bool): activate slider\n show_legend (bool): show legend of graph content"} {"_id": "q_2592", "text": "Apply RZZ to circuit."} {"_id": "q_2593", "text": "Apply Fredkin to circuit."} {"_id": "q_2594", "text": "Extract readout and CNOT errors and compute swap costs."} {"_id": "q_2595", "text": "Program graph has virtual qubits as nodes.\n Two nodes have an edge if the corresponding virtual qubits\n participate in a 2-qubit gate. 
The edge is weighted by the\n number of CNOTs between the pair."} {"_id": "q_2596", "text": "Select the best remaining hardware qubit for the next program qubit."} {"_id": "q_2597", "text": "Return a list of instructions for this CompositeGate.\n\n If the CompositeGate itself contains composites, call\n this method recursively."} {"_id": "q_2598", "text": "Invert this gate."} {"_id": "q_2599", "text": "Add controls to this gate."} {"_id": "q_2600", "text": "Add classical control register."} {"_id": "q_2601", "text": "Return True if operator is a unitary matrix."} {"_id": "q_2602", "text": "Return the conjugate of the operator."} {"_id": "q_2603", "text": "Return the transpose of the operator."} {"_id": "q_2604", "text": "Return the matrix power of the operator.\n\n Args:\n n (int): the power to raise the matrix to.\n\n Returns:\n BaseOperator: the n-times composed operator.\n\n Raises:\n QiskitError: if the input and output dimensions of the operator\n are not equal, or the power is not a positive integer."} {"_id": "q_2605", "text": "Return the tensor shape of the matrix operator"} {"_id": "q_2606", "text": "Convert a QuantumCircuit or Instruction to an Operator."} {"_id": "q_2607", "text": "Separate a bitstring according to the registers defined in the result header."} {"_id": "q_2608", "text": "Format an experiment result memory object for measurement level 1.\n\n Args:\n memory (list): Memory from experiment with `meas_level==1`. 
`avg` or\n `single` will be inferred from shape of result memory.\n\n Returns:\n np.ndarray: Measurement level 1 complex numpy array\n\n Raises:\n QiskitError: If the returned numpy array does not have 1 (avg) or 2 (single)\n indices."} {"_id": "q_2609", "text": "Format an experiment result memory object for measurement level 2.\n\n Args:\n memory (list): Memory from experiment with `meas_level==2` and `memory==True`.\n header (dict): the experiment header dictionary containing\n useful information for postprocessing.\n\n Returns:\n list[str]: List of bitstrings"} {"_id": "q_2610", "text": "Format a single experiment result coming from backend to present\n to the Qiskit user.\n\n Args:\n counts (dict): counts histogram of multiple shots\n header (dict): the experiment header dictionary containing\n useful information for postprocessing.\n\n Returns:\n dict: a formatted counts"} {"_id": "q_2611", "text": "Format statevector coming from the backend to present to the Qiskit user.\n\n Args:\n vec (list): a list of [re, im] complex numbers.\n decimals (int): the number of decimals in the statevector.\n If None, no rounding is done.\n\n Returns:\n list[complex]: a list of python complex numbers."} {"_id": "q_2612", "text": "Format unitary coming from the backend to present to the Qiskit user.\n\n Args:\n mat (list[list]): a list of list of [re, im] complex numbers\n decimals (int): the number of decimals in the statevector.\n If None, no rounding is done.\n\n Returns:\n list[list[complex]]: a matrix of complex numbers"} {"_id": "q_2613", "text": "Submit the job to the backend for execution.\n\n Raises:\n QobjValidationError: if the JSON serialization of the Qobj passed\n during construction does not validate against the Qobj schema.\n\n JobError: if trying to re-submit the job."} {"_id": "q_2614", "text": "Gets the status of the job by querying the Python future\n\n Returns:\n qiskit.providers.JobStatus: The current JobStatus\n\n Raises:\n JobError: If the future is 
in unexpected state\n concurrent.futures.TimeoutError: if timeout occurred."} {"_id": "q_2615", "text": "Whether `lo_freq` is within the `LoRange`.\n\n Args:\n lo_freq: LO frequency to be checked\n\n Returns:\n bool: True if lo_freq is included in this range, otherwise False"} {"_id": "q_2616", "text": "Create a Bloch sphere representation.\n\n Graphical representation of the input array, using as many Bloch\n spheres as qubits are required.\n\n Args:\n rho (array): State vector or density matrix\n figsize (tuple): Figure size in pixels."} {"_id": "q_2617", "text": "Expand all op nodes to the given basis.\n\n Args:\n dag(DAGCircuit): input dag\n\n Raises:\n QiskitError: if unable to unroll given the basis due to undefined\n decomposition rules (such as a bad basis) or excessive recursion.\n\n Returns:\n DAGCircuit: output unrolled dag"} {"_id": "q_2618", "text": "Create a Q sphere representation.\n\n Graphical representation of the input array, using a Q sphere for each\n eigenvalue.\n\n Args:\n rho (array): State vector or density matrix.\n figsize (tuple): Figure size in pixels."} {"_id": "q_2619", "text": "Return the lex index of a combination.\n\n Args:\n n (int): the total number of options.\n k (int): The number of elements.\n lst (list): list\n\n Returns:\n int: returns int index for lex order\n\n Raises:\n VisualizationError: if length of list is not equal to k"} {"_id": "q_2620", "text": "Returns the Instruction object corresponding to the op for the node else None"} {"_id": "q_2621", "text": "Generates zero-sampled `SamplePulse`.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n name: Name of pulse."} {"_id": "q_2622", "text": "Generates square wave `SamplePulse`.\n\n Applies `left` sampling strategy to generate discrete pulse from continuous function.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude. Wave range is [-amp, amp].\n period: Pulse period, units of dt. 
If `None` defaults to single cycle.\n phase: Pulse phase.\n name: Name of pulse."} {"_id": "q_2623", "text": "Generates sawtooth wave `SamplePulse`.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude. Wave range is [-amp, amp].\n period: Pulse period, units of dt. If `None` defaults to single cycle.\n phase: Pulse phase.\n name: Name of pulse."} {"_id": "q_2624", "text": "Generates cosine wave `SamplePulse`.\n\n Applies `left` sampling strategy to generate discrete pulse from continuous function.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude.\n freq: Pulse frequency, units of 1/dt. If `None` defaults to single cycle.\n phase: Pulse phase.\n name: Name of pulse."} {"_id": "q_2625", "text": "Generates sine wave `SamplePulse`.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude.\n freq: Pulse frequency, units of 1/dt. If `None` defaults to single cycle.\n phase: Pulse phase.\n name: Name of pulse."} {"_id": "q_2626", "text": "Generates unnormalized gaussian `SamplePulse`.\n\n Centered at `duration/2` and zeroed at `t=-1` to prevent large initial discontinuity.\n\n Applies `left` sampling strategy to generate discrete pulse from continuous function.\n\n Integrated area under curve is $\\Omega_g(amp, sigma) = amp \\times np.sqrt(2\\pi \\sigma^2)$\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude at `duration/2`.\n sigma: Width (standard deviation) of pulse.\n name: Name of pulse."} {"_id": "q_2627", "text": "Generates unnormalized gaussian derivative `SamplePulse`.\n\n Applies `left` sampling strategy to generate discrete pulse from continuous function.\n\n Args:\n duration: Duration of pulse. 
Must be greater than zero.\n amp: Pulse amplitude at `center`.\n sigma: Width (standard deviation) of pulse.\n name: Name of pulse."} {"_id": "q_2628", "text": "Generates gaussian square `SamplePulse`.\n\n Centered at `duration/2` and zeroed at `t=-1` and `t=duration+1` to prevent\n large initial/final discontinuities.\n\n Applies `left` sampling strategy to generate discrete pulse from continuous function.\n\n Args:\n duration: Duration of pulse. Must be greater than zero.\n amp: Pulse amplitude.\n sigma: Width (standard deviation) of gaussian rise/fall portion of the pulse.\n risefall: Number of samples over which pulse rise and fall happen. Width of\n square portion of pulse will be `duration-2*risefall`.\n name: Name of pulse."} {"_id": "q_2629", "text": "Compute distance."} {"_id": "q_2630", "text": "Print the node data, with indent."} {"_id": "q_2631", "text": "Rename a classical or quantum register throughout the circuit.\n\n regname = existing register name string\n newname = replacement register name string"} {"_id": "q_2632", "text": "Add all wires in a classical register."} {"_id": "q_2633", "text": "Add a qubit or bit to the circuit.\n\n Args:\n wire (tuple): (Register,int) containing a register instance and index\n This adds a pair of in and out nodes connected by an edge.\n\n Raises:\n DAGCircuitError: if trying to add duplicate wire"} {"_id": "q_2634", "text": "Add a new operation node to the graph and assign properties.\n\n Args:\n op (Instruction): the operation associated with the DAG node\n qargs (list): list of quantum wires to attach to.\n cargs (list): list of classical wires to attach to.\n condition (tuple or None): optional condition (ClassicalRegister, int)"} {"_id": "q_2635", "text": "Check that the wiremap is consistent.\n\n Check that the wiremap refers to valid wires and that\n those wires have consistent types.\n\n Args:\n wire_map (dict): map from (register,idx) in keymap to\n (register,idx) in valmap\n keymap (dict): a map whose 
keys are wire_map keys\n valmap (dict): a map whose keys are wire_map values\n\n Raises:\n DAGCircuitError: if wire_map not valid"} {"_id": "q_2636", "text": "Use the wire_map dict to change the condition tuple's creg name.\n\n Args:\n wire_map (dict): a map from wires to wires\n condition (tuple): (ClassicalRegister,int)\n Returns:\n tuple(ClassicalRegister,int): new condition"} {"_id": "q_2637", "text": "Apply the input circuit to the output of this circuit.\n\n The two bases must be \"compatible\" or an exception occurs.\n A subset of input qubits of the input circuit are mapped\n to a subset of output qubits of this circuit.\n\n Args:\n input_circuit (DAGCircuit): circuit to append\n edge_map (dict): map {(Register, int): (Register, int)}\n from the output wires of input_circuit to input wires\n of self.\n\n Raises:\n DAGCircuitError: if missing, duplicate or inconsistent wire"} {"_id": "q_2638", "text": "Check that a list of wires is compatible with a node to be replaced.\n\n - no duplicate names\n - correct length for operation\n Raise an exception otherwise.\n\n Args:\n wires (list[register, index]): gives an order for (qu)bits\n in the input circuit that is replacing the node.\n node (DAGNode): a node in the dag\n\n Raises:\n DAGCircuitError: if check doesn't pass."} {"_id": "q_2639", "text": "Return predecessor and successor dictionaries.\n\n Args:\n node (DAGNode): reference to multi_graph node\n\n Returns:\n tuple(dict): tuple(predecessor_map, successor_map)\n These map from wire (Register, int) to predecessor (successor)\n nodes of n."} {"_id": "q_2640", "text": "Map all wires of the input circuit.\n\n Map all wires of the input circuit to predecessor and\n successor nodes in self, keyed on wires in self.\n\n Args:\n pred_map (dict): comes from _make_pred_succ_maps\n succ_map (dict): comes from _make_pred_succ_maps\n input_circuit (DAGCircuit): the input circuit\n wire_map (dict): the map from wires of input_circuit to wires of self\n\n Returns:\n tuple: 
full_pred_map, full_succ_map (dict, dict)\n\n Raises:\n DAGCircuitError: if more than one predecessor for output nodes"} {"_id": "q_2641", "text": "Yield nodes in topological order.\n\n Returns:\n generator(DAGNode): node in topological order"} {"_id": "q_2642", "text": "Get the list of \"op\" nodes in the dag.\n\n Args:\n op (Type): Instruction subclass op nodes to return. if op=None, return\n all op nodes.\n Returns:\n list[DAGNode]: the list of node ids containing the given op."} {"_id": "q_2643", "text": "Get the list of gate nodes in the dag.\n\n Returns:\n list: the list of node ids that represent gates."} {"_id": "q_2644", "text": "Get list of 2-qubit gates. Ignore snapshot, barriers, and the like."} {"_id": "q_2645", "text": "Returns set of the ancestors of a node as DAGNodes."} {"_id": "q_2646", "text": "Returns list of the successors of a node that are\n connected by a quantum edge as DAGNodes."} {"_id": "q_2647", "text": "Remove an operation node n.\n\n Add edges from predecessors to successors."} {"_id": "q_2648", "text": "Remove all of the descendant operation nodes of node."} {"_id": "q_2649", "text": "Remove all of the non-ancestors operation nodes of node."} {"_id": "q_2650", "text": "Yield a shallow view on a layer of this DAGCircuit for all d layers of this circuit.\n\n A layer is a circuit whose gates act on disjoint qubits, i.e.\n a layer has depth 1. The total number of layers equals the\n circuit depth d. The layers are indexed from 0 to d-1 with the\n earliest layer at index 0. The layers are constructed using a\n greedy algorithm. Each returned layer is a dict containing\n {\"graph\": circuit graph, \"partition\": list of qubit lists}.\n\n TODO: Gates that use the same cbits will end up in different\n layers as this is currently implemented. This may not be\n the desired behavior."} {"_id": "q_2651", "text": "Return a set of non-conditional runs of \"op\" nodes with the given names.\n\n For example, \"... 
h q[0]; cx q[0],q[1]; cx q[0],q[1]; h q[1]; ..\"\n would produce the tuple of cx nodes as an element of the set returned\n from a call to collect_runs([\"cx\"]). If instead the cx nodes were\n \"cx q[0],q[1]; cx q[1],q[0];\", the method would still return the\n pair in a tuple. The namelist can contain names that are not\n in the circuit's basis.\n\n Nodes must have only one successor to continue the run."} {"_id": "q_2652", "text": "Iterator for nodes that affect a given wire\n\n Args:\n wire (tuple(Register, index)): the wire to be looked at.\n only_ops (bool): True if only the ops nodes are wanted\n otherwise all nodes are returned.\n Yield:\n DAGNode: the successive ops on the given wire\n\n Raises:\n DAGCircuitError: if the given wire doesn't exist in the DAG"} {"_id": "q_2653", "text": "Generate a TomographyBasis object.\n\n See TomographyBasis for further details.\n\n Args:\n prep_fun (callable) optional: the function which adds preparation\n gates to a circuit.\n meas_fun (callable) optional: the function which adds measurement\n gates to a circuit.\n\n Returns:\n TomographyBasis: A tomography basis."} {"_id": "q_2654", "text": "Add state measurement gates to a circuit."} {"_id": "q_2655", "text": "Generate a dictionary of tomography experiment configurations.\n\n This returns a data structure that is used by other tomography functions\n to generate state and process tomography circuits, and extract tomography\n data from results after execution on a backend.\n\n Quantum State Tomography:\n By default it will return a set for performing Quantum State\n Tomography where individual qubits are measured in the Pauli basis.\n A custom measurement basis may also be used by defining a user\n `tomography_basis` and passing this in for the `meas_basis` argument.\n\n Quantum Process Tomography:\n A quantum process tomography set is created by specifying a preparation\n basis along with a measurement basis. 
The preparation basis may be a\n user defined `tomography_basis`, or one of the two built in basis 'SIC'\n or 'Pauli'.\n - SIC: Is a minimal symmetric informationally complete preparation\n basis for 4 states for each qubit (4 ^ number of qubits total\n preparation states). These correspond to the |0> state and the 3\n other vertices of a tetrahedron on the Bloch-sphere.\n - Pauli: Is a tomographically overcomplete preparation basis of the six\n eigenstates of the 3 Pauli operators (6 ^ number of qubits\n total preparation states).\n\n Args:\n meas_qubits (list): The qubits being measured.\n meas_basis (tomography_basis or str): The qubit measurement basis.\n The default value is 'Pauli'.\n prep_qubits (list or None): The qubits being prepared. If None then\n meas_qubits will be used for process tomography experiments.\n prep_basis (tomography_basis or None): The optional qubit preparation\n basis. If no basis is specified state tomography will be performed\n instead of process tomography. A built in basis may be specified by\n 'SIC' or 'Pauli' (SIC basis recommended for > 2 qubits).\n\n Returns:\n dict: A dict of tomography configurations that can be parsed by\n `create_tomography_circuits` and `tomography_data` functions\n for implementing quantum tomography experiments. This output contains\n fields \"qubits\", \"meas_basis\", \"circuits\". 
It may also optionally\n contain a field \"prep_basis\" for process tomography experiments.\n ```\n {\n 'qubits': qubits (list[ints]),\n 'meas_basis': meas_basis (tomography_basis),\n 'circuit_labels': (list[string]),\n 'circuits': (list[dict]) # prep and meas configurations\n # optionally for process tomography experiments:\n 'prep_basis': prep_basis (tomography_basis)\n }\n ```\n Raises:\n QiskitError: if the Qubits argument is not a list."} {"_id": "q_2656", "text": "Generate a dictionary of process tomography experiment configurations.\n\n This returns a data structure that is used by other tomography functions\n to generate state and process tomography circuits, and extract tomography\n data from results after execution on a backend.\n\n A quantum process tomography set is created by specifying a preparation\n basis along with a measurement basis. The preparation basis may be a\n user defined `tomography_basis`, or one of the two built in basis 'SIC'\n or 'Pauli'.\n - SIC: Is a minimal symmetric informationally complete preparation\n basis for 4 states for each qubit (4 ^ number of qubits total\n preparation states). These correspond to the |0> state and the 3\n other vertices of a tetrahedron on the Bloch-sphere.\n - Pauli: Is a tomographically overcomplete preparation basis of the six\n eigenstates of the 3 Pauli operators (6 ^ number of qubits\n total preparation states).\n\n Args:\n meas_qubits (list): The qubits being measured.\n meas_basis (tomography_basis or str): The qubit measurement basis.\n The default value is 'Pauli'.\n prep_qubits (list or None): The qubits being prepared. If None then\n meas_qubits will be used for process tomography experiments.\n prep_basis (tomography_basis or str): The qubit preparation basis.\n The default value is 'SIC'.\n\n Returns:\n dict: A dict of tomography configurations that can be parsed by\n `create_tomography_circuits` and `tomography_data` functions\n for implementing quantum tomography experiments. 
This output contains\n fields \"qubits\", \"meas_basis\", \"prep_basis\", \"circuits\".\n ```\n {\n 'qubits': qubits (list[ints]),\n 'meas_basis': meas_basis (tomography_basis),\n 'prep_basis': prep_basis (tomography_basis),\n 'circuit_labels': (list[string]),\n 'circuits': (list[dict]) # prep and meas configurations\n }\n ```"} {"_id": "q_2657", "text": "Add tomography measurement circuits to a QuantumProgram.\n\n The quantum program must contain a circuit 'name', which is treated as a\n state preparation circuit for state tomography, or as the circuit being\n measured for process tomography. This function then appends the circuit\n with a set of measurements specified by the input `tomography_set`;\n optionally it also prepends the circuit with state preparation circuits if\n they are specified in the `tomography_set`.\n\n For n-qubit tomography with a tomographically complete set of preparations\n and measurements this results in $4^n 3^n$ circuits being added to the\n quantum program.\n\n Args:\n circuit (QuantumCircuit): The circuit to be appended with tomography\n state preparation and/or measurements.\n qreg (QuantumRegister): the quantum register containing qubits to be\n measured.\n creg (ClassicalRegister): the classical register containing bits to\n store measurement outcomes.\n tomoset (tomography_set): the dict of tomography configurations.\n\n Returns:\n list: A list of quantum tomography circuits for the input circuit.\n\n Raises:\n QiskitError: if circuit is not a valid QuantumCircuit\n\n Example:\n For a tomography set specifying state tomography of qubit-0 prepared\n by a circuit 'circ' this would return:\n ```\n ['circ_meas_X(0)', 'circ_meas_Y(0)', 'circ_meas_Z(0)']\n ```\n For process tomography of the same circuit with preparation in the\n SIC-POVM basis it would return:\n ```\n [\n 'circ_prep_S0(0)_meas_X(0)', 'circ_prep_S0(0)_meas_Y(0)',\n 'circ_prep_S0(0)_meas_Z(0)', 'circ_prep_S1(0)_meas_X(0)',\n 'circ_prep_S1(0)_meas_Y(0)', 
'circ_prep_S1(0)_meas_Z(0)',\n 'circ_prep_S2(0)_meas_X(0)', 'circ_prep_S2(0)_meas_Y(0)',\n 'circ_prep_S2(0)_meas_Z(0)', 'circ_prep_S3(0)_meas_X(0)',\n 'circ_prep_S3(0)_meas_Y(0)', 'circ_prep_S3(0)_meas_Z(0)'\n ]\n ```"} {"_id": "q_2658", "text": "Return a results dict for a state or process tomography experiment.\n\n Args:\n results (Result): Results from execution of a process tomography\n circuits on a backend.\n name (string): The name of the circuit being reconstructed.\n tomoset (tomography_set): the dict of tomography configurations.\n\n Returns:\n list: A list of dicts for the outcome of each process tomography\n measurement circuit."} {"_id": "q_2659", "text": "Reconstruct a density matrix or process-matrix from tomography data.\n\n If the input data is state_tomography_data the returned operator will\n be a density matrix. If the input data is process_tomography_data the\n returned operator will be a Choi-matrix in the column-vectorization\n convention.\n\n Args:\n tomo_data (dict): process tomography measurement data.\n method (str): the fitting method to use.\n Available methods:\n - 'wizard' (default)\n - 'leastsq'\n options (dict or None): additional options for fitting method.\n\n Returns:\n numpy.array: The fitted operator.\n\n Available methods:\n - 'wizard' (Default): The returned operator will be constrained to be\n positive-semidefinite.\n Options:\n - 'trace': the trace of the returned operator.\n The default value is 1.\n - 'beta': hedging parameter for computing frequencies from\n zero-count data. 
The default value is 0.50922.\n - 'epsilon': threshold for truncating small eigenvalues to zero.\n The default value is 0.\n - 'leastsq': Fitting without positive-semidefinite constraint.\n Options:\n - 'trace': Same as for 'wizard' method.\n - 'beta': Same as for 'wizard' method.\n Raises:\n Exception: if the `method` parameter is not valid."} {"_id": "q_2660", "text": "Reconstruct a state from unconstrained least-squares fitting.\n\n Args:\n tomo_data (list[dict]): state or process tomography data.\n weights (list or array or None): weights to use for least squares\n fitting. The default is standard deviation from a binomial\n distribution.\n trace (float or None): trace of returned operator. The default is 1.\n beta (float or None): hedge parameter (>=0) for computing frequencies\n from zero-count data. The default value is 0.50922.\n\n Returns:\n numpy.array: A numpy array of the reconstructed operator."} {"_id": "q_2661", "text": "Returns a projector."} {"_id": "q_2662", "text": "Monitor the status of an IBMQJob instance.\n\n Args:\n job (BaseJob): Job to monitor.\n interval (int): Time interval between status queries.\n monitor_async (bool): Monitor asynchronously (in Jupyter only).\n quiet (bool): If True, do not print status messages.\n output (file): The file-like object to write status messages to.\n By default this is sys.stdout.\n\n Raises:\n QiskitError: When trying to run async outside of Jupyter\n ImportError: ipywidgets not available for notebook."} {"_id": "q_2663", "text": "Compute Euler angles for a single-qubit gate.\n\n Find angles (theta, phi, lambda) such that\n unitary_matrix = phase * Rz(phi) * Ry(theta) * Rz(lambda)\n\n Args:\n unitary_matrix (ndarray): 2x2 unitary matrix\n\n Returns:\n tuple: (theta, phi, lambda) Euler angles of SU(2)\n\n Raises:\n QiskitError: if unitary_matrix not 2x2, or failure"} {"_id": "q_2664", "text": "Extends dag with virtual qubits that are in layout but not in the circuit yet.\n\n Args:\n dag (DAGCircuit): DAG to 
extend.\n\n Returns:\n DAGCircuit: An extended DAG.\n\n Raises:\n TranspilerError: If there is no layout in the property set and none was set at init time."} {"_id": "q_2665", "text": "The qubits properties widget\n\n Args:\n backend (IBMQbackend): The backend.\n\n Returns:\n VBox: A VBox widget."} {"_id": "q_2666", "text": "Widget for displaying job history\n\n Args:\n backend (IBMQbackend): The backend.\n\n Returns:\n Tab: A tab widget for history images."} {"_id": "q_2667", "text": "Plots the job history of the user from the given list of jobs.\n\n Args:\n jobs (list): A list of jobs with type IBMQjob.\n interval (str): Interval over which to examine.\n\n Returns:\n fig: A Matplotlib figure instance."} {"_id": "q_2668", "text": "Transpile one or more circuits, according to some desired\n transpilation targets.\n\n All arguments may be given as either singleton or list. In case of list,\n the length must be equal to the number of circuits being transpiled.\n\n Transpilation is done in parallel using multiprocessing.\n\n Args:\n circuits (QuantumCircuit or list[QuantumCircuit]):\n Circuit(s) to transpile\n\n backend (BaseBackend):\n If set, transpiler options are automatically grabbed from\n backend.configuration() and backend.properties().\n If any other option is explicitly set (e.g. coupling_map), it\n will override the backend's.\n Note: the backend arg is purely for convenience. The resulting\n circuit may be run on any backend as long as it is compatible.\n\n basis_gates (list[str]):\n List of basis gate names to unroll to.\n e.g:\n ['u1', 'u2', 'u3', 'cx']\n If None, do not unroll.\n\n coupling_map (CouplingMap or list):\n Coupling map (perhaps custom) to target in mapping.\n Multiple formats are supported:\n a. CouplingMap instance\n\n b. 
list\n Must be given as an edge list, where each entry is a pair of\n qubit indices specifying a two-qubit interaction supported by\n the backend\n e.g:\n [[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]\n\n backend_properties (BackendProperties):\n Properties returned by a backend, including information on gate\n errors, readout errors, qubit coherence times, etc. For a backend\n that provides this information, it can be obtained with:\n ``backend.properties()``\n\n initial_layout (Layout or dict or list):\n Initial position of virtual qubits on physical qubits.\n If this layout makes the circuit compatible with the coupling_map\n constraints, it will be used.\n The final layout is not guaranteed to be the same, as the transpiler\n may permute qubits through swaps or other means.\n\n Multiple formats are supported:\n a. Layout instance\n\n b. dict\n virtual to physical:\n {qr[0]: 0,\n qr[1]: 3,\n qr[2]: 5}\n\n physical to virtual:\n {0: qr[0],\n 3: qr[1],\n 5: qr[2]}\n\n c. list\n virtual to physical:\n [0, 3, 5] # virtual qubits are ordered (in addition to named)\n\n physical to virtual:\n [qr[0], None, None, qr[1], None, qr[2]]\n\n seed_transpiler (int):\n Sets the random seed for the stochastic parts of the transpiler\n\n optimization_level (int):\n How much optimization to perform on the circuits.\n Higher levels generate more optimized circuits,\n at the expense of longer transpilation time.\n 0: no optimization\n 1: light optimization\n 2: heavy optimization\n\n pass_manager (PassManager):\n The pass manager to use for a custom pipeline of transpiler passes.\n If this arg is present, all other args will be ignored and the\n pass manager will be used directly (Qiskit will not attempt to\n auto-select a pass manager based on transpile options).\n\n seed_mapper (int):\n DEPRECATED in 0.8: use ``seed_transpiler`` kwarg instead\n\n Returns:\n QuantumCircuit or list[QuantumCircuit]: transpiled circuit(s).\n\n Raises:\n TranspilerError: in case of bad inputs to transpiler or errors in 
passes"} {"_id": "q_2669", "text": "Execute a list of circuits or pulse schedules on a backend.\n\n The execution is asynchronous, and a handle to a job instance is returned.\n\n Args:\n experiments (QuantumCircuit or list[QuantumCircuit] or Schedule or list[Schedule]):\n Circuit(s) or pulse schedule(s) to execute\n\n backend (BaseBackend):\n Backend to execute circuits on.\n Transpiler options are automatically grabbed from\n backend.configuration() and backend.properties().\n If any other option is explicitly set (e.g. coupling_map), it\n will override the backend's.\n\n basis_gates (list[str]):\n List of basis gate names to unroll to.\n e.g:\n ['u1', 'u2', 'u3', 'cx']\n If None, do not unroll.\n\n coupling_map (CouplingMap or list):\n Coupling map (perhaps custom) to target in mapping.\n Multiple formats are supported:\n a. CouplingMap instance\n\n b. list\n Must be given as an edge list, where each entry is a pair of\n qubit indices specifying a two-qubit interaction supported by\n the backend\n e.g:\n [[0, 1], [0, 3], [1, 2], [1, 5], [2, 5], [4, 1], [5, 3]]\n\n backend_properties (BackendProperties):\n Properties returned by a backend, including information on gate\n errors, readout errors, qubit coherence times, etc. For a backend\n that provides this information, it can be obtained with:\n ``backend.properties()``\n\n initial_layout (Layout or dict or list):\n Initial position of virtual qubits on physical qubits.\n If this layout makes the circuit compatible with the coupling_map\n constraints, it will be used.\n The final layout is not guaranteed to be the same, as the transpiler\n may permute qubits through swaps or other means.\n\n Multiple formats are supported:\n a. Layout instance\n\n b. dict\n virtual to physical:\n {qr[0]: 0,\n qr[1]: 3,\n qr[2]: 5}\n\n physical to virtual:\n {0: qr[0],\n 3: qr[1],\n 5: qr[2]}\n\n c. 
list\n virtual to physical:\n [0, 3, 5] # virtual qubits are ordered (in addition to named)\n\n physical to virtual:\n [qr[0], None, None, qr[1], None, qr[2]]\n\n seed_transpiler (int):\n Sets random seed for the stochastic parts of the transpiler\n\n optimization_level (int):\n How much optimization to perform on the circuits.\n Higher levels generate more optimized circuits,\n at the expense of longer transpilation time.\n 0: no optimization\n 1: light optimization\n 2: heavy optimization\n\n pass_manager (PassManager):\n The pass manager to use during transpilation. If this arg is present,\n auto-selection of pass manager based on the transpile options will be\n turned off and this pass manager will be used directly.\n\n qobj_id (str):\n String identifier to annotate the Qobj\n\n qobj_header (QobjHeader or dict):\n User input that will be inserted in Qobj header, and will also be\n copied to the corresponding Result header. Headers do not affect the run.\n\n shots (int):\n Number of repetitions of each circuit, for sampling. Default: 1024\n\n memory (bool):\n If True, per-shot measurement bitstrings are returned as well\n (provided the backend supports it). For OpenPulse jobs, only\n measurement level 2 supports this option. Default: False\n\n max_credits (int):\n Maximum credits to spend on job. 
Default: 10\n\n seed_simulator (int):\n Random seed to control sampling, for when backend is a simulator\n\n default_qubit_los (list):\n List of default qubit lo frequencies\n\n default_meas_los (list):\n List of default meas lo frequencies\n\n schedule_los (None or list[Union[Dict[PulseChannel, float], LoConfig]] or\n Union[Dict[PulseChannel, float], LoConfig]):\n Experiment LO configurations\n\n meas_level (int):\n Set the appropriate level of the measurement output for pulse experiments.\n\n meas_return (str):\n Level of measurement data for the backend to return\n For `meas_level` 0 and 1:\n \"single\" returns information from every shot.\n \"avg\" returns average measurement output (averaged over number of shots).\n\n memory_slots (int):\n Number of classical memory slots used in this job.\n\n memory_slot_size (int):\n Size of each memory slot if the output is Level 0.\n\n rep_time (int): repetition time of the experiment in \u03bcs.\n The delay between experiments will be rep_time.\n Must be from the list provided by the device.\n\n parameter_binds (list[dict{Parameter: Value}]):\n List of Parameter bindings over which the set of experiments will be\n executed. Each list element (bind) should be of the form\n {Parameter1: value1, Parameter2: value2, ...}. All binds will be\n executed across all experiments, e.g. if parameter_binds is a\n length-n list, and there are m experiments, a total of m x n\n experiments will be run (one for each experiment/bind pair).\n\n seed (int):\n DEPRECATED in 0.8: use ``seed_simulator`` kwarg instead\n\n seed_mapper (int):\n DEPRECATED in 0.8: use ``seed_transpiler`` kwarg instead\n\n config (dict):\n DEPRECATED in 0.8: use run_config instead\n\n circuits (QuantumCircuit or list[QuantumCircuit]):\n DEPRECATED in 0.8: use ``experiments`` kwarg instead.\n\n run_config (dict):\n Extra arguments used to configure the run (e.g. 
for Aer configurable backends)\n Refer to the backend documentation for details on these arguments\n Note: for now, these keyword arguments will both be copied to the\n Qobj config, and passed to backend.run()\n\n Returns:\n BaseJob: returns job instance derived from BaseJob\n\n Raises:\n QiskitError: if the execution cannot be interpreted as either circuits or schedules"} {"_id": "q_2670", "text": "Return the primary drive channel of this qubit."} {"_id": "q_2671", "text": "Return the primary measure channel of this qubit."} {"_id": "q_2672", "text": "Return the primary acquire channel of this qubit."} {"_id": "q_2673", "text": "Remove the handlers for the 'qiskit' logger."} {"_id": "q_2674", "text": "Create a hinton representation.\n\n Graphical representation of the input array using a 2D city style\n graph (hinton).\n\n Args:\n rho (array): Density matrix\n figsize (tuple): Figure size in pixels."} {"_id": "q_2675", "text": "Return the process fidelity between two quantum channels.\n\n This is given by\n\n F_p(E1, E2) = Tr[S2^dagger.S1]/dim^2\n\n where S1 and S2 are the SuperOp matrices for channels E1 and E2,\n and dim is the dimension of the input/output state space.\n\n Args:\n channel1 (QuantumChannel or matrix): a quantum channel or unitary matrix.\n channel2 (QuantumChannel or matrix): a quantum channel or unitary matrix.\n require_cptp (bool): require input channels to be CPTP [Default: True].\n\n Returns:\n array_like: The process fidelity F_p(channel1, channel2).\n\n Raises:\n QiskitError: if input channels do not have the same dimensions,\n have different input and output dimensions, or are not CPTP with\n `require_cptp=True`."} {"_id": "q_2676", "text": "Set the input text data."} {"_id": "q_2677", "text": "Pop a PLY lexer off the stack."} {"_id": "q_2678", "text": "Iterate over each block and replace it with an equivalent Unitary\n on the same wires."} {"_id": "q_2679", "text": "Return converted `FrameChangeInstruction`.\n\n Args:\n shift(int): Offset 
time.\n instruction (FrameChangeInstruction): frame change instruction.\n Returns:\n dict: Dictionary of required parameters."} {"_id": "q_2680", "text": "Return converted `PulseInstruction`.\n\n Args:\n shift(int): Offset time.\n instruction (PulseInstruction): drive instruction.\n Returns:\n dict: Dictionary of required parameters."} {"_id": "q_2681", "text": "Return converted `Snapshot`.\n\n Args:\n shift(int): Offset time.\n instruction (Snapshot): snapshot instruction.\n Returns:\n dict: Dictionary of required parameters."} {"_id": "q_2682", "text": "Sampler decorator base method.\n\n Samplers are used for converting a continuous function to a discretized pulse.\n\n They operate on a function with the signature:\n `def f(times: np.ndarray, *args, **kwargs) -> np.ndarray`\n Where `times` is a numpy array of floats with length n_times and the output array\n is a complex numpy array with length n_times. The output of the decorator is an\n instance of `FunctionalPulse` with signature:\n `def g(duration: int, *args, **kwargs) -> SamplePulse`\n\n Note that if your continuous pulse function outputs a `complex` scalar rather than a\n `np.ndarray`, you should first vectorize it before applying a sampler.\n\n\n This decorator implements the boilerplate common to all samplers.\n\n Args:\n sample_function: A sampler function to be decorated."} {"_id": "q_2683", "text": "Return the backends matching the specified filtering.\n\n Filter the `backends` list by their `configuration` or `status`\n attributes, or from a boolean callable. 
The criteria for filtering can\n be specified via `**kwargs` or as a callable via `filters`, and the\n backends must fulfill all specified conditions.\n\n Args:\n backends (list[BaseBackend]): list of backends.\n filters (callable): filtering conditions as a callable.\n **kwargs (dict): dict of criteria.\n\n Returns:\n list[BaseBackend]: a list of backend instances matching the\n conditions."} {"_id": "q_2684", "text": "Resolve backend name from a deprecated name or an alias.\n\n A group will be resolved in order of member priorities, depending on\n availability.\n\n Args:\n name (str): name of backend to resolve\n backends (list[BaseBackend]): list of available backends.\n deprecated (dict[str: str]): dict of deprecated names.\n aliased (dict[str: list[str]]): dict of aliased names.\n\n Returns:\n str: resolved name (name of an available backend)\n\n Raises:\n LookupError: if name cannot be resolved through regular available\n names, nor deprecated, nor alias names."} {"_id": "q_2685", "text": "Convert an observable in matrix form to dictionary form.\n\n Takes in a diagonal observable as a matrix and converts it to a dictionary\n form. Can also handle a sorted list of the diagonal elements.\n\n Args:\n matrix_observable (list): The observable to be converted to dictionary\n form. 
Can be a matrix or just an ordered list of observed values\n\n Returns:\n Dict: A dictionary with all observable states as keys, and corresponding\n values being the observed value for that state"} {"_id": "q_2686", "text": "Verify each expression in a list."} {"_id": "q_2687", "text": "Verify a user defined gate call."} {"_id": "q_2688", "text": "Parse some data."} {"_id": "q_2689", "text": "Parser runner.\n\n To use this module stand-alone."} {"_id": "q_2690", "text": "Return a basis state ndarray.\n\n Args:\n str_state (string): a string representing the state.\n num (int): the number of qubits\n Returns:\n ndarray: state(2**num) a quantum state in the given basis state.\n Raises:\n QiskitError: if the dimensions are wrong"} {"_id": "q_2691", "text": "Maps a pure state to a state matrix.\n\n Args:\n state (ndarray): a pure state vector\n flatten (bool): determine whether to return the state matrix\n column-stacked (vectorized)\n Returns:\n ndarray: state_mat(2**num, 2**num) if flatten is false\n ndarray: state_mat(4**num) if flatten is true, stacked column by column"} {"_id": "q_2692", "text": "Calculate the purity of a quantum state.\n\n Args:\n state (ndarray): a quantum state\n Returns:\n float: purity."} {"_id": "q_2693", "text": "Run the pass on the DAG, and write the discovered commutation relations\n into the property_set."} {"_id": "q_2694", "text": "Creates a backend widget."} {"_id": "q_2695", "text": "Updates the monitor info\n Called from another thread."} {"_id": "q_2696", "text": "Get the number and size of unique registers from bit_labels list.\n\n Args:\n bit_labels (list): this list is of the form::\n\n [['reg1', 0], ['reg1', 1], ['reg2', 0]]\n\n which indicates a register named \"reg1\" of size 2\n and a register named \"reg2\" of size 1. 
This is the\n format of classical and quantum bit labels in qobj\n header.\n\n Yields:\n tuple: iterator of register_name:size pairs."} {"_id": "q_2697", "text": "Get depth information for the circuit.\n\n Returns:\n int: number of columns in the circuit\n int: total size of columns in the circuit"} {"_id": "q_2698", "text": "Get height, width & scale attributes for the beamer page.\n\n Returns:\n tuple: (height, width, scale) desirable page attributes"} {"_id": "q_2699", "text": "Loads the QObj schema for use in future validations.\n\n Caches schema in _SCHEMAS module attribute.\n\n Args:\n file_path(str): Path to schema.\n name(str): Given name for schema. Defaults to file_path filename\n without schema.\n Return:\n schema(dict): Loaded schema."} {"_id": "q_2700", "text": "Generate validator for JSON schema.\n\n Args:\n name (str): Name for validator. Will be validator key in\n `_VALIDATORS` dict.\n schema (dict): JSON schema `dict`. If not provided searches for schema\n in `_SCHEMAS`.\n check_schema (bool): Verify schema is valid.\n validator_class (jsonschema.IValidator): jsonschema IValidator instance.\n Default behavior is to determine this from the schema `$schema`\n field.\n **validator_kwargs (dict): Additional keyword arguments for validator.\n\n Return:\n jsonschema.IValidator: Validator for JSON schema.\n\n Raises:\n SchemaValidationError: Raised if validation fails."} {"_id": "q_2701", "text": "Load all default schemas into `_SCHEMAS`."} {"_id": "q_2702", "text": "Return a cascading explanation of the validation error.\n\n Returns a cascading explanation of the validation error in the form of::\n\n <validator> failed @ <location> because of:\n <validator> failed @ <location> because of:\n ...\n <validator> failed @ <location> because of:\n ...\n ...\n\n For example::\n\n 'oneOf' failed @ '' because of:\n 'required' failed @ '.config' because of:\n 'meas_level' is a required property\n\n Meaning the validator 'oneOf' failed while validating the whole object\n because of the validator 'required' failing while validating the 
property\n 'config' because its 'meas_level' field is missing.\n\n The cascade repeats the format \"<validator> failed @ <location> because of\"\n until there are no deeper causes. In this case, the string representation\n of the error is shown.\n\n Args:\n err (jsonschema.ValidationError): the instance to explain.\n level (int): starting level of indentation for the cascade of\n explanations.\n\n Return:\n str: a formatted string with the explanation of the error."} {"_id": "q_2703", "text": "Majority gate."} {"_id": "q_2704", "text": "Unmajority gate."} {"_id": "q_2705", "text": "Convert QuantumCircuit to LaTeX string.\n\n Args:\n circuit (QuantumCircuit): input circuit\n scale (float): image scaling\n filename (str): optional filename to write latex\n style (dict or str): dictionary of style or file name of style file\n reverse_bits (bool): When set to True reverse the bit order inside\n registers for the output visualization.\n plot_barriers (bool): Enable/disable drawing barriers in the output\n circuit. Defaults to True.\n justify (str) : `left`, `right` or `none`. Defaults to `left`. Says how\n the circuit should be justified.\n\n Returns:\n str: Latex string appropriate for writing to file."} {"_id": "q_2706", "text": "Draw a quantum circuit based on matplotlib.\n If `%matplotlib inline` is invoked in a Jupyter notebook, it visualizes a circuit inline.\n We recommend `%config InlineBackend.figure_format = 'svg'` for the inline visualization.\n\n Args:\n circuit (QuantumCircuit): a quantum circuit\n scale (float): scaling factor\n filename (str): file path to save image to\n style (dict or str): dictionary of style or file name of style file\n reverse_bits (bool): When set to True reverse the bit order inside\n registers for the output visualization.\n plot_barriers (bool): Enable/disable drawing barriers in the output\n circuit. Defaults to True.\n justify (str) : `left`, `right` or `none`. Defaults to `left`. 
Says how\n the circuit should be justified.\n\n\n Returns:\n matplotlib.figure: a matplotlib figure object for the circuit diagram"} {"_id": "q_2707", "text": "Return a random dim x dim unitary Operator from the Haar measure.\n\n Args:\n dim (int): the dim of the state space.\n seed (int): Optional. To set a random seed.\n\n Returns:\n Operator: (dim, dim) unitary operator.\n\n Raises:\n QiskitError: if dim is not a positive power of 2."} {"_id": "q_2708", "text": "Generate a random density matrix from the Hilbert-Schmidt metric.\n\n Args:\n N (int): the length of the density matrix.\n rank (int or None): the rank of the density matrix. The default\n value is full-rank.\n seed (int): Optional. To set a random seed.\n Returns:\n ndarray: rho (N,N) a density matrix."} {"_id": "q_2709", "text": "Generate a random density matrix from the Bures metric.\n\n Args:\n N (int): the length of the density matrix.\n rank (int or None): the rank of the density matrix. The default\n value is full-rank.\n seed (int): Optional. 
To set a random seed.\n Returns:\n ndarray: rho (N,N) a density matrix."} {"_id": "q_2710", "text": "Return a list of custom gate names in this gate body."} {"_id": "q_2711", "text": "Return the compose of a QuantumChannel with itself n times.\n\n Args:\n n (int): compute the matrix power of the superoperator matrix.\n\n Returns:\n SuperOp: the n-times composition channel as a SuperOp object.\n\n Raises:\n QiskitError: if the input and output dimensions of the\n QuantumChannel are not equal, or the power is not an integer."} {"_id": "q_2712", "text": "Convert a list of circuits into a qobj.\n\n Args:\n circuits (list[QuantumCircuits] or QuantumCircuit): circuits to compile\n qobj_header (QobjHeader): header to pass to the results\n qobj_id (int): TODO: delete after qiskit-terra 0.8\n backend_name (str): TODO: delete after qiskit-terra 0.8\n config (dict): TODO: delete after qiskit-terra 0.8\n shots (int): TODO: delete after qiskit-terra 0.8\n max_credits (int): TODO: delete after qiskit-terra 0.8\n basis_gates (str): TODO: delete after qiskit-terra 0.8\n coupling_map (list): TODO: delete after qiskit-terra 0.8\n seed (int): TODO: delete after qiskit-terra 0.8\n memory (bool): TODO: delete after qiskit-terra 0.8\n\n Returns:\n Qobj: the Qobj to be run on the backends"} {"_id": "q_2713", "text": "Expand 3+ qubit gates using their decomposition rules.\n\n Args:\n dag(DAGCircuit): input dag\n Returns:\n DAGCircuit: output dag with maximum node degrees of 2\n Raises:\n QiskitError: if a 3q+ gate is not decomposable"} {"_id": "q_2714", "text": "Calculate a subcircuit that implements this unitary."} {"_id": "q_2715", "text": "Validate if the value is of the type of the schema's model.\n\n Assumes the nested schema is a ``BaseSchema``."} {"_id": "q_2716", "text": "Validate if it's a list of valid item-field values.\n\n Check if each element in the list can be validated by the item-field\n passed during construction."} {"_id": "q_2717", "text": "Set the absolute tolerance 
parameter for float comparisons."} {"_id": "q_2718", "text": "Set the relative tolerance parameter for float comparisons."} {"_id": "q_2719", "text": "Return tuple of input dimension for specified subsystems."} {"_id": "q_2720", "text": "Make a copy of current operator."} {"_id": "q_2721", "text": "Return the compose of an operator with itself n times.\n\n Args:\n n (int): the number of times to compose with self (n>0).\n\n Returns:\n BaseOperator: the n-times composed operator.\n\n Raises:\n QiskitError: if the input and output dimensions of the operator\n are not equal, or the power is not a positive integer."} {"_id": "q_2722", "text": "Perform a contraction using Numpy.einsum\n\n Args:\n tensor (np.array): a vector or matrix reshaped to a rank-N tensor.\n mat (np.array): a matrix reshaped to a rank-2M tensor.\n indices (list): tensor indices to contract with mat.\n shift (int): shift for indices of tensor to contract [Default: 0].\n right_mul (bool): if True right multiply tensor by mat\n (else left multiply) [Default: False].\n\n Returns:\n Numpy.ndarray: the matrix multiplied rank-N tensor.\n\n Raises:\n QiskitError: if mat is not an even rank tensor."} {"_id": "q_2723", "text": "Override ``_deserialize`` for customizing the exception raised."} {"_id": "q_2724", "text": "Check if at least one of the possible choices validates the value.\n\n Possible choices are assumed to be ``ModelTypeValidator`` fields."} {"_id": "q_2725", "text": "Return the state fidelity between two quantum states.\n\n Either input may be a state vector, or a density matrix. 
The state\n fidelity (F) for two density matrices is defined as::\n\n F(rho1, rho2) = Tr[sqrt(sqrt(rho1).rho2.sqrt(rho1))] ^ 2\n\n For a pure state and mixed state the fidelity is given by::\n\n F(|psi1>, rho2) = <psi1|rho2|psi1>\n\n For two pure states the fidelity is given by::\n\n F(|psi1>, |psi2>) = |<psi1|psi2>|^2\n\n Args:\n state1 (array_like): a quantum state vector or density matrix.\n state2 (array_like): a quantum state vector or density matrix.\n\n Returns:\n array_like: The state fidelity F(state1, state2)."} {"_id": "q_2726", "text": "Apply real scalar function to singular values of a matrix.\n\n Args:\n a (array_like): (N, N) Matrix at which to evaluate the function.\n func (callable): Callable object that evaluates a scalar function f.\n\n Returns:\n ndarray: funm (N, N) Value of the matrix function specified by func\n evaluated at `A`."} {"_id": "q_2727", "text": "Special case. Return self."} {"_id": "q_2728", "text": "Set snapshot label to name\n\n Args:\n name (str or None): label to assign unitary\n\n Raises:\n TypeError: name is not string or None."} {"_id": "q_2729", "text": "Convert to a Kraus or UnitaryGate circuit instruction.\n\n If the channel is unitary it will be added as a unitary gate,\n otherwise it will be added as a kraus simulator instruction.\n\n Returns:\n Instruction: A kraus instruction for the channel.\n\n Raises:\n QiskitError: if input data is not an N-qubit CPTP quantum channel."} {"_id": "q_2730", "text": "Convert input into a QuantumChannel subclass object or Operator object"} {"_id": "q_2731", "text": "Alternative constructor for a TensorFlowModel that\n accepts a `tf.keras.Model` instance.\n\n Parameters\n ----------\n model : `tensorflow.keras.Model`\n A `tensorflow.keras.Model` that accepts a single input tensor\n and returns a single output tensor representing logits.\n bounds : tuple\n Tuple of lower and upper bound for the pixel values, usually\n (0, 1) or (0, 255).\n input_shape : tuple\n The shape of a single input, e.g. 
(28, 28, 1) for MNIST.\n If None, tries to get the shape from the model's\n input_shape attribute.\n channel_axis : int\n The index of the axis that represents color channels.\n preprocessing: 2-element tuple with floats or numpy arrays\n Elementwise preprocessing of input; we first subtract the first\n element of preprocessing from the input and then divide the input\n by the second element."} {"_id": "q_2732", "text": "Interface to model.channel_axis for attacks.\n\n Parameters\n ----------\n batch : bool\n Controls whether the index of the axis for a batch of images\n (4 dimensions) or a single image (3 dimensions) should be returned."} {"_id": "q_2733", "text": "Returns True if _backward and _forward_backward can be called\n by an attack, False otherwise."} {"_id": "q_2734", "text": "Interface to model.predictions for attacks.\n\n Parameters\n ----------\n image : `numpy.ndarray`\n Single input with shape as expected by the model\n (without the batch dimension).\n strict : bool\n Controls if the bounds for the pixel values should be checked."} {"_id": "q_2735", "text": "Interface to model.batch_predictions for attacks.\n\n Parameters\n ----------\n images : `numpy.ndarray`\n Batch of inputs with shape as expected by the model.\n greedy : bool\n Whether the first adversarial should be returned.\n strict : bool\n Controls if the bounds for the pixel values should be checked."} {"_id": "q_2736", "text": "Interface to model.gradient for attacks.\n\n Parameters\n ----------\n image : `numpy.ndarray`\n Single input with shape as expected by the model\n (without the batch dimension).\n Defaults to the original image.\n label : int\n Label used to calculate the loss that is differentiated.\n Defaults to the original label.\n strict : bool\n Controls if the bounds for the pixel values should be checked."} {"_id": "q_2737", "text": "Interface to model.predictions_and_gradient for attacks.\n\n Parameters\n ----------\n image : `numpy.ndarray`\n Single input with 
shape as expected by the model\n (without the batch dimension).\n Defaults to the original image.\n label : int\n Label used to calculate the loss that is differentiated.\n Defaults to the original label.\n strict : bool\n Controls if the bounds for the pixel values should be checked."} {"_id": "q_2738", "text": "Interface to model.backward for attacks.\n\n Parameters\n ----------\n gradient : `numpy.ndarray`\n Gradient of some loss w.r.t. the logits.\n image : `numpy.ndarray`\n Single input with shape as expected by the model\n (without the batch dimension).\n\n Returns\n -------\n gradient : `numpy.ndarray`\n The gradient w.r.t the image.\n\n See Also\n --------\n :meth:`gradient`"} {"_id": "q_2739", "text": "Returns the index of the largest logit, ignoring the class that\n is passed as `exclude`."} {"_id": "q_2740", "text": "Concatenates the names of the given criteria in alphabetical order.\n\n If a sub-criterion is itself a combined criterion, its name is\n first split into the individual names and the names of the\n sub-sub criteria is used instead of the name of the sub-criterion.\n This is done recursively to ensure that the order and the hierarchy\n of the criteria does not influence the name.\n\n Returns\n -------\n str\n The alphabetically sorted names of the sub-criteria concatenated\n using double underscores between them."} {"_id": "q_2741", "text": "Calculates the cross-entropy.\n\n Parameters\n ----------\n logits : array_like\n The logits predicted by the model.\n label : int\n The label describing the target distribution.\n\n Returns\n -------\n float\n The cross-entropy between softmax(logits) and onehot(label)."} {"_id": "q_2742", "text": "Convenience method that calculates predictions for a single image.\n\n Parameters\n ----------\n image : `numpy.ndarray`\n Single input with shape as expected by the model\n (without the batch dimension).\n\n Returns\n -------\n `numpy.ndarray`\n Vector of predictions (logits, i.e. 
before the softmax) with\n shape (number of classes,).\n\n See Also\n --------\n :meth:`batch_predictions`"} {"_id": "q_2743", "text": "Clone a remote git repository to a local path.\n\n :param git_uri: the URI to the git repository to be cloned\n :return: the generated local path where the repository has been cloned to"} {"_id": "q_2744", "text": "Create Graphene Enum for sorting a SQLAlchemy class query\n\n Parameters\n - cls : Sqlalchemy model class\n Model used to create the sort enumerator\n - name : str, optional, default None\n Name to use for the enumerator. If not provided it will be set to `cls.__name__ + 'SortEnum'`\n - symbol_name : function, optional, default `_symbol_name`\n Function which takes the column name and a boolean indicating if the sort direction is ascending,\n and returns the symbol name for the current column and sort direction.\n The default function will create, for a column named 'foo', the symbols 'foo_asc' and 'foo_desc'\n\n Returns\n - Enum\n The Graphene enumerator"} {"_id": "q_2745", "text": "Monkey patching _strptime to avoid problems related with non-english\n locale changes on the system.\n\n For example, if system's locale is set to fr_FR. Parser won't recognize\n any date since all languages are translated to english dates."} {"_id": "q_2746", "text": "Get an ordered mapping with locale codes as keys\n and corresponding locale instances as values.\n\n :param languages:\n A list of language codes, e.g. ['en', 'es', 'zh-Hant'].\n If locales are not given, languages and region are\n used to construct locales to load.\n :type languages: list\n\n :param locales:\n A list of codes of locales which are to be loaded,\n e.g. ['fr-PF', 'qu-EC', 'af-NA']\n :type locales: list\n\n :param region:\n A region code, e.g. 
'IN', '001', 'NE'.\n If locales are not given, languages and region are\n used to construct locales to load.\n :type region: str|unicode\n\n :param use_given_order:\n If True, the returned mapping is ordered in the order locales are given.\n :type use_given_order: bool\n\n :param allow_conflicting_locales:\n if True, locales with same language and different region can be loaded.\n :type allow_conflicting_locales: bool\n\n :return: ordered locale code to locale instance mapping"} {"_id": "q_2747", "text": "Yield locale instances.\n\n :param languages:\n A list of language codes, e.g. ['en', 'es', 'zh-Hant'].\n If locales are not given, languages and region are\n used to construct locales to load.\n :type languages: list\n\n :param locales:\n A list of codes of locales which are to be loaded,\n e.g. ['fr-PF', 'qu-EC', 'af-NA']\n :type locales: list\n\n :param region:\n A region code, e.g. 'IN', '001', 'NE'.\n If locales are not given, languages and region are\n used to construct locales to load.\n :type region: str|unicode\n\n :param use_given_order:\n If True, the returned mapping is ordered in the order locales are given.\n :type use_given_order: bool\n\n :param allow_conflicting_locales:\n if True, locales with same language and different region can be loaded.\n :type allow_conflicting_locales: bool\n\n :yield: locale instances"} {"_id": "q_2748", "text": "Check if tokens are valid tokens for the locale.\n\n :param tokens:\n a list of string or unicode tokens.\n :type tokens: list\n\n :return: True if tokens are valid, False otherwise."} {"_id": "q_2749", "text": "Attempts to parse time part of date strings like '1 day ago, 2 PM'"} {"_id": "q_2750", "text": "Check if the locale is applicable to translate date string.\n\n :param date_string:\n A string representing date and/or time in a recognizably valid format.\n :type date_string: str|unicode\n\n :param strip_timezone:\n If True, timezone is stripped from date string.\n :type strip_timezone: 
bool\n\n :return: boolean value representing if the locale is applicable for the date string or not."} {"_id": "q_2751", "text": "Parse with formats and return a dictionary with 'period' and 'obj_date'.\n\n :returns: :class:`datetime.datetime`, dict or None"} {"_id": "q_2752", "text": "return ammo generator"} {"_id": "q_2753", "text": "translate http code to net code. if assertion failed, set net code to 314"} {"_id": "q_2754", "text": "Generate phantom tool run config"} {"_id": "q_2755", "text": "get merged info about phantom conf"} {"_id": "q_2756", "text": "compose benchmark block"} {"_id": "q_2757", "text": "This function polls stdout and stderr streams and writes their contents\n to log"} {"_id": "q_2758", "text": "helper for above functions"} {"_id": "q_2759", "text": "Read stepper info from json"} {"_id": "q_2760", "text": "Write stepper info to json"} {"_id": "q_2761", "text": "Create Load Plan as defined in schedule. Publish info about its duration."} {"_id": "q_2762", "text": "Return rps for second t"} {"_id": "q_2763", "text": "Execute and check exit code"} {"_id": "q_2764", "text": "The reason why we have two separate methods for monitoring\n and aggregates is a strong difference in incoming data."} {"_id": "q_2765", "text": "x\n Make a set of points for `this` label\n\n overall_quantiles, overall_meta, net_codes, proto_codes, histograms"} {"_id": "q_2766", "text": "A feeder that runs in distinct thread in main process."} {"_id": "q_2767", "text": "Set up logging"} {"_id": "q_2768", "text": "override config options with user specified options"} {"_id": "q_2769", "text": "call shutdown routines"} {"_id": "q_2770", "text": "Collect data, cache it and send to listeners"} {"_id": "q_2771", "text": "Returns a marker function of the requested marker_type\n\n >>> marker = get_marker('uniq')(__test_missile)\n >>> type(marker)\n \n >>> len(marker)\n 32\n\n >>> get_marker('uri')(__test_missile)\n '_example_search_hello_help_us'\n\n >>> marker = 
get_marker('non-existent')(__test_missile)\n Traceback (most recent call last):\n ...\n NotImplementedError: No such marker: \"non-existent\"\n\n >>> get_marker('3')(__test_missile)\n '_example_search_hello'\n\n >>> marker = get_marker('3', True)\n >>> marker(__test_missile)\n '_example_search_hello#0'\n >>> marker(__test_missile)\n '_example_search_hello#1'"} {"_id": "q_2772", "text": "Parse duration string, such as '3h2m3s' into milliseconds\n\n >>> parse_duration('3h2m3s')\n 10923000\n\n >>> parse_duration('0.3s')\n 300\n\n >>> parse_duration('5')\n 5000"} {"_id": "q_2773", "text": "Start remote agent"} {"_id": "q_2774", "text": "Searching for line in jmeter.log such as\n Waiting for possible shutdown message on port 4445"} {"_id": "q_2775", "text": "Graceful termination of running process"} {"_id": "q_2776", "text": "Parse lines and return stats"} {"_id": "q_2777", "text": "instantiate criterion from config string"} {"_id": "q_2778", "text": "Prepare config data."} {"_id": "q_2779", "text": "raise exception on disk space exceeded"} {"_id": "q_2780", "text": "raise exception on RAM exceeded"} {"_id": "q_2781", "text": "Gets next line for right panel"} {"_id": "q_2782", "text": "Cut tuple of line chunks according to its visible length"} {"_id": "q_2783", "text": "Right-pad lines of block to equal width"} {"_id": "q_2784", "text": "Calculate visible length of string"} {"_id": "q_2785", "text": "Creates load plan timestamps generator\n\n >>> from util import take\n\n >>> take(7, LoadPlanBuilder().ramp(5, 4000).create())\n [0, 1000, 2000, 3000, 4000, 0, 0]\n\n >>> take(7, create(['ramp(5, 4s)']))\n [0, 1000, 2000, 3000, 4000, 0, 0]\n\n >>> take(12, create(['ramp(5, 4s)', 'wait(5s)', 'ramp(5,4s)']))\n [0, 1000, 2000, 3000, 4000, 9000, 10000, 11000, 12000, 13000, 0, 0]\n\n >>> take(7, create(['wait(5s)', 'ramp(5, 0)']))\n [5000, 5000, 5000, 5000, 5000, 0, 0]\n\n >>> take(7, create([]))\n [0, 0, 0, 0, 0, 0, 0]\n\n >>> take(12, create(['line(1, 9, 4s)']))\n [0, 500, 
1000, 1500, 2000, 2500, 3000, 3500, 4000, 0, 0, 0]\n\n >>> take(12, create(['const(3, 5s)', 'line(7, 11, 2s)']))\n [0, 0, 0, 5000, 5000, 5000, 5000, 5500, 6000, 6500, 7000, 0]\n\n >>> take(12, create(['step(2, 10, 2, 3s)']))\n [0, 0, 3000, 3000, 6000, 6000, 9000, 9000, 12000, 12000, 0, 0]\n\n >>> take(12, LoadPlanBuilder().const(3, 1000).line(5, 10, 5000).steps)\n [(3, 1), (5, 1), (6, 1), (7, 1), (8, 1), (9, 1), (10, 1)]\n\n >>> take(12, LoadPlanBuilder().stairway(100, 950, 100, 30000).steps)\n [(100, 30), (200, 30), (300, 30), (400, 30), (500, 30), (600, 30), (700, 30), (800, 30), (900, 30), (950, 30)]\n\n >>> LoadPlanBuilder().stairway(100, 950, 100, 30000).instances\n 950\n\n >>> LoadPlanBuilder().const(3, 1000).line(5, 10, 5000).instances\n 10\n\n >>> LoadPlanBuilder().line(1, 100, 60000).instances\n 100"} {"_id": "q_2786", "text": "format level str"} {"_id": "q_2787", "text": "add right panel widget"} {"_id": "q_2788", "text": "Send request to writer service."} {"_id": "q_2789", "text": "Tells core to take plugin options and instantiate plugin classes"} {"_id": "q_2790", "text": "Retrieve a plugin of desired class, KeyError raised otherwise"} {"_id": "q_2791", "text": "Retrieve a list of plugins of desired class, KeyError raised otherwise"} {"_id": "q_2792", "text": "Move or copy single file to artifacts dir"} {"_id": "q_2793", "text": "Add file to be stored as result artifact on post-process phase"} {"_id": "q_2794", "text": "Generate temp file name in artifacts base dir\n and close temp file handle"} {"_id": "q_2795", "text": "Read configs set into storage"} {"_id": "q_2796", "text": "Flush current stat to file"} {"_id": "q_2797", "text": "return sections with specified prefix"} {"_id": "q_2798", "text": "Return all items found in this chunk"} {"_id": "q_2799", "text": "returns info object"} {"_id": "q_2800", "text": "Prepare for monitoring - install agents etc"} {"_id": "q_2801", "text": "Poll agents for data"} {"_id": "q_2802", "text": "sends pending data 
set to listeners"} {"_id": "q_2803", "text": "decode agents jsons, count diffs"} {"_id": "q_2804", "text": "Perform one request, possibly raising RetryException in the case\n the response is 429. Otherwise, if error text contain \"code\" string,\n then it decodes to json object and returns APIError.\n Returns the body json in the 200 status."} {"_id": "q_2805", "text": "Request a new order"} {"_id": "q_2806", "text": "Get an order"} {"_id": "q_2807", "text": "Result may be a python dictionary, array or a primitive type\n that can be converted to JSON for writing back the result."} {"_id": "q_2808", "text": "Writes the message as part of the response and sets 404 status."} {"_id": "q_2809", "text": "Makes the base dict for the response.\n The status is the string value for\n the key \"status\" of the response. This\n should be \"success\" or \"failure\"."} {"_id": "q_2810", "text": "Makes the python dict corresponding to the\n JSON that needs to be sent for a successful\n response. Result is the actual payload\n that gets sent."} {"_id": "q_2811", "text": "Helper function to get request argument.\n Raises exception if argument is missing.\n Returns the cluster argument."} {"_id": "q_2812", "text": "Helper function to get request argument.\n Raises exception if argument is missing.\n Returns the role argument."} {"_id": "q_2813", "text": "Helper function to get request argument.\n Raises exception if argument is missing.\n Returns the environ argument."} {"_id": "q_2814", "text": "Helper function to get topology argument.\n Raises exception if argument is missing.\n Returns the topology argument."} {"_id": "q_2815", "text": "Helper function to get starttime argument.\n Raises exception if argument is missing.\n Returns the starttime argument."} {"_id": "q_2816", "text": "Helper function to get endtime argument.\n Raises exception if argument is missing.\n Returns the endtime argument."} {"_id": "q_2817", "text": "Helper function to get length argument.\n Raises 
exception if argument is missing.\n Returns the length argument."} {"_id": "q_2818", "text": "Helper function to get metricname arguments.\n Notice that it is get_argument\"s\" variation, which means that this can be repeated.\n Raises exception if argument is missing.\n Returns a list of metricname arguments"} {"_id": "q_2819", "text": "Tries to connect to the Heron Server\n\n ``loop()`` method needs to be called after this."} {"_id": "q_2820", "text": "Registers protobuf message builders that this client wants to receive\n\n :param msg_builder: callable to create a protobuf message that this client wants to receive"} {"_id": "q_2821", "text": "This will extract heron directory from .pex file.\n\n For example,\n when __file__ is '/Users/heron-user/bin/heron/heron/tools/common/src/python/utils/config.pyc', and\n its real path is '/Users/heron-user/.heron/bin/heron/tools/common/src/python/utils/config.pyc',\n the internal variable ``path`` would be '/Users/heron-user/.heron', which is the heron directory\n\n This means the variable `go_above_dirs` below is 9.\n\n :return: root location of the .pex file"} {"_id": "q_2822", "text": "if role is not provided, supply userid\n if environ is not provided, supply 'default'"} {"_id": "q_2823", "text": "Parse the command line for overriding the defaults and\n create an override file."} {"_id": "q_2824", "text": "Get the path of java executable"} {"_id": "q_2825", "text": "Check if the release.yaml file exists"} {"_id": "q_2826", "text": "Print version from release.yaml\n\n :param zipped_pex: True if the PEX file is built with flag `zip_safe=False'."} {"_id": "q_2827", "text": "Returns the UUID with which the watch is\n registered. 
This UUID can be used to unregister\n the watch.\n Returns None if watch could not be registered.\n\n The argument 'callback' must be a function that takes\n exactly one argument, the topology on which\n the watch was triggered.\n Note that the watch will be unregistered in case\n it raises any Exception the first time.\n\n This callback is also called at the time\n of registration."} {"_id": "q_2828", "text": "Unregister the watch with the given UUID."} {"_id": "q_2829", "text": "Call all the callbacks.\n If any callback raises an Exception,\n unregister the corresponding watch."} {"_id": "q_2830", "text": "set physical plan"} {"_id": "q_2831", "text": "set packing plan"} {"_id": "q_2832", "text": "set execution state"} {"_id": "q_2833", "text": "Number of spouts + bolts"} {"_id": "q_2834", "text": "Get the current state of this topology.\n The state values are from the topology.proto\n RUNNING = 1, PAUSED = 2, KILLED = 3\n if the state is None \"Unknown\" is returned."} {"_id": "q_2835", "text": "Sync the topologies with the statemgrs."} {"_id": "q_2836", "text": "Returns all the topologies for a given state manager."} {"_id": "q_2837", "text": "Returns the representation of execution state that will\n be returned from Tracker."} {"_id": "q_2838", "text": "Returns the representation of scheduler location that will\n be returned from Tracker."} {"_id": "q_2839", "text": "Returns the representation of tmaster that will\n be returned from Tracker."} {"_id": "q_2840", "text": "validate extra link"} {"_id": "q_2841", "text": "Emits a new tuple from this Spout\n\n It is compatible with StreamParse API.\n\n :type tup: list or tuple\n :param tup: the new output Tuple to send from this spout,\n should contain only serializable data.\n :type tup_id: str or object\n :param tup_id: the ID for the Tuple. 
Leave this blank for an unreliable emit.\n (Same as messageId in Java)\n :type stream: str\n :param stream: the ID of the stream this Tuple should be emitted to.\n Leave empty to emit to the default stream.\n :type direct_task: int\n :param direct_task: the task to send the Tuple to if performing a direct emit.\n :type need_task_ids: bool\n :param need_task_ids: indicate whether or not you would like the task IDs the Tuple was emitted."} {"_id": "q_2842", "text": "normalize raw logical plan info to table"} {"_id": "q_2843", "text": "filter to keep bolts"} {"_id": "q_2844", "text": "get physical plan"} {"_id": "q_2845", "text": "create physical plan"} {"_id": "q_2846", "text": "get execution state"} {"_id": "q_2847", "text": "Helper function to get execution state with\n a callback. The future watch is placed\n only if isWatching is True."} {"_id": "q_2848", "text": "Deserializes Java primitive data and objects serialized by ObjectOutputStream\n from a file-like object."} {"_id": "q_2849", "text": "copy an object"} {"_id": "q_2850", "text": "Fetches Instance jstack from heron-shell."} {"_id": "q_2851", "text": "Create the parse for the update command"} {"_id": "q_2852", "text": "flatten extra args"} {"_id": "q_2853", "text": "Checks if a given gtype is sane"} {"_id": "q_2854", "text": "Custom grouping from a given implementation of ICustomGrouping\n\n :param customgrouper: The ICustomGrouping implemention to use"} {"_id": "q_2855", "text": "Update the value of CountMetric or MultiCountMetric\n\n :type name: str\n :param name: name of the registered metric to be updated.\n :type incr_by: int\n :param incr_by: specifies how much to increment. Default is 1.\n :type key: str or None\n :param key: specifies a key for MultiCountMetric. 
Needs to be `None` for updating CountMetric."} {"_id": "q_2856", "text": "Update the value of ReducedMetric or MultiReducedMetric\n\n :type name: str\n :param name: name of the registered metric to be updated.\n :param value: specifies a value to be reduced.\n :type key: str or None\n :param key: specifies a key for MultiReducedMetric. Needs to be `None` for updating\n ReducedMetric."} {"_id": "q_2857", "text": "Apply updates to the execute metrics"} {"_id": "q_2858", "text": "Apply updates to the deserialization metrics"} {"_id": "q_2859", "text": "Registers a given metric\n\n :param name: name of the metric\n :param metric: IMetric object to be registered\n :param time_bucket_in_sec: time interval for update to the metrics manager"} {"_id": "q_2860", "text": "Offer to the buffer\n\n It is a non-blocking operation, and when the buffer is full, it raises Queue.Full exception"} {"_id": "q_2861", "text": "Parse version to major, minor, patch, pre-release, build parts."} {"_id": "q_2862", "text": "Returns all the file state_managers."} {"_id": "q_2863", "text": "Increments the value of a given key by ``to_add``"} {"_id": "q_2864", "text": "Adds a new key to this metric"} {"_id": "q_2865", "text": "Add a new data tuple to the currently buffered set of tuples"} {"_id": "q_2866", "text": "Add the checkpoint state message to be sent back the stmgr\n\n :param ckpt_id: The id of the checkpoint\n :ckpt_state: The checkpoint state"} {"_id": "q_2867", "text": "Check if an entry in the class path exists as either a directory or a file"} {"_id": "q_2868", "text": "Given a java classpath, check whether the path entries are valid or not"} {"_id": "q_2869", "text": "Get a list of paths to included dependencies in the specified pex file\n\n Note that dependencies are located under `.deps` directory"} {"_id": "q_2870", "text": "Loads pex file and its dependencies to the current python path"} {"_id": "q_2871", "text": "Resolves duplicate package suffix problems\n\n When dynamically 
loading a pex file and a corresponding python class (bolt/spout/topology),\n if the top level package in which to-be-loaded classes reside is named 'heron', the path conflicts\n with this Heron Instance pex package (heron.instance.src.python...), making the Python\n interpreter unable to find the target class in a given pex file.\n This function resolves this issue by individually loading packages with suffix `heron` to\n avoid this issue.\n\n However, if a dependent module/class that is not directly specified under ``class_path``\n and has conflicts with other native heron packages, there is a possibility that\n such a class/module might not be imported correctly. For example, if a given ``class_path`` was\n ``heron.common.src.module.Class``, but it has a dependent module (such as by import statement),\n ``heron.common.src.python.dep_module.DepClass`` for example, pex_loader does not guarantee that\n ``DepClass` is imported correctly. This is because ``heron.common.src.python.dep_module`` is not\n explicitly added to sys.path, while ``heron.common.src.python`` module exists as the native heron\n package, from which ``dep_module`` cannot be found, so Python interpreter may raise ImportError.\n\n The best way to avoid this issue is NOT to dynamically load a pex file whose top level package\n name is ``heron``. Note that this method is included because some of the example topologies and\n tests have to have a pex with its top level package name of ``heron``."} {"_id": "q_2872", "text": "Builds the topology and returns the builder"} {"_id": "q_2873", "text": "For each kvp in config, do wildcard substitution on the values"} {"_id": "q_2874", "text": "set default time"} {"_id": "q_2875", "text": "Process a single tuple of input\n\n We add the (time, tuple) pair into our current_tuples. 
And then look for expiring\n elements"} {"_id": "q_2876", "text": "Called every slide_interval"} {"_id": "q_2877", "text": "Called every window_duration"} {"_id": "q_2878", "text": "Get summary of stream managers registration summary"} {"_id": "q_2879", "text": "Set up log, process and signal handlers"} {"_id": "q_2880", "text": "Register exit handlers, initialize the executor and run it."} {"_id": "q_2881", "text": "Returns the processes to handle streams, including the stream-mgr and the user code containing\n the stream logic of the topology"} {"_id": "q_2882", "text": "For the given packing_plan, return the container plan with the given container_id. If protobufs\n supported maps, we could just get the plan by id, but it doesn't so we have a collection of\n containers to iterate over."} {"_id": "q_2883", "text": "Get a map from all daemon services' name to the command to start them"} {"_id": "q_2884", "text": "Start all commands and add them to the dict of processes to be monitored"} {"_id": "q_2885", "text": "Monitor all processes in processes_to_monitor dict,\n restarting any if they fail, up to max_runs times."} {"_id": "q_2886", "text": "Determines the commands to be run and compares them with the existing running commands.\n Then starts new ones required and kills old ones no longer required."} {"_id": "q_2887", "text": "Builds the topology and submits it"} {"_id": "q_2888", "text": "Force every module in modList to be placed into main"} {"_id": "q_2889", "text": "Loads additional properties into class `cls`."} {"_id": "q_2890", "text": "Returns last n lines from the filename. 
No exception handling"} {"_id": "q_2891", "text": "Returns a serializer for a given context"} {"_id": "q_2892", "text": "Registers a new timer task\n\n :param task: function to be run at a specified second from now\n :param second: how many seconds to wait before the timer is triggered"} {"_id": "q_2893", "text": "Get the next timeout from now\n\n This should be used from do_wait().\n :returns (float) next_timeout, or 10.0 if there are no timer events"} {"_id": "q_2894", "text": "Returns a parse tree for the query, each of the nodes is a\n subclass of Operator. This is both a lexical as well as syntax analyzer step."} {"_id": "q_2895", "text": "Indicate that processing of a Tuple has failed\n\n It is compatible with StreamParse API."} {"_id": "q_2896", "text": "Template slave config file"} {"_id": "q_2897", "text": "Template scheduler.yaml"} {"_id": "q_2898", "text": "Template uploader.yaml"} {"_id": "q_2899", "text": "Template statemgr.yaml"} {"_id": "q_2900", "text": "template heron tools"} {"_id": "q_2901", "text": "get cluster info for standalone cluster"} {"_id": "q_2902", "text": "Start a Heron standalone cluster"} {"_id": "q_2903", "text": "Start Heron tracker and UI"} {"_id": "q_2904", "text": "Wait for a nomad master to start"} {"_id": "q_2905", "text": "Tar a directory"} {"_id": "q_2906", "text": "Start master nodes"} {"_id": "q_2907", "text": "read config files to get roles"} {"_id": "q_2908", "text": "check if this host is this addr"} {"_id": "q_2909", "text": "Resolve all symbolic references that `src` points to. Note that this\n is different than `os.path.realpath` as path components leading up to\n the final location may still be symbolic links."} {"_id": "q_2910", "text": "normalize raw result to table"} {"_id": "q_2911", "text": "Monitor the rootpath and call the callback\n corresponding to the change.\n This monitoring happens periodically. 
This function\n is called in a separate thread from the main thread,\n because it sleeps for the intervals between each poll."} {"_id": "q_2912", "text": "Get physical plan of a topology"} {"_id": "q_2913", "text": "Get execution state"} {"_id": "q_2914", "text": "Get scheduler location"} {"_id": "q_2915", "text": "Creates SocketOptions object from a given sys_config dict"} {"_id": "q_2916", "text": "Retrieves heron options from the `HERON_OPTIONS` environment variable.\n\n Heron options have the following format:\n\n cmdline.topologydefn.tmpdirectory=/var/folders/tmpdir\n cmdline.topology.initial.state=PAUSED\n\n In this case, the returned map will contain:\n\n #!json\n {\n \"cmdline.topologydefn.tmpdirectory\": \"/var/folders/tmpdir\",\n \"cmdline.topology.initial.state\": \"PAUSED\"\n }\n\n Currently supports the following options natively:\n\n - `cmdline.topologydefn.tmpdirectory`: (required) the directory to which this\n topology's defn file is written\n - `cmdline.topology.initial.state`: (default: \"RUNNING\") the initial state of the topology\n - `cmdline.topology.name`: (default: class name) topology name on deployment\n\n Returns: map mapping from key to value"} {"_id": "q_2917", "text": "Add specs to the topology\n\n :type specs: HeronComponentSpec\n :param specs: specs to add to the topology"} {"_id": "q_2918", "text": "Set topology-wide configuration to the topology\n\n :type config: dict\n :param config: topology-wide config"} {"_id": "q_2919", "text": "Builds the topology and submits to the destination"} {"_id": "q_2920", "text": "map from query parameter to query name"} {"_id": "q_2921", "text": "Synced API call to get logical plans"} {"_id": "q_2922", "text": "Synced API call to get topology information"} {"_id": "q_2923", "text": "Synced API call to get component metrics"} {"_id": "q_2924", "text": "Configure logger which dumps log on terminal\n\n :param level: logging level: info, warning, verbose...\n :type level: logging level\n :param logfile: 
log file name, default to None\n :type logfile: string\n :return: None\n :rtype: None"} {"_id": "q_2925", "text": "Initializes a rotating logger\n\n It also makes sure that any StreamHandler is removed, so as to avoid stdout/stderr\n constipation issues"} {"_id": "q_2926", "text": "simply set verbose level based on command-line args\n\n :param cl_args: CLI arguments\n :type cl_args: dict\n :return: None\n :rtype: None"} {"_id": "q_2927", "text": "Returns Spout protobuf message"} {"_id": "q_2928", "text": "Returns Component protobuf message"} {"_id": "q_2929", "text": "Returns component-specific Config protobuf message\n\n It first adds ``topology.component.parallelism``, and is overridden by\n a user-defined component-specific configuration, specified by spec()."} {"_id": "q_2930", "text": "Adds outputs to a given protobuf Bolt or Spout message"} {"_id": "q_2931", "text": "Returns a set of output stream ids registered for this component"} {"_id": "q_2932", "text": "Returns a StreamId protobuf message"} {"_id": "q_2933", "text": "Returns a StreamSchema protobuf message"} {"_id": "q_2934", "text": "Returns component_id of this GlobalStreamId\n\n Note that if HeronComponentSpec is specified as componentId and its name is not yet\n available (i.e. when ``name`` argument was not given in ``spec()`` method in Bolt or Spout),\n this property returns a message with uuid. 
However, this is provided only for safety\n with __eq__(), __str__(), and __hash__() methods, and not meant to be called explicitly\n before TopologyType class finally sets the name attribute of HeronComponentSpec."} {"_id": "q_2935", "text": "Registers a new metric to this context"} {"_id": "q_2936", "text": "Returns the declared inputs to specified component\n\n :return: map <stream_id -> gtype>, or\n None if not found"} {"_id": "q_2937", "text": "invoke task hooks for every time spout acks a tuple\n\n :type message_id: str\n :param message_id: message id to which an acked tuple was anchored\n :type complete_latency_ns: float\n :param complete_latency_ns: complete latency in nano seconds"} {"_id": "q_2938", "text": "invoke task hooks for every time spout fails a tuple\n\n :type message_id: str\n :param message_id: message id to which a failed tuple was anchored\n :type fail_latency_ns: float\n :param fail_latency_ns: fail latency in nano seconds"} {"_id": "q_2939", "text": "invoke task hooks for every time bolt processes a tuple\n\n :type heron_tuple: HeronTuple\n :param heron_tuple: tuple that is executed\n :type execute_latency_ns: float\n :param execute_latency_ns: execute latency in nano seconds"} {"_id": "q_2940", "text": "invoke task hooks for every time bolt fails a tuple\n\n :type heron_tuple: HeronTuple\n :param heron_tuple: tuple that is failed\n :type fail_latency_ns: float\n :param fail_latency_ns: fail latency in nano seconds"} {"_id": "q_2941", "text": "Extract and execute the java files inside the tar and then add topology\n definition file created by running submitTopology\n\n We use the packer to make a package for the tar and dump it\n to a well-known location. We then run the main method of class\n with the specified arguments. We pass arguments as an environment variable HERON_OPTIONS.\n This will run the jar file with the topology class name.\n\n The submitter inside will write out the topology defn file to a location\n that we specify. 
Then we write the topology defn file to a well known\n packer location. We then write to appropriate places in zookeeper\n and launch the aurora jobs\n :param cl_args:\n :param unknown_args:\n :param tmp_dir:\n :return:"} {"_id": "q_2942", "text": "Makes the http endpoint for the heron shell\n if shell port is present, otherwise returns None."} {"_id": "q_2943", "text": "Make the url for log-file data in heron-shell\n from the info stored in stmgr."} {"_id": "q_2944", "text": "Sends this outgoing packet to dispatcher's socket"} {"_id": "q_2945", "text": "Creates an IncomingPacket object from header and data\n\n This method is for testing purposes"} {"_id": "q_2946", "text": "Reads incoming data from asyncore.dispatcher"} {"_id": "q_2947", "text": "Reads yaml config file and returns auto-typed config_dict"} {"_id": "q_2948", "text": "Send messages in out_stream to the Stream Manager"} {"_id": "q_2949", "text": "Called when state change is commanded by stream manager"} {"_id": "q_2950", "text": "Checks if a given stream_id and tuple matches with the output schema\n\n :type stream_id: str\n :param stream_id: stream id into which tuple is sent\n :type tup: list\n :param tup: tuple that is going to be sent"} {"_id": "q_2951", "text": "Adds the target component\n\n :type stream_id: str\n :param stream_id: stream id into which tuples are emitted\n :type task_ids: list of str\n :param task_ids: list of task ids to which tuples are emitted\n :type grouping: ICustomStreamGrouping object\n :param grouping: custom grouping to use\n :type source_comp_name: str\n :param source_comp_name: source component name"} {"_id": "q_2952", "text": "Prepares the custom grouping for this component"} {"_id": "q_2953", "text": "Format a line in the directory list based on the file's type and other attributes."} {"_id": "q_2954", "text": "Format the date associated with a file to be displayed in directory listing."} {"_id": "q_2955", "text": "Prefix to a filename in the directory listing. 
This is to make the\n listing similar to an output of \"ls -alh\"."} {"_id": "q_2956", "text": "Read a chunk of a file from an offset upto the length."} {"_id": "q_2957", "text": "Runs the command and returns its stdout and stderr."} {"_id": "q_2958", "text": "Feed output of one command to the next and return final output\n Returns string output of chained application of commands."} {"_id": "q_2959", "text": "normalize raw metrics API result to table"} {"_id": "q_2960", "text": "run metrics subcommand"} {"_id": "q_2961", "text": "run containers subcommand"} {"_id": "q_2962", "text": "Creates a HeronTuple\n\n :param stream: protobuf message ``StreamId``\n :param tuple_key: tuple id\n :param values: a list of values\n :param roots: a list of protobuf message ``RootId``"} {"_id": "q_2963", "text": "Creates a RootTupleInfo"} {"_id": "q_2964", "text": "Updates the list of global error suppressions.\n\n Parses any lint directives in the file that have global effect.\n\n Args:\n lines: An array of strings, each representing a line of the file, with the\n last element being empty if the file is terminated with a newline."} {"_id": "q_2965", "text": "Searches the string for the pattern, caching the compiled regexp."} {"_id": "q_2966", "text": "Removes C++11 raw strings from lines.\n\n Before:\n static const char kData[] = R\"(\n multi-line string\n )\";\n\n After:\n static const char kData[] = \"\"\n (replaced by blank line)\n \"\";\n\n Args:\n raw_lines: list of raw lines.\n\n Returns:\n list of lines with C++11 raw strings replaced by empty strings."} {"_id": "q_2967", "text": "We are inside a comment, find the end marker."} {"_id": "q_2968", "text": "Clears a range of lines for multi-line comments."} {"_id": "q_2969", "text": "Find the position just after the end of current parenthesized expression.\n\n Args:\n line: a CleansedLines line.\n startpos: start searching at this position.\n stack: nesting stack at startpos.\n\n Returns:\n On finding matching end: (index just 
after matching end, None)\n On finding an unclosed expression: (-1, None)\n Otherwise: (-1, new stack at end of this line)"} {"_id": "q_2970", "text": "Find position at the matching start of current expression.\n\n This is almost the reverse of FindEndOfExpressionInLine, but note\n that the input position and returned position differs by 1.\n\n Args:\n line: a CleansedLines line.\n endpos: start searching at this position.\n stack: nesting stack at endpos.\n\n Returns:\n On finding matching start: (index at matching start, None)\n On finding an unclosed expression: (-1, None)\n Otherwise: (-1, new stack at beginning of this line)"} {"_id": "q_2971", "text": "If input points to ) or } or ] or >, finds the position that opens it.\n\n If lines[linenum][pos] points to a ')' or '}' or ']' or '>', finds the\n linenum/pos that correspond to the opening of the expression.\n\n Args:\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n pos: A position on the line.\n\n Returns:\n A tuple (line, linenum, pos) pointer *at* the opening brace, or\n (line, 0, -1) if we never find the matching opening brace. Note\n we ignore strings and comments when matching; and the line we\n return is the 'cleansed' line at linenum."} {"_id": "q_2972", "text": "Logs an error if no Copyright message appears at the top of the file."} {"_id": "q_2973", "text": "Returns the CPP variable that should be used as a header guard.\n\n Args:\n filename: The name of a C++ header file.\n\n Returns:\n The CPP variable that should be used as a header guard in the\n named file."} {"_id": "q_2974", "text": "Checks that the file contains a header guard.\n\n Logs an error if no #ifndef header guard is present. 
For other\n headers, checks that the full pathname is used.\n\n Args:\n filename: The name of the C++ header file.\n clean_lines: A CleansedLines instance containing the file.\n error: The function to call with any errors found."} {"_id": "q_2975", "text": "Logs an error if a source file does not include its header."} {"_id": "q_2976", "text": "Logs an error for each line containing bad characters.\n\n Two kinds of bad characters:\n\n 1. Unicode replacement characters: These indicate that either the file\n contained invalid UTF-8 (likely) or Unicode replacement characters (which\n it shouldn't). Note that it's possible for this to throw off line\n numbering if the invalid UTF-8 occurred adjacent to a newline.\n\n 2. NUL bytes. These are problematic for some tools.\n\n Args:\n filename: The name of the current file.\n lines: An array of strings, each representing a line of the file.\n error: The function to call with any errors found."} {"_id": "q_2977", "text": "Checks for calls to thread-unsafe functions.\n\n Much code has been originally written without consideration of\n multi-threading. Also, engineers are relying on their old experience;\n they have learned posix before threading extensions were added. 
These\n tests guide the engineers to use thread-safe functions (when using\n posix directly).\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2978", "text": "Checks for the correctness of various spacing issues in the code.\n\n Things we check for: spaces around operators, spaces after\n if/for/while/switch, no spaces around parens in function calls, two\n spaces between code and comment, don't start a block with a blank\n line, don't end a function with a blank line, don't add a blank line\n after public/protected/private, don't have too many blank lines in a row.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n nesting_state: A NestingState instance which maintains information about\n the current stack of nested blocks being parsed.\n error: The function to call with any errors found."} {"_id": "q_2979", "text": "Checks for horizontal spacing around parentheses.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2980", "text": "Check if expression looks like a type name, returns true if so.\n\n Args:\n clean_lines: A CleansedLines instance containing the file.\n nesting_state: A NestingState instance which maintains information about\n the current stack of nested blocks being parsed.\n expr: The expression to check.\n Returns:\n True, if token looks like a type."} {"_id": "q_2981", "text": "Checks for additional blank line issues related to sections.\n\n Currently the only thing checked here is blank line before protected/private.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines 
instance containing the file.\n class_info: A _ClassInfo object.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2982", "text": "Return the most recent non-blank line and its line number.\n\n Args:\n clean_lines: A CleansedLines instance containing the file contents.\n linenum: The number of the line to check.\n\n Returns:\n A tuple with two elements. The first element is the contents of the last\n non-blank line before the current line, or the empty string if this is the\n first non-blank line. The second is the line number of that line, or -1\n if this is the first non-blank line."} {"_id": "q_2983", "text": "Looks for redundant trailing semicolon.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2984", "text": "Find a replaceable CHECK-like macro.\n\n Args:\n line: line to search on.\n Returns:\n (macro name, start position), or (None, -1) if no replaceable\n macro is found."} {"_id": "q_2985", "text": "Checks the use of CHECK and EXPECT macros.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2986", "text": "Check alternative keywords being used in boolean expressions.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2987", "text": "Determines the width of the line in column positions.\n\n Args:\n line: A string, which may be a Unicode string.\n\n Returns:\n The width of the line in column positions, accounting for Unicode\n combining characters and wide characters."} {"_id": "q_2988",
"text": "Drops common suffixes like _test.cc or -inl.h from filename.\n\n For example:\n >>> _DropCommonSuffixes('foo/foo-inl.h')\n 'foo/foo'\n >>> _DropCommonSuffixes('foo/bar/foo.cc')\n 'foo/bar/foo'\n >>> _DropCommonSuffixes('foo/foo_internal.h')\n 'foo/foo'\n >>> _DropCommonSuffixes('foo/foo_unusualinternal.h')\n 'foo/foo_unusualinternal'\n\n Args:\n filename: The input filename.\n\n Returns:\n The filename with the common suffix removed."} {"_id": "q_2989", "text": "Figures out what kind of header 'include' is.\n\n Args:\n fileinfo: The current file cpplint is running over. A FileInfo instance.\n include: The path to a #included file.\n is_system: True if the #include used <> rather than \"\".\n\n Returns:\n One of the _XXX_HEADER constants.\n\n For example:\n >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'stdio.h', True)\n _C_SYS_HEADER\n >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'string', True)\n _CPP_SYS_HEADER\n >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/foo.h', False)\n _LIKELY_MY_HEADER\n >>> _ClassifyInclude(FileInfo('foo/foo_unknown_extension.cc'),\n ... 
'bar/foo_other_ext.h', False)\n _POSSIBLE_MY_HEADER\n >>> _ClassifyInclude(FileInfo('foo/foo.cc'), 'foo/bar.h', False)\n _OTHER_HEADER"} {"_id": "q_2990", "text": "Check for unsafe global or static objects.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2991", "text": "Check for printf related issues.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2992", "text": "Check if current line is inside constructor initializer list.\n\n Args:\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n Returns:\n True if current line appears to be inside constructor initializer\n list, False otherwise."} {"_id": "q_2993", "text": "Check for non-const references.\n\n Separate from CheckLanguage since it scans backwards from current\n line, instead of scanning forward.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n nesting_state: A NestingState instance which maintains information about\n the current stack of nested blocks being parsed.\n error: The function to call with any errors found."} {"_id": "q_2994", "text": "Check if these two filenames belong to the same module.\n\n The concept of a 'module' here is as follows:\n foo.h, foo-inl.h, foo.cc, foo_test.cc and foo_unittest.cc belong to the\n same 'module' if they are in the same directory.\n some/path/public/xyzzy and some/path/internal/xyzzy are also considered\n to belong to the same module here.\n\n If the filename_cc contains a longer path than the filename_h, for example,\n '/absolute/path/to/base/sysinfo.cc', and
this file would include\n 'base/sysinfo.h', this function also produces the prefix needed to open the\n header. This is used by the caller of this function to more robustly open the\n header file. We don't have access to the real include paths in this context,\n so we need this guesswork here.\n\n Known bugs: tools/base/bar.cc and base/bar.h belong to the same module\n according to this implementation. Because of this, this function gives\n some false positives. This should be sufficiently rare in practice.\n\n Args:\n filename_cc: is the path for the source (e.g. .cc) file\n filename_h: is the path for the header path\n\n Returns:\n Tuple with a bool and a string:\n bool: True if filename_cc and filename_h belong to the same module.\n string: the additional prefix needed to open the header file."} {"_id": "q_2995", "text": "Check that make_pair's template arguments are deduced.\n\n G++ 4.6 in C++11 mode fails badly if make_pair's template arguments are\n specified explicitly, and such use isn't intended in any case.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2996", "text": "Check if line contains a redundant \"virtual\" function-specifier.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2997", "text": "Flag those C++14 features that we restrict.\n\n Args:\n filename: The name of the current file.\n clean_lines: A CleansedLines instance containing the file.\n linenum: The number of the line to check.\n error: The function to call with any errors found."} {"_id": "q_2998", "text": "Performs lint checks and reports any errors to the given error function.\n\n Args:\n filename: Filename of the file that is being 
processed.\n file_extension: The extension (dot not included) of the file.\n lines: An array of strings, each representing a line of the file, with the\n last element being empty if the file is terminated with a newline.\n error: A callable to which errors are reported, which takes 4 arguments:\n filename, line number, error level, and message\n extra_check_functions: An array of additional check functions that will be\n run on each source line. Each function takes 4\n arguments: filename, clean_lines, line, error"} {"_id": "q_2999", "text": "Loads the configuration files and processes the config overrides.\n\n Args:\n filename: The name of the file being processed by the linter.\n\n Returns:\n False if the current |filename| should not be processed further."} {"_id": "q_3000", "text": "Parses the command line arguments.\n\n This may set the output format and verbosity level as side-effects.\n\n Args:\n args: The command line arguments:\n\n Returns:\n The list of filenames to lint."} {"_id": "q_3001", "text": "Searches a list of filenames and replaces directories in the list with\n all files descending from those directories. 
Files with extensions not in\n the valid extensions list are excluded.\n\n Args:\n filenames: A list of files or directories\n\n Returns:\n A list of all files that are members of filenames or descended from a\n directory in filenames"} {"_id": "q_3002", "text": "Check if a header has already been included.\n\n Args:\n header: header to check.\n Returns:\n Line number of previous occurrence, or -1 if the header has not\n been seen before."} {"_id": "q_3003", "text": "Returns a non-empty error message if the next header is out of order.\n\n This function also updates the internal state to be ready to check\n the next include.\n\n Args:\n header_type: One of the _XXX_HEADER constants defined above.\n\n Returns:\n The empty string if the header is in the right order, or an\n error message describing what's wrong."} {"_id": "q_3004", "text": "Bumps the module's error statistic."} {"_id": "q_3005", "text": "Print a summary of errors by category, and the total."} {"_id": "q_3006", "text": "Collapses strings and chars on a line to simple \"\" or '' blocks.\n\n We nix strings first so we're not fooled by text like '\"http://\"'\n\n Args:\n elided: The line being processed.\n\n Returns:\n The line with collapsed strings."} {"_id": "q_3007", "text": "Check end of namespace comments."} {"_id": "q_3008", "text": "Update preprocessor stack.\n\n We need to handle preprocessors due to classes like this:\n #ifdef SWIG\n struct ResultDetailsPageElementExtensionPoint {\n #else\n struct ResultDetailsPageElementExtensionPoint : public Extension {\n #endif\n\n We make the following assumptions (good enough for most files):\n - Preprocessor condition evaluates to true from #if up to first\n #else/#elif/#endif.\n\n - Preprocessor condition evaluates to false from #else/#elif up\n to #endif. 
We still perform lint checks on these lines, but\n these do not affect nesting stack.\n\n Args:\n line: current line to check."} {"_id": "q_3009", "text": "Get class info on the top of the stack.\n\n Returns:\n A _ClassInfo object if we are inside a class, or None otherwise."} {"_id": "q_3010", "text": "Checks that all classes and namespaces have been completely parsed.\n\n Call this when all lines in a file have been processed.\n Args:\n filename: The name of the current file.\n error: The function to call with any errors found."} {"_id": "q_3011", "text": "Return a new Streamlet by applying map_function to each element of this Streamlet\n and flattening the result"} {"_id": "q_3012", "text": "Return a new Streamlet containing only the elements that satisfy filter_function"} {"_id": "q_3013", "text": "Return num_clones number of streamlets each containing all elements\n of the current streamlet"} {"_id": "q_3014", "text": "Return a new Streamlet in which each element of this Streamlet is collected\n over a window defined by window_config and then reduced using the reduce_function.\n reduce_function takes two elements at a time and reduces them to one element that\n is used in the subsequent operations."} {"_id": "q_3015", "text": "Returns a new Streamlet that consists of elements of both this and other_streamlet"} {"_id": "q_3016", "text": "Return a new Streamlet by joining join_streamlet with this streamlet"} {"_id": "q_3017", "text": "Return a new Streamlet by left-joining join_streamlet with this streamlet"} {"_id": "q_3018", "text": "extract common args"} {"_id": "q_3019", "text": "Ensures argument obj is a native Python dictionary, raises an exception if not, and otherwise\n returns obj."} {"_id": "q_3020", "text": "Ensures argument obj is either a dictionary or None; if the latter, instantiates an empty\n dictionary."} {"_id": "q_3021", "text": "Record a stream of event records to json"} {"_id": "q_3022", "text": "Read a config file and instantiate the
RCParser.\n\n Create new :class:`configparser.ConfigParser` for the given **path**\n and instantiate the :class:`RCParser` with the ConfigParser as\n :attr:`config` attribute.\n\n If the **path** doesn't exist, raise :exc:`ConfigFileError`.\n Otherwise return a new :class:`RCParser` instance.\n\n :param path:\n Optional path to the config file to parse.\n If not given, use ``'~/.pypirc'``."} {"_id": "q_3023", "text": "This recursive descent thing formats a config dict for GraphQL."} {"_id": "q_3024", "text": "Get a pipeline by name. Only constructs that pipeline and caches it.\n\n Args:\n name (str): Name of the pipeline to retrieve\n\n Returns:\n PipelineDefinition: Instance of PipelineDefinition with that name."} {"_id": "q_3025", "text": "Return all pipelines as a list\n\n Returns:\n List[PipelineDefinition]:"} {"_id": "q_3026", "text": "This function polls the process until it returns a valid\n item or returns PROCESS_DEAD_AND_QUEUE_EMPTY if it is in\n a state where the process has terminated and the queue is empty\n\n Warning: if the child process is in an infinite loop.
This will\n also infinitely loop."} {"_id": "q_3027", "text": "Waits until all there are no processes enqueued."} {"_id": "q_3028", "text": "The schema for configuration data that describes the type, optionality, defaults, and description.\n\n Args:\n dagster_type (DagsterType):\n A ``DagsterType`` describing the schema of this field, ie `Dict({'example': Field(String)})`\n default_value (Any):\n A default value to use that respects the schema provided via dagster_type\n is_optional (bool): Whether the presence of this field is optional\n despcription (str):"} {"_id": "q_3029", "text": "Builds the execution plan."} {"_id": "q_3030", "text": "Here we build a new ExecutionPlan from a pipeline definition and the environment config.\n\n To do this, we iterate through the pipeline's solids in topological order, and hand off the\n execution steps for each solid to a companion _PlanBuilder object.\n\n Once we've processed the entire pipeline, we invoke _PlanBuilder.build() to construct the\n ExecutionPlan object."} {"_id": "q_3031", "text": "Get the shell commands we'll use to actually build and publish a package to PyPI."} {"_id": "q_3032", "text": "Tags all submodules for a new release.\n\n Ensures that git tags, as well as the version.py files in each submodule, agree and that the\n new version is strictly greater than the current version. Will fail if the new version\n is not an increment (following PEP 440). Creates a new git tag and commit."} {"_id": "q_3033", "text": "Create a context definition from a pre-existing context. 
This can be useful\n in testing contexts where you may want to create a context manually and then\n pass it into a one-off PipelineDefinition\n\n Args:\n context (ExecutionContext): The context that will be provided to the pipeline.\n Returns:\n PipelineContextDefinition: The passthrough context definition."} {"_id": "q_3034", "text": "A decorator for annotating a function that can take the selected properties\n of a ``config_value`` and an instance of a custom type and materialize it.\n\n Args:\n config_cls (Selector):"} {"_id": "q_3035", "text": "Automagically wrap a block of text."} {"_id": "q_3036", "text": "Download an object from s3.\n\n Args:\n info (ExpectationExecutionInfo): Must expose a boto3 S3 client as its `s3` resource.\n\n Returns:\n str:\n The path to the downloaded object."} {"_id": "q_3037", "text": "Wraps the execution of user-space code in an error boundary. This places a uniform\n policy around user code invoked by the framework. This ensures that all user\n errors are wrapped in the DagsterUserCodeExecutionError, and that the original stack\n trace of the user error is preserved, so that it can be reported without confusing\n framework code in the stack trace, if a tool author wishes to do so. This has\n been especially helpful in a notebooking context."} {"_id": "q_3038", "text": "The missing mkdir -p functionality in os."} {"_id": "q_3039", "text": "In the event of pipeline initialization failure, we want to be able to log the failure\n without a dependency on the ExecutionContext to initialize DagsterLog"} {"_id": "q_3040", "text": "Whether the solid execution was successful"} {"_id": "q_3041", "text": "Whether the solid execution was skipped"} {"_id": "q_3042", "text": "Returns transformed value either for DEFAULT_OUTPUT or for the output\n given as output_name.
Returns None if execution result isn't a success.\n\n Reconstructs the pipeline context to materialize value."} {"_id": "q_3043", "text": "Returns the failing step's data that happened during this solid's execution, if any"} {"_id": "q_3044", "text": "A permissive dict will permit the user to partially specify the permitted fields. Any fields\n that are specified and passed in will be type checked. Other fields will be allowed, but\n will be ignored by the type checker."} {"_id": "q_3045", "text": "Execute the user-specified transform for the solid. Wrap in an error boundary and do\n all relevant logging and metrics tracking"} {"_id": "q_3046", "text": "Takes a python cls and creates a type for it in the Dagster domain.\n\n Args:\n existing_type (cls)\n The python type you want to project into the Dagster type system.\n name (Optional[str]):\n description (Optional[str]):\n input_schema (Optional[InputSchema]):\n An instance of a class that inherits from :py:class:`InputSchema` that\n can map config data to a value of this type.\n\n output_schema (Optional[OutputSchema]):\n An instance of a class that inherits from :py:class:`OutputSchema` that\n can map config data to persisting values of this type.\n\n serialization_strategy (Optional[SerializationStrategy]):\n The default behavior for how to serialize this value for\n persisting between execution steps.\n\n storage_plugins (Optional[Dict[RunStorageMode, TypeStoragePlugin]]):\n Storage type specific overrides for the serialization strategy.\n This allows for storage specific optimizations such as efficient\n distributed storage on S3."} {"_id": "q_3047", "text": "A decorator for creating a resource.
The decorated function will be used as the \n resource_fn in a ResourceDefinition."} {"_id": "q_3048", "text": "Events API v2 enables you to add PagerDuty's advanced event and incident management\n functionality to any system that can make an outbound HTTP connection.\n\n Arguments:\n summary {string} -- A high-level, text summary message of the event. Will be used to\n construct an alert's description.\n\n Example: \"PING OK - Packet loss = 0%, RTA = 1.41 ms\" \"Host\n 'acme-andromeda-sv1-c40 :: 179.21.24.50' is DOWN\"\n\n source {string} -- Specific human-readable unique identifier, such as a hostname, for\n the system having the problem.\n\n Examples:\n \"prod05.theseus.acme-widgets.com\"\n \"171.26.23.22\"\n \"aws:elasticache:us-east-1:852511987:cluster/api-stats-prod-003\"\n \"9c09acd49a25\"\n\n severity {string} -- How impacted the affected system is. Displayed to users in lists\n and influences the priority of any created incidents. Must be one\n of {info, warning, error, critical}\n\n Keyword Arguments:\n event_action {str} -- There are three types of events that PagerDuty recognizes, and\n are used to represent different types of activity in your\n monitored systems. (default: 'trigger')\n * trigger: When PagerDuty receives a trigger event, it will either open a new alert,\n or add a new trigger log entry to an existing alert, depending on the\n provided dedup_key. Your monitoring tools should send PagerDuty a trigger\n when a new problem has been detected. You may send additional triggers\n when a previously detected problem has occurred again.\n\n * acknowledge: acknowledge events cause the referenced incident to enter the\n acknowledged state. While an incident is acknowledged, it won't\n generate any additional notifications, even if it receives new\n trigger events. 
Your monitoring tools should send PagerDuty an\n acknowledge event when they know someone is presently working on the\n problem.\n\n * resolve: resolve events cause the referenced incident to enter the resolved state.\n Once an incident is resolved, it won't generate any additional\n notifications. New trigger events with the same dedup_key as a resolved\n incident won't re-open the incident. Instead, a new incident will be\n created. Your monitoring tools should send PagerDuty a resolve event when\n the problem that caused the initial trigger event has been fixed.\n\n dedup_key {string} -- Deduplication key for correlating triggers and resolves. The\n maximum permitted length of this property is 255 characters.\n\n timestamp {string} -- Timestamp (ISO 8601). When the upstream system detected / created\n the event. This is useful if a system batches or holds events\n before sending them to PagerDuty.\n\n Optional - Will be auto-generated by PagerDuty if not provided.\n\n Example:\n 2015-07-17T08:42:58.315+0000\n\n component {string} -- The part or component of the affected system that is broken.\n\n Examples:\n \"keepalive\"\n \"webping\"\n \"mysql\"\n \"wqueue\"\n\n group {string} -- A cluster or grouping of sources. 
For example, sources\n \u201cprod-datapipe-02\u201d and \u201cprod-datapipe-03\u201d might both be part of\n \u201cprod-datapipe\u201d\n\n Examples:\n \"prod-datapipe\"\n \"www\"\n \"web_stack\"\n\n event_class {string} -- The class/type of the event.\n\n Examples:\n \"High CPU\"\n \"Latency\"\n \"500 Error\"\n\n custom_details {Dict[str, str]} -- Additional details about the event and affected\n system.\n\n Example:\n {\"ping time\": \"1500ms\", \"load avg\": 0.75 }"} {"_id": "q_3049", "text": "Default method to acquire database connection parameters.\n\n Sets connection parameters to match settings.py, and sets\n default values to blank fields."} {"_id": "q_3050", "text": "Closes the client connection to the database."} {"_id": "q_3051", "text": "Overrides standard to_python method from django models to allow\n correct translation of Mongo array to a python list."} {"_id": "q_3052", "text": "Filter the queryset for the instance this manager is bound to."} {"_id": "q_3053", "text": "Computes the matrix of expected false positives for all possible\n sub-intervals of the complete domain of set sizes, assuming uniform\n distribution of set_sizes within each sub-interval.\n\n Args:\n cum_counts: the complete cumulative distribution of set sizes.\n sizes: the complete domain of set sizes.\n\n Return (np.array): the 2-D array of expected number of false positives\n for every pair of [l, u] interval, where l is axis-0 and u is\n axis-1."} {"_id": "q_3054", "text": "Computes the matrix of expected false positives for all possible\n sub-intervals of the complete domain of set sizes.\n\n Args:\n counts: the complete distribution of set sizes.\n sizes: the complete domain of set sizes.\n\n Return (np.array): the 2-D array of expected number of false positives\n for every pair of [l, u] interval, where l is axis-0 and u is\n axis-1."} {"_id": "q_3055", "text": "Compute the optimal partitions given a distribution of set sizes.\n\n Args:\n sizes (numpy.array): The complete
domain of set sizes in ascending\n order.\n counts (numpy.array): The frequencies of all set sizes in the same\n order as `sizes`.\n num_part (int): The number of partitions to create.\n\n Returns:\n list: A list of partitions in the form of `(lower, upper)` tuples,\n where `lower` and `upper` are lower and upper bound (inclusive)\n set sizes of each partition."} {"_id": "q_3056", "text": "Compute the byte size after serialization.\n\n Args:\n byteorder (str, optional): This is byte order of the serialized data. Use one\n of the `byte order characters\n `_:\n ``@``, ``=``, ``<``, ``>``, and ``!``.\n Default is ``@`` -- the native order.\n\n Returns:\n int: Size in number of bytes after serialization."} {"_id": "q_3057", "text": "Serialize this lean MinHash and store the result in an allocated buffer.\n\n Args:\n buf (buffer): `buf` must implement the `buffer`_ interface.\n One such example is the built-in `bytearray`_ class.\n byteorder (str, optional): This is byte order of the serialized data. Use one\n of the `byte order characters\n `_:\n ``@``, ``=``, ``<``, ``>``, and ``!``.\n Default is ``@`` -- the native order.\n\n This is preferred over using `pickle`_ if the serialized lean MinHash needs\n to be used by another program in a different programming language.\n\n The serialization schema:\n 1. The first 8 bytes is the seed integer\n 2. The next 4 bytes is the number of hash values\n 3. The rest is the serialized hash values, each uses 4 bytes\n\n Example:\n To serialize a single lean MinHash into a `bytearray`_ buffer.\n\n .. code-block:: python\n\n buf = bytearray(lean_minhash.bytesize())\n lean_minhash.serialize(buf)\n\n To serialize multiple lean MinHash into a `bytearray`_ buffer.\n\n .. code-block:: python\n\n # assuming lean_minhashs is a list of LeanMinHash with the same size\n size = lean_minhashs[0].bytesize()\n buf = bytearray(size*len(lean_minhashs))\n for i, lean_minhash in enumerate(lean_minhashs):\n lean_minhash.serialize(buf[i*size:])\n\n .. 
_`buffer`: https://docs.python.org/3/c-api/buffer.html\n .. _`bytearray`: https://docs.python.org/3.6/library/functions.html#bytearray\n .. _`byteorder`: https://docs.python.org/3/library/struct.html"} {"_id": "q_3058", "text": "Deserialize a lean MinHash from a buffer.\n\n Args:\n buf (buffer): `buf` must implement the `buffer`_ interface.\n One such example is the built-in `bytearray`_ class.\n byteorder (str, optional): This is byte order of the serialized data. Use one\n of the `byte order characters\n `_:\n ``@``, ``=``, ``<``, ``>``, and ``!``.\n Default is ``@`` -- the native order.\n\n Return:\n datasketch.LeanMinHash: The deserialized lean MinHash\n\n Example:\n To deserialize a lean MinHash from a buffer.\n\n .. code-block:: python\n\n lean_minhash = LeanMinHash.deserialize(buf)"} {"_id": "q_3059", "text": "Update this MinHash with a new value.\n The value will be hashed using the hash function specified by\n the `hashfunc` argument in the constructor.\n\n Args:\n b: The value to be hashed using the hash function specified.\n\n Example:\n To update with a new string value (using the default SHA1 hash\n function, which requires bytes as input):\n\n .. code-block:: python\n\n minhash = MinHash()\n minhash.update(\"new value\".encode('utf-8'))\n\n We can also use a different hash function, for example, `pyfarmhash`:\n\n .. code-block:: python\n\n import farmhash\n def _hash_32(b):\n return farmhash.hash32(b)\n minhash = MinHash(hashfunc=_hash_32)\n minhash.update(\"new value\")"} {"_id": "q_3060", "text": "Merge the other MinHash with this one, making this one the union\n of both.\n\n Args:\n other (datasketch.MinHash): The other MinHash."} {"_id": "q_3061", "text": "Create a MinHash which is the union of the MinHash objects passed as arguments.\n\n Args:\n *mhs: The MinHash objects to be united. 
The argument list length is variable,\n but must be at least 2.\n\n Returns:\n datasketch.MinHash: A new union MinHash."} {"_id": "q_3062", "text": "Index all sets given their keys, MinHashes, and sizes.\n It can be called only once after the index is created.\n\n Args:\n entries (`iterable` of `tuple`): An iterable of tuples, each must be\n in the form of `(key, minhash, size)`, where `key` is the unique\n identifier of a set, `minhash` is the MinHash of the set,\n and `size` is the size or number of unique items in the set.\n\n Note:\n `size` must be positive."} {"_id": "q_3063", "text": "Create a new weighted MinHash given a weighted Jaccard vector.\n Each dimension is an integer \n frequency of the corresponding element in the multi-set represented\n by the vector.\n\n Args:\n v (numpy.array): The Jaccard vector."} {"_id": "q_3064", "text": "Estimate the cardinality of the data values seen so far.\n\n Returns:\n int: The estimated cardinality."} {"_id": "q_3065", "text": "Merge the other HyperLogLog with this one, making this the union of the\n two.\n\n Args:\n other (datasketch.HyperLogLog):"} {"_id": "q_3066", "text": "Computes the average precision at k.\n\n This function computes the average precision at k between two lists of\n items.\n\n Parameters\n ----------\n actual : list\n A list of elements that are to be predicted (order doesn't matter)\n predicted : list\n A list of predicted elements (order does matter)\n k : int, optional\n The maximum number of predicted elements\n\n Returns\n -------\n score : double\n The average precision at k over the input lists"} {"_id": "q_3067", "text": "Computes the mean average precision at k.\n\n This function computes the mean average precision at k between two lists\n of lists of items.\n\n Parameters\n ----------\n actual : list\n A list of lists of elements that are to be predicted \n (order doesn't matter in the lists)\n predicted : list\n A list of lists of predicted elements\n (order matters in the lists)\n 
k : int, optional\n The maximum number of predicted elements\n\n Returns\n -------\n score : double\n The mean average precision at k over the input lists"} {"_id": "q_3068", "text": "Return the approximate top-k keys that have the highest\n Jaccard similarities to the query set.\n\n Args:\n minhash (datasketch.MinHash): The MinHash of the query set.\n k (int): The maximum number of keys to return.\n\n Returns:\n `list` of at most k keys."} {"_id": "q_3069", "text": "Cleanup client resources and disconnect from AsyncMinHashLSH storage."} {"_id": "q_3070", "text": "Return ordered storage system based on the specified config.\n\n The canonical example of such a storage container is\n ``defaultdict(list)``. Thus, the return value of this method contains\n keys and values. The values are ordered lists with the last added\n item at the end.\n\n Args:\n config (dict): Defines the configurations for the storage.\n For in-memory storage, the config ``{'type': 'dict'}`` will\n suffice. For Redis storage, the type should be ``'redis'`` and\n the configurations for the Redis database should be supplied\n under the key ``'redis'``. These parameters should be in a form\n suitable for `redis.Redis`. The parameters may alternatively\n contain references to environment variables, in which case\n literal configuration values should be replaced by dicts of\n the form::\n\n {'env': 'REDIS_HOSTNAME',\n 'default': 'localhost'}\n\n For a full example, see :ref:`minhash_lsh_at_scale`\n\n name (bytes, optional): A reference name for this storage container.\n For dict-type containers, this is ignored. For Redis containers,\n this name is used to prefix keys pertaining to this storage\n container within the database."} {"_id": "q_3071", "text": "Return an unordered storage system based on the specified config.\n\n The canonical example of such a storage container is\n ``defaultdict(set)``. Thus, the return value of this method contains\n keys and values. 
The values are unordered sets.\n\n Args:\n config (dict): Defines the configurations for the storage.\n For in-memory storage, the config ``{'type': 'dict'}`` will\n suffice. For Redis storage, the type should be ``'redis'`` and\n the configurations for the Redis database should be supplied\n under the key ``'redis'``. These parameters should be in a form\n suitable for `redis.Redis`. The parameters may alternatively\n contain references to environment variables, in which case\n literal configuration values should be replaced by dicts of\n the form::\n\n {'env': 'REDIS_HOSTNAME',\n 'default': 'localhost'}\n\n For a full example, see :ref:`minhash_lsh_at_scale`\n\n name (bytes, optional): A reference name for this storage container.\n For dict-type containers, this is ignored. For Redis containers,\n this name is used to prefix keys pertaining to this storage\n container within the database."} {"_id": "q_3072", "text": "Parses command strings and returns a Popen-ready list."} {"_id": "q_3073", "text": "Executes a given command and returns Response.\n\n Blocks until process is complete, or timeout is reached."} {"_id": "q_3074", "text": "Spawns a new process from the given command."} {"_id": "q_3075", "text": "Sends a line to std_in."} {"_id": "q_3076", "text": "Converts Py type to PyJs type"} {"_id": "q_3077", "text": "note py_arr elems are NOT converted to PyJs types!"} {"_id": "q_3078", "text": "note py_obj items are NOT converted to PyJs types!"} {"_id": "q_3079", "text": "Adds op_code with specified args to tape"} {"_id": "q_3080", "text": "Records locations of labels and compiles the code"} {"_id": "q_3081", "text": "returns n digit string representation of the num"} {"_id": "q_3082", "text": "Takes the replacement template and some info about the match and returns filled template"} {"_id": "q_3083", "text": "what can be either name of the op, or node, or a list of statements."} {"_id": "q_3084", "text": "Translates esprima syntax tree to python by delegating 
to appropriate translating node"} {"_id": "q_3085", "text": "Decorator limiting resulting line length in order to avoid python parser stack overflow -\n If expression longer than LINE_LEN_LIMIT characters then it will be moved to upper line\n USE ONLY ON EXPRESSIONS!!!"} {"_id": "q_3086", "text": "Does not check whether t is not restricted or internal"} {"_id": "q_3087", "text": "Translates input JS file to python and saves it to the output path.\n It appends some convenience code at the end so that it is easy to import JS objects.\n\n For example we have a file 'example.js' with: var a = function(x) {return x}\n translate_file('example.js', 'example.py')\n\n Now example.py can be easily imported and used:\n >>> from example import example\n >>> example.a(30)\n 30"} {"_id": "q_3088", "text": "executes javascript js in current context\n\n During initial execute() the converted js is cached for re-use. That means next time you\n run the same javascript snippet you save many instructions needed to parse and convert the\n js code to python code.\n\n This cache causes minor overhead (a cache dict is updated) but the Js=>Py conversion process\n is typically expensive compared to actually running the generated python code.\n\n Note that the cache is just a dict, it has no expiration or cleanup so when running this\n in automated situations with vast amounts of snippets it might increase memory usage."} {"_id": "q_3089", "text": "evaluates expression in current context and returns its value"} {"_id": "q_3090", "text": "Don't use this method from inside bytecode to call other bytecode."} {"_id": "q_3091", "text": "n may be the inside of block or object"} {"_id": "q_3092", "text": "n may be the inside of block or object.\n last is the code before object"} {"_id": "q_3093", "text": "returns True if regexp starts at n else returns False\n checks whether it is not a division"} {"_id": "q_3094", "text": "Returns a first index>=start of char not in charset"} {"_id": 
"q_3095", "text": "checks if self is in other"} {"_id": "q_3096", "text": "Set the social login process state to connect rather than login\n Refer to the implementation of get_social_login in base class and to the\n allauth.socialaccount.helpers module complete_social_login function."} {"_id": "q_3097", "text": "Select the correct text from the Japanese number, reading and\n alternatives"} {"_id": "q_3098", "text": "Download and extract processed data and embeddings."} {"_id": "q_3099", "text": "Make a grid of images, via numpy.\n\n Args:\n tensor (Tensor or list): 4D mini-batch Tensor of shape (B x C x H x W)\n or a list of images all of the same size.\n nrow (int, optional): Number of images displayed in each row of the grid.\n The Final grid size is (B / nrow, nrow). Default is 8.\n padding (int, optional): amount of padding. Default is 2.\n pad_value (float, optional): Value for the padded pixels."} {"_id": "q_3100", "text": "Save a given Tensor into an image file.\n\n Args:\n tensor (Tensor or list): Image to be saved. 
If given a mini-batch tensor,\n saves the tensor as a grid of images by calling ``make_grid``.\n **kwargs: Other arguments are documented in ``make_grid``."} {"_id": "q_3101", "text": "Remove types from function arguments in cython"} {"_id": "q_3102", "text": "Parse scoped selector."} {"_id": "q_3103", "text": "Parse a single literal value.\n\n Returns:\n The parsed value."} {"_id": "q_3104", "text": "Advances to next line."} {"_id": "q_3105", "text": "Try to parse a configurable reference (@[scope/name/]fn_name[()])."} {"_id": "q_3106", "text": "Convert an operative config string to markdown format."} {"_id": "q_3107", "text": "Make sure `fn` can be wrapped cleanly by functools.wraps."} {"_id": "q_3108", "text": "Decorate a function or class with the given decorator.\n\n When `fn_or_cls` is a function, applies `decorator` to the function and\n returns the (decorated) result.\n\n When `fn_or_cls` is a class and the `subclass` parameter is `False`, this will\n replace `fn_or_cls.__init__` with the result of applying `decorator` to it.\n\n When `fn_or_cls` is a class and `subclass` is `True`, this will subclass the\n class, but with `__init__` defined to be the result of applying `decorator` to\n `fn_or_cls.__init__`. The decorated class has metadata (docstring, name, and\n module information) copied over from `fn_or_cls`. The goal is to provide a\n decorated class that behaves as much like the original as possible, without\n modifying it (for example, inspection operations using `isinstance` or\n `issubclass` should behave the same way as on the original class).\n\n Args:\n decorator: The decorator to use.\n fn_or_cls: The function or class to decorate.\n subclass: Whether to decorate classes by subclassing. 
This argument is\n ignored if `fn_or_cls` is not a class.\n\n Returns:\n The decorated function or class."} {"_id": "q_3109", "text": "Binds the parameter value specified by `binding_key` to `value`.\n\n The `binding_key` argument should either be a string of the form\n `maybe/scope/optional.module.names.configurable_name.parameter_name`, or a\n list or tuple of `(scope, selector, parameter_name)`, where `selector`\n corresponds to `optional.module.names.configurable_name`. Once this function\n has been called, subsequent calls (in the specified scope) to the specified\n configurable function will have `value` supplied to their `parameter_name`\n parameter.\n\n Example:\n\n @configurable('fully_connected_network')\n def network_fn(num_layers=5, units_per_layer=1024):\n ...\n\n def main(_):\n config.bind_parameter('fully_connected_network.num_layers', 3)\n network_fn() # Called with num_layers == 3, not the default of 5.\n\n Args:\n binding_key: The parameter whose value should be set. This can either be a\n string, or a tuple of the form `(scope, selector, parameter)`.\n value: The desired value.\n\n Raises:\n RuntimeError: If the config is locked.\n ValueError: If no function can be found matching the configurable name\n specified by `binding_key`, or if the specified parameter name is\n blacklisted or not in the function's whitelist (if present)."} {"_id": "q_3110", "text": "Gets cached argspec for `fn`."} {"_id": "q_3111", "text": "Returns the names of the supplied arguments to the given function."} {"_id": "q_3112", "text": "Retrieve all default values for configurable parameters of a function.\n\n Any parameters included in the supplied blacklist, or not included in the\n supplied whitelist, are excluded.\n\n Args:\n fn: The function whose parameter values should be retrieved.\n whitelist: The whitelist (or `None`) associated with the function.\n blacklist: The blacklist (or `None`) associated with the function.\n\n Returns:\n A dictionary mapping configurable 
parameter names to their default values."} {"_id": "q_3113", "text": "Decorator to make a function or class configurable.\n\n This decorator registers the decorated function/class as configurable, which\n allows its parameters to be supplied from the global configuration (i.e., set\n through `bind_parameter` or `parse_config`). The decorated function is\n associated with a name in the global configuration, which by default is simply\n the name of the function or class, but can be specified explicitly to avoid\n naming collisions or improve clarity.\n\n If some parameters should not be configurable, they can be specified in\n `blacklist`. If only a restricted set of parameters should be configurable,\n they can be specified in `whitelist`.\n\n The decorator can be used without any parameters as follows:\n\n @config.configurable\n def some_configurable_function(param1, param2='a default value'):\n ...\n\n In this case, the function is associated with the name\n `'some_configurable_function'` in the global configuration, and both `param1`\n and `param2` are configurable.\n\n The decorator can be supplied with parameters to specify the configurable name\n or supply a whitelist/blacklist:\n\n @config.configurable('explicit_configurable_name', whitelist='param2')\n def some_configurable_function(param1, param2='a default value'):\n ...\n\n In this case, the configurable is associated with the name\n `'explicit_configurable_name'` in the global configuration, and only `param2`\n is configurable.\n\n Classes can be decorated as well, in which case parameters of their\n constructors are made configurable:\n\n @config.configurable\n class SomeClass(object):\n def __init__(self, param1, param2='a default value'):\n ...\n\n In this case, the name of the configurable is `'SomeClass'`, and both `param1`\n and `param2` are configurable.\n\n Args:\n name_or_fn: A name for this configurable, or a function to decorate (in\n which case the name will be taken from that function). 
If not set,\n defaults to the name of the function/class that is being made\n configurable. If a name is provided, it may also include module components\n to be used for disambiguation (these will be appended to any components\n explicitly specified by `module`).\n module: The module to associate with the configurable, to help handle naming\n collisions. By default, the module of the function or class being made\n configurable will be used (if no module is specified as part of the name).\n whitelist: A whitelisted set of kwargs that should be configurable. All\n other kwargs will not be configurable. Only one of `whitelist` or\n `blacklist` should be specified.\n blacklist: A blacklisted set of kwargs that should not be configurable. All\n other kwargs will be configurable. Only one of `whitelist` or `blacklist`\n should be specified.\n\n Returns:\n When used with no parameters (or with a function/class supplied as the first\n parameter), it returns the decorated function or class. When used with\n parameters, it returns a function that can be applied to decorate the target\n function or class."} {"_id": "q_3114", "text": "Retrieve the \"operative\" configuration as a config string.\n\n The operative configuration consists of all parameter values used by\n configurable functions that are actually called during execution of the\n current program. Parameters associated with configurable functions that are\n not called (and so can have no effect on program execution) won't be included.\n\n The goal of the function is to return a config that captures the full set of\n relevant configurable \"hyperparameters\" used by a program. 
As such, the\n returned configuration will include the default values of arguments from\n configurable functions (as long as the arguments aren't blacklisted or missing\n from a supplied whitelist), as well as any parameter values overridden via\n `bind_parameter` or through `parse_config`.\n\n Any parameters that can't be represented as literals (capable of being parsed\n by `parse_config`) are excluded. The resulting config string is sorted\n lexicographically and grouped by configurable name.\n\n Args:\n max_line_length: A (soft) constraint on the maximum length of a line in the\n formatted string. Large nested structures will be split across lines, but\n e.g. long strings won't be split into a concatenation of shorter strings.\n continuation_indent: The indentation for continued lines.\n\n Returns:\n A config string capturing all parameter values used by the current program."} {"_id": "q_3115", "text": "Parse a list of config files followed by extra Gin bindings.\n\n This function is equivalent to:\n\n for config_file in config_files:\n gin.parse_config_file(config_file, skip_configurables)\n gin.parse_config(bindings, skip_configurables)\n if finalize_config:\n gin.finalize()\n\n Args:\n config_files: A list of paths to the Gin config files.\n bindings: A list of individual parameter binding strings.\n finalize_config: Whether to finalize the config after parsing and binding\n (defaults to True).\n skip_unknown: A boolean indicating whether unknown configurables and imports\n should be skipped instead of causing errors (alternatively a list of\n configurable names to skip if unknown). See `parse_config` for additional\n details."} {"_id": "q_3116", "text": "Parse and return a single Gin value."} {"_id": "q_3117", "text": "A function that should be called after parsing all Gin config files.\n\n Calling this function allows registered \"finalize hooks\" to inspect (and\n potentially modify) the Gin config, to provide additional functionality. 
Hooks\n should not modify the configuration object they receive directly; instead,\n they should return a dictionary mapping Gin binding keys to (new or updated)\n values. This way, all hooks see the config as originally parsed.\n\n Raises:\n RuntimeError: If the config is already locked.\n ValueError: If two or more hooks attempt to modify or introduce bindings for\n the same key. Since it is difficult to control the order in which hooks\n are registered, allowing this could yield unpredictable behavior."} {"_id": "q_3118", "text": "Provides an iterator over references in the given config.\n\n Args:\n config: A dictionary mapping scoped configurable names to argument bindings.\n to: If supplied, only yield references whose `configurable_fn` matches `to`.\n\n Yields:\n `ConfigurableReference` instances within `config`, maybe restricted to those\n matching the `to` parameter if it is supplied."} {"_id": "q_3119", "text": "Creates a constant that can be referenced from gin config files.\n\n After calling this function in Python, the constant can be referenced from\n within a Gin config file using the macro syntax. For example, in Python:\n\n gin.constant('THE_ANSWER', 42)\n\n Then, in a Gin config file:\n\n meaning.of_life = %THE_ANSWER\n\n Note that any Python object can be used as the value of a constant (including\n objects not representable as Gin literals). Values will be stored until\n program termination in a Gin-internal dictionary, so avoid creating constants\n with values that should have a limited lifetime.\n\n Optionally, a disambiguating module may be prefixed onto the constant\n name. For instance:\n\n gin.constant('some.modules.PI', 3.14159)\n\n Args:\n name: The name of the constant, possibly prepended by one or more\n disambiguating module components separated by periods. A macro with this\n name (including the modules) will be created.\n value: The value of the constant. This can be anything (including objects\n not representable as Gin literals). 
The value will be stored and returned\n whenever the constant is referenced.\n\n Raises:\n ValueError: If the constant's selector is invalid, or a constant with the\n given selector already exists."} {"_id": "q_3120", "text": "Decorator for an enum class that generates Gin constants from values.\n\n Generated constants have format `module.ClassName.ENUM_VALUE`. The module\n name is optional when using the constant.\n\n Args:\n cls: Class type.\n module: The module to associate with the constants, to help handle naming\n collisions. If `None`, `cls.__module__` will be used.\n\n Returns:\n Class type (identity function).\n\n Raises:\n TypeError: When applied to a non-enum class."} {"_id": "q_3121", "text": "Retrieves all selectors matching `partial_selector`.\n\n For instance, if \"one.a.b\" and \"two.a.b\" are stored in a `SelectorMap`, both\n `matching_selectors('b')` and `matching_selectors('a.b')` will return them.\n\n In the event that `partial_selector` exactly matches an existing complete\n selector, only that complete selector is returned. 
For instance, if\n \"a.b.c.d\" and \"c.d\" are stored, `matching_selectors('c.d')` will return only\n `['c.d']`, while `matching_selectors('d')` will return both.\n\n Args:\n partial_selector: The partial selector to find matches for.\n\n Returns:\n A list of selectors matching `partial_selector`."} {"_id": "q_3122", "text": "Returns all values matching `partial_selector` as a list."} {"_id": "q_3123", "text": "Returns the minimal selector that uniquely matches `complete_selector`.\n\n Args:\n complete_selector: A complete selector stored in the map.\n\n Returns:\n A partial selector that unambiguously matches `complete_selector`.\n\n Raises:\n KeyError: If `complete_selector` is not in the map."} {"_id": "q_3124", "text": "Sets the access permissions of the map.\n\n :param perms: the new permissions."} {"_id": "q_3125", "text": "Check if there are enough permissions for access"} {"_id": "q_3126", "text": "Creates a new mapping in the memory address space.\n\n :param addr: the starting address (taken as a hint). 
If C{addr} is C{0} the first big enough\n chunk of memory will be selected as starting address.\n :param size: the length of the mapping.\n :param perms: the access permissions to this memory.\n :param data_init: optional data to initialize this memory.\n :param name: optional name to give to this mapping\n :return: the starting address where the memory was mapped.\n :raises error:\n - 'Address shall be concrete' if C{addr} is not an integer number.\n - 'Address too big' if C{addr} goes beyond the limit of the memory.\n - 'Map already used' if the piece of memory starting in C{addr} and with length C{size} isn't free.\n :rtype: int"} {"_id": "q_3127", "text": "Translates a register ID from the disassembler object into the\n register name based on manticore's alias in the register file\n\n :param int reg_id: Register ID"} {"_id": "q_3128", "text": "Dynamic interface for writing cpu registers\n\n :param str register: register name (as listed in `self.all_registers`)\n :param value: register value\n :type value: int or long or Expression"} {"_id": "q_3129", "text": "Dynamic interface for reading cpu registers\n\n :param str register: register name (as listed in `self.all_registers`)\n :return: register value\n :rtype: int or long or Expression"} {"_id": "q_3130", "text": "Selects bytes from memory. Attempts to do so faster than via read_bytes.\n\n :param where: address to read from\n :param size: number of bytes to read\n :return: the bytes in memory"} {"_id": "q_3131", "text": "Reads int from memory\n\n :param int where: address to read from\n :param size: number of bits to read\n :return: the value read\n :rtype: int or BitVec\n :param force: whether to ignore memory permissions"} {"_id": "q_3132", "text": "Read a NUL-terminated concrete buffer from memory. 
Stops reading at first symbolic byte.\n\n :param int where: Address to read string from\n :param int max_length:\n The size in bytes to cap the string at, or None [default] for no\n limit.\n :param force: whether to ignore memory permissions\n :return: string read\n :rtype: str"} {"_id": "q_3133", "text": "Write `data` to the stack and decrement the stack pointer accordingly.\n\n :param str data: Data to write\n :param force: whether to ignore memory permissions"} {"_id": "q_3134", "text": "Read `nbytes` from the stack, increment the stack pointer, and return\n data.\n\n :param int nbytes: How many bytes to read\n :param force: whether to ignore memory permissions\n :return: Data read from the stack"} {"_id": "q_3135", "text": "Read a value from the stack and increment the stack pointer.\n\n :param force: whether to ignore memory permissions\n :return: Value read"} {"_id": "q_3136", "text": "Decode, and execute one instruction pointed by register PC"} {"_id": "q_3137", "text": "Notify listeners that an instruction has been executed."} {"_id": "q_3138", "text": "If we could not handle emulating an instruction, use Unicorn to emulate\n it.\n\n :param capstone.CsInsn instruction: The instruction object to emulate"} {"_id": "q_3139", "text": "remove decoded instruction from instruction cache"} {"_id": "q_3140", "text": "CPUID instruction.\n\n The ID flag (bit 21) in the EFLAGS register indicates support for the\n CPUID instruction. If a software procedure can set and clear this\n flag, the processor executing the procedure supports the CPUID\n instruction. This instruction operates the same in non-64-bit modes and\n 64-bit mode. 
CPUID returns processor identification and feature\n information in the EAX, EBX, ECX, and EDX registers.\n\n The instruction's output is dependent on the contents of the EAX\n register upon execution.\n\n :param cpu: current CPU."} {"_id": "q_3141", "text": "Logical inclusive OR.\n\n Performs a bitwise inclusive OR operation between the destination (first)\n and source (second) operands and stores the result in the destination operand location.\n\n Each bit of the result of the OR instruction is set to 0 if both corresponding\n bits of the first and second operands are 0; otherwise, each bit is set\n to 1.\n\n The OF and CF flags are cleared; the SF, ZF, and PF flags are set according to the result::\n\n DEST = DEST OR SRC;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3142", "text": "ASCII adjust AX after multiply.\n\n Adjusts the result of the multiplication of two unpacked BCD values\n to create a pair of unpacked (base 10) BCD values. The AX register is\n the implied source and destination operand for this instruction. The AAM\n instruction is only useful when it follows a MUL instruction that multiplies\n (binary multiplication) two unpacked BCD values and stores a word result\n in the AX register. The AAM instruction then adjusts the contents of the\n AX register to contain the correct 2-digit unpacked (base 10) BCD result.\n\n The SF, ZF, and PF flags are set according to the resulting binary value in the AL register.\n\n This instruction executes as described in compatibility mode and legacy mode.\n It is not valid in 64-bit mode.::\n\n tempAL = AL;\n AH = tempAL / 10;\n AL = tempAL MOD 10;\n\n :param cpu: current CPU."} {"_id": "q_3143", "text": "ASCII Adjust AL after subtraction.\n\n Adjusts the result of the subtraction of two unpacked BCD values to create a unpacked\n BCD result. 
The AL register is the implied source and destination operand for this instruction.\n The AAS instruction is only useful when it follows a SUB instruction that subtracts\n (binary subtraction) one unpacked BCD value from another and stores a byte result in the AL\n register. The AAS instruction then adjusts the contents of the AL register to contain the\n correct 1-digit unpacked BCD result. If the subtraction produced a decimal carry, the AH register\n is decremented by 1, and the CF and AF flags are set. If no decimal carry occurred, the CF and AF\n flags are cleared, and the AH register is unchanged. In either case, the AL register is left with\n its top nibble set to 0.\n\n The AF and CF flags are set to 1 if there is a decimal borrow; otherwise, they are cleared to 0.\n\n This instruction executes as described in compatibility mode and legacy mode.\n It is not valid in 64-bit mode.::\n\n\n IF ((AL AND 0FH) > 9) Operators.OR(AF = 1)\n THEN\n AX = AX - 6;\n AH = AH - 1;\n AF = 1;\n CF = 1;\n ELSE\n CF = 0;\n AF = 0;\n FI;\n AL = AL AND 0FH;\n\n :param cpu: current CPU."} {"_id": "q_3144", "text": "Adds with carry.\n\n Adds the destination operand (first operand), the source operand (second operand),\n and the carry (CF) flag and stores the result in the destination operand. The state\n of the CF flag represents a carry from a previous addition. When an immediate value\n is used as an operand, it is sign-extended to the length of the destination operand\n format. The ADC instruction does not distinguish between signed or unsigned operands.\n Instead, the processor evaluates the result for both data types and sets the OF and CF\n flags to indicate a carry in the signed or unsigned result, respectively. The SF flag\n indicates the sign of the signed result. 
The ADC instruction is usually executed as\n part of a multibyte or multiword addition in which an ADD instruction is followed by an\n ADC instruction::\n\n DEST = DEST + SRC + CF;\n\n The OF, SF, ZF, AF, CF, and PF flags are set according to the result.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3145", "text": "Compares and exchanges bytes.\n\n Compares the 64-bit value in EDX:EAX (or 128-bit value in RDX:RAX if\n operand size is 128 bits) with the operand (destination operand). If\n the values are equal, the 64-bit value in ECX:EBX (or 128-bit value in\n RCX:RBX) is stored in the destination operand. Otherwise, the value in\n the destination operand is loaded into EDX:EAX (or RDX:RAX)::\n\n IF (64-Bit Mode and OperandSize = 64)\n THEN\n IF (RDX:RAX = DEST)\n THEN\n ZF = 1;\n DEST = RCX:RBX;\n ELSE\n ZF = 0;\n RDX:RAX = DEST;\n FI\n ELSE\n IF (EDX:EAX = DEST)\n THEN\n ZF = 1;\n DEST = ECX:EBX;\n ELSE\n ZF = 0;\n EDX:EAX = DEST;\n FI;\n FI;\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3146", "text": "Decimal adjusts AL after addition.\n\n Adjusts the sum of two packed BCD values to create a packed BCD result. The AL register\n is the implied source and destination operand. If a decimal carry is detected, the CF\n and AF flags are set accordingly.\n The CF and AF flags are set if the adjustment of the value results in a decimal carry in\n either digit of the result. 
The SF, ZF, and PF flags are set according to the result.\n\n This instruction is not valid in 64-bit mode.::\n\n IF (((AL AND 0FH) > 9) or AF = 1)\n THEN\n AL = AL + 6;\n CF = CF OR CarryFromLastAddition; (* CF OR carry from AL = AL + 6 *)\n AF = 1;\n ELSE\n AF = 0;\n FI;\n IF (((AL AND F0H) > 90H) or CF = 1)\n THEN\n AL = AL + 60H;\n CF = 1;\n ELSE\n CF = 0;\n FI;\n\n :param cpu: current CPU."} {"_id": "q_3147", "text": "Signed divide.\n\n Divides (signed) the value in the AL, AX, or EAX register by the source\n operand and stores the result in the AX, DX:AX, or EDX:EAX registers.\n The source operand can be a general-purpose register or a memory\n location. The action of this instruction depends on the operand size.::\n\n IF SRC = 0\n THEN #DE; (* divide error *)\n FI;\n IF OperandSize = 8 (* word/byte operation *)\n THEN\n temp = AX / SRC; (* signed division *)\n IF (temp > 7FH) OR (temp < 80H)\n (* if a positive result is greater than 7FH or a negative result is\n less than 80H *)\n THEN #DE; (* divide error *) ;\n ELSE\n AL = temp;\n AH = AX SignedModulus SRC;\n FI;\n ELSE\n IF OperandSize = 16 (* doubleword/word operation *)\n THEN\n temp = DX:AX / SRC; (* signed division *)\n IF (temp > 7FFFH) OR (temp < 8000H)\n (* if a positive result is greater than 7FFFH *)\n (* or a negative result is less than 8000H *)\n THEN #DE; (* divide error *) ;\n ELSE\n AX = temp;\n DX = DX:AX SignedModulus SRC;\n FI;\n ELSE (* quadword/doubleword operation *)\n temp = EDX:EAX / SRC; (* signed division *)\n IF (temp > 7FFFFFFFH) OR (temp < 80000000H)\n (* if a positive result is greater than 7FFFFFFFH *)\n (* or a negative result is less than 80000000H *)\n THEN #DE; (* divide error *) ;\n ELSE\n EAX = temp;\n EDX = EDX:EAX SignedModulus SRC;\n FI;\n FI;\n FI;\n\n :param cpu: current CPU.\n :param src: source operand."} {"_id": "q_3148", "text": "Signed multiply.\n\n Performs a signed multiplication of two operands.
This instruction has\n three forms, depending on the number of operands.\n - One-operand form. This form is identical to that used by the MUL\n instruction. Here, the source operand (in a general-purpose\n register or memory location) is multiplied by the value in the AL,\n AX, or EAX register (depending on the operand size) and the product\n is stored in the AX, DX:AX, or EDX:EAX registers, respectively.\n - Two-operand form. With this form the destination operand (the\n first operand) is multiplied by the source operand (second\n operand). The destination operand is a general-purpose register and\n the source operand is an immediate value, a general-purpose\n register, or a memory location. The product is then stored in the\n destination operand location.\n - Three-operand form. This form requires a destination operand (the\n first operand) and two source operands (the second and the third\n operands). Here, the first source operand (which can be a\n general-purpose register or a memory location) is multiplied by the\n second source operand (an immediate value). The product is then\n stored in the destination operand (a general-purpose register).\n\n When an immediate value is used as an operand, it is sign-extended to\n the length of the destination operand format. The CF and OF flags are\n set when significant bits are carried into the upper half of the\n result. The CF and OF flags are cleared when the result fits exactly in\n the lower half of the result. The three forms of the IMUL instruction\n are similar in that the length of the product is calculated to twice\n the length of the operands. With the one-operand form, the product is\n stored exactly in the destination. With the two- and three-operand\n forms, however, the result is truncated to the length of the destination\n before it is stored in the destination register. Because of this\n truncation, the CF or OF flag should be tested to ensure that no\n significant bits are lost.
The two- and three-operand forms may also be\n used with unsigned operands because the lower half of the product is\n the same regardless of whether the operands are signed or unsigned. The CF and\n OF flags, however, cannot be used to determine if the upper half of the\n result is non-zero::\n\n IF (NumberOfOperands == 1)\n THEN\n IF (OperandSize == 8)\n THEN\n AX = AL * SRC (* Signed multiplication *)\n IF AL == AX\n THEN\n CF = 0; OF = 0;\n ELSE\n CF = 1; OF = 1;\n FI;\n ELSE\n IF OperandSize == 16\n THEN\n DX:AX = AX * SRC (* Signed multiplication *)\n IF sign_extend_to_32 (AX) == DX:AX\n THEN\n CF = 0; OF = 0;\n ELSE\n CF = 1; OF = 1;\n FI;\n ELSE\n IF OperandSize == 32\n THEN\n EDX:EAX = EAX * SRC (* Signed multiplication *)\n IF EAX == EDX:EAX\n THEN\n CF = 0; OF = 0;\n ELSE\n CF = 1; OF = 1;\n FI;\n ELSE (* OperandSize = 64 *)\n RDX:RAX = RAX * SRC (* Signed multiplication *)\n IF RAX == RDX:RAX\n THEN\n CF = 0; OF = 0;\n ELSE\n CF = 1; OF = 1;\n FI;\n FI;\n FI;\n ELSE\n IF (NumberOfOperands == 2)\n THEN\n temp = DEST * SRC (* Signed multiplication; temp is double DEST size *)\n DEST = DEST * SRC (* Signed multiplication *)\n IF temp != DEST\n THEN\n CF = 1; OF = 1;\n ELSE\n CF = 0; OF = 0;\n FI;\n ELSE (* NumberOfOperands = 3 *)\n DEST = SRC1 * SRC2 (* Signed multiplication *)\n temp = SRC1 * SRC2 (* Signed multiplication; temp is double SRC1 size *)\n IF temp != DEST\n THEN\n CF = 1; OF = 1;\n ELSE\n CF = 0; OF = 0;\n FI;\n FI;\n FI;\n\n :param cpu: current CPU.\n :param operands: variable list of operands."} {"_id": "q_3149", "text": "Unsigned multiply.\n\n Performs an unsigned multiplication of the first operand (destination\n operand) and the second operand (source operand) and stores the result\n in the destination operand.
The destination operand is an implied operand\n located in register AL, AX or EAX (depending on the size of the operand);\n the source operand is located in a general-purpose register or a memory location.\n\n The result is stored in register AX, register pair DX:AX, or register\n pair EDX:EAX (depending on the operand size), with the high-order bits\n of the product contained in register AH, DX, or EDX, respectively. If\n the high-order bits of the product are 0, the CF and OF flags are cleared;\n otherwise, the flags are set::\n\n IF byte operation\n THEN\n AX = AL * SRC\n ELSE (* word or doubleword operation *)\n IF OperandSize = 16\n THEN\n DX:AX = AX * SRC\n ELSE (* OperandSize = 32 *)\n EDX:EAX = EAX * SRC\n FI;\n FI;\n\n :param cpu: current CPU.\n :param src: source operand."} {"_id": "q_3150", "text": "Two's complement negation.\n\n Replaces the value of operand (the destination operand) with its two's complement.\n (This operation is equivalent to subtracting the operand from 0.) The destination operand is\n located in a general-purpose register or a memory location::\n\n IF DEST = 0\n THEN CF = 0\n ELSE CF = 1;\n FI;\n DEST = - (DEST)\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3151", "text": "Integer subtraction with borrow.\n\n Adds the source operand (second operand) and the carry (CF) flag, and\n subtracts the result from the destination operand (first operand). The\n result of the subtraction is stored in the destination operand. The\n destination operand can be a register or a memory location; the source\n operand can be an immediate, a register, or a memory location.\n (However, two memory operands cannot be used in one instruction.) The\n state of the CF flag represents a borrow from a previous subtraction.\n When an immediate value is used as an operand, it is sign-extended to\n the length of the destination operand format.\n The SBB instruction does not distinguish between signed or unsigned\n operands. 
Instead, the processor evaluates the result for both data\n types and sets the OF and CF flags to indicate a borrow in the signed\n or unsigned result, respectively. The SF flag indicates the sign of the\n signed result. The SBB instruction is usually executed as part of a\n multibyte or multiword subtraction in which a SUB instruction is\n followed by a SBB instruction::\n\n DEST = DEST - (SRC + CF);\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3152", "text": "Exchanges and adds.\n\n Exchanges the first operand (destination operand) with the second operand\n (source operand), then loads the sum of the two values into the destination\n operand. The destination operand can be a register or a memory location;\n the source operand is a register.\n This instruction can be used with a LOCK prefix::\n\n TEMP = SRC + DEST\n SRC = DEST\n DEST = TEMP\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3153", "text": "Byte swap.\n\n Reverses the byte order of a 32-bit (destination) register: bits 0 through\n 7 are swapped with bits 24 through 31, and bits 8 through 15 are swapped\n with bits 16 through 23. 
This instruction is provided for converting little-endian\n values to big-endian format and vice versa.\n To swap bytes in a word value (16-bit register), use the XCHG instruction.\n When the BSWAP instruction references a 16-bit register, the result is\n undefined::\n\n TEMP = DEST\n DEST[7..0] = TEMP[31..24]\n DEST[15..8] = TEMP[23..16]\n DEST[23..16] = TEMP[15..8]\n DEST[31..24] = TEMP[7..0]\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3154", "text": "Conditional move - Greater.\n\n Tests the status flags in the EFLAGS register and moves the source operand\n (second operand) to the destination operand (first operand) if the given\n test condition is true.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3155", "text": "Conditional move - Overflow.\n\n Tests the status flags in the EFLAGS register and moves the source operand\n (second operand) to the destination operand (first operand) if the given\n test condition is true.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3156", "text": "Conditional move - Not overflow.\n\n Tests the status flags in the EFLAGS register and moves the source operand\n (second operand) to the destination operand (first operand) if the given\n test condition is true.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3157", "text": "Loads status flags into AH register.\n\n Moves the low byte of the EFLAGS register (which includes status flags\n SF, ZF, AF, PF, and CF) to the AH register. 
Reserved bits 1, 3, and 5\n of the EFLAGS register are loaded into AH as 1, 0, and 0, respectively::\n\n AH = EFLAGS(SF:ZF:0:AF:0:PF:1:CF);\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3158", "text": "Loads effective address.\n\n Computes the effective address of the second operand (the source operand) and stores it in the first operand\n (destination operand). The source operand is a memory address (offset part) specified with one of the processor's\n addressing modes; the destination operand is a general-purpose register. The address-size and operand-size\n attributes affect the action performed by this instruction. The operand-size\n attribute of the instruction is determined by the chosen register; the address-size attribute is determined by the\n attribute of the code segment.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3159", "text": "Moves data after swapping bytes.\n\n Performs a byte swap operation on the data copied from the second operand (source operand) and stores the result\n in the first operand (destination operand). The source operand can be a general-purpose register, or memory location; the destination register can be a general-purpose register, or a memory location; however, both operands\n cannot be registers, and only one operand can be a memory location. Both operands must be the same size, which can\n be a word, a doubleword, or a quadword.\n The MOVBE instruction is provided for swapping the bytes on a read from memory or on a write to memory; thus\n providing support for converting little-endian values to big-endian format and vice versa.\n In 64-bit mode, the instruction's default operation size is 32 bits. Use of the REX.R prefix permits access to additional registers (R8-R15).
Use of the REX.W prefix promotes operation to 64 bits::\n\n TEMP = SRC\n IF ( OperandSize = 16)\n THEN\n DEST[7:0] = TEMP[15:8];\n DEST[15:8] = TEMP[7:0];\n ELSE IF ( OperandSize = 32)\n DEST[7:0] = TEMP[31:24];\n DEST[15:8] = TEMP[23:16];\n DEST[23:16] = TEMP[15:8];\n DEST[31:24] = TEMP[7:0];\n ELSE IF ( OperandSize = 64)\n DEST[7:0] = TEMP[63:56];\n DEST[15:8] = TEMP[55:48];\n DEST[23:16] = TEMP[47:40];\n DEST[31:24] = TEMP[39:32];\n DEST[39:32] = TEMP[31:24];\n DEST[47:40] = TEMP[23:16];\n DEST[55:48] = TEMP[15:8];\n DEST[63:56] = TEMP[7:0];\n FI;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3160", "text": "Stores AH into flags.\n\n Loads the SF, ZF, AF, PF, and CF flags of the EFLAGS register with values\n from the corresponding bits in the AH register (bits 7, 6, 4, 2, and 0,\n respectively). Bits 1, 3, and 5 of register AH are ignored; the corresponding\n reserved bits (1, 3, and 5) in the EFLAGS register remain as shown below::\n\n EFLAGS(SF:ZF:0:AF:0:PF:1:CF) = AH;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3161", "text": "Sets byte if below or equal.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3162", "text": "Sets byte if carry.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3163", "text": "Sets byte if equal.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3164", "text": "Sets byte if greater or equal.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3165", "text": "Sets byte if not above or equal.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3166", "text": "Sets byte if not below.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3167", "text": "Sets byte if not less or equal.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": 
"q_3168", "text": "Sets byte if not sign.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3169", "text": "Sets byte if not zero.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3170", "text": "Sets byte if overflow.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3171", "text": "Sets byte if parity.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3172", "text": "Sets byte if parity odd.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3173", "text": "Sets byte if sign.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3174", "text": "Sets byte if zero.\n\n :param cpu: current CPU.\n :param dest: destination operand."} {"_id": "q_3175", "text": "High level procedure exit.\n\n Releases the stack frame set up by an earlier ENTER instruction. The\n LEAVE instruction copies the frame pointer (in the EBP register) into\n the stack pointer register (ESP), which releases the stack space allocated\n to the stack frame. 
The old frame pointer (the frame pointer for the calling\n procedure that was saved by the ENTER instruction) is then popped from\n the stack into the EBP register, restoring the calling procedure's stack\n frame.\n A RET instruction is commonly executed following a LEAVE instruction\n to return program control to the calling procedure::\n\n IF Stackaddress_bit_size = 32\n THEN\n ESP = EBP;\n ELSE (* Stackaddress_bit_size = 16*)\n SP = BP;\n FI;\n IF OperandSize = 32\n THEN\n EBP = Pop();\n ELSE (* OperandSize = 16*)\n BP = Pop();\n FI;\n\n :param cpu: current CPU."} {"_id": "q_3176", "text": "Pushes a value onto the stack.\n\n Decrements the stack pointer and then stores the source operand on the top of the stack.\n\n :param cpu: current CPU.\n :param src: source operand."} {"_id": "q_3177", "text": "Procedure call.\n\n Saves procedure linking information on the stack and branches to the called procedure specified using the target\n operand. The target operand specifies the address of the first instruction in the called procedure. The operand can\n be an immediate value, a general-purpose register, or a memory location.\n\n :param cpu: current CPU.\n :param op0: target operand."} {"_id": "q_3178", "text": "Returns from procedure.\n\n Transfers program control to a return address located on the top of\n the stack. 
The address is usually placed on the stack by a CALL instruction,\n and the return is made to the instruction that follows the CALL instruction.\n The optional source operand specifies the number of stack bytes to be\n released after the return address is popped; the default is none.\n\n :param cpu: current CPU.\n :param operands: variable operands list."} {"_id": "q_3179", "text": "Jumps short if above.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3180", "text": "Jumps short if below.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3181", "text": "Jumps short if below or equal.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3182", "text": "Jumps short if carry.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3183", "text": "Jumps short if CX register is 0.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3184", "text": "Jumps short if ECX register is 0.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3185", "text": "Jumps short if greater.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3186", "text": "Jumps short if greater or equal.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3187", "text": "Jumps short if not equal.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3188", "text": "Jumps short if not parity.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3189", "text": "Jumps short if overflow.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3190", "text": "Jumps short if sign.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3191", "text": "Jumps short if zero.\n\n :param cpu: current CPU.\n :param target: destination operand."} {"_id": "q_3192", "text": "Rotates through carry 
left.\n\n Shifts (rotates) the bits of the first operand (destination operand) the number of bit positions specified in the\n second operand (count operand) and stores the result in the destination operand. The destination operand can be\n a register or a memory location; the count operand is an unsigned integer that can be an immediate or a value in\n the CL register. In legacy and compatibility mode, the processor restricts the count to a number between 0 and 31\n by masking all the bits in the count operand except the 5 least-significant bits.\n\n The RCL instruction shifts the CF flag into the least-significant bit and shifts the most-significant bit into the CF flag.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: count operand."} {"_id": "q_3193", "text": "Shift arithmetic right.\n\n The shift arithmetic right (SAR) and shift logical right (SHR) instructions shift the bits of the destination operand to\n the right (toward less significant bit locations). For each shift count, the least significant bit of the destination\n operand is shifted into the CF flag, and the most significant bit is either set or cleared depending on the instruction\n type. The SHR instruction clears the most significant bit. The SAR instruction sets or clears the most significant bit\n to correspond to the sign (most significant bit) of the original value in the destination operand. In effect, the SAR\n instruction fills the emptied bit positions with the sign bit of the original value.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3194", "text": "Shift logical right.\n\n The shift arithmetic right (SAR) and shift logical right (SHR)\n instructions shift the bits of the destination operand to the right\n (toward less significant bit locations).
For each shift count, the\n least significant bit of the destination operand is shifted into the CF\n flag, and the most significant bit is either set or cleared depending\n on the instruction type. The SHR instruction clears the most\n significant bit.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: count operand."} {"_id": "q_3195", "text": "Double precision shift right.\n\n Shifts the first operand (destination operand) to the right the number of bits specified by the third operand\n (count operand). The second operand (source operand) provides bits to shift in from the left (starting with\n the most significant bit of the destination operand).\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand.\n :param count: count operand"} {"_id": "q_3196", "text": "Bit scan forward.\n\n Searches the source operand (second operand) for the least significant\n set bit (1 bit). If a least significant 1 bit is found, its bit index\n is stored in the destination operand (first operand). The source operand\n can be a register or a memory location; the destination operand is a register.\n The bit index is an unsigned offset from bit 0 of the source operand.\n If the content of the source operand is 0, the content of the destination\n operand is undefined::\n\n IF SRC = 0\n THEN\n ZF = 1;\n DEST is undefined;\n ELSE\n ZF = 0;\n temp = 0;\n WHILE Bit(SRC, temp) = 0\n DO\n temp = temp + 1;\n OD;\n DEST = temp;\n FI;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3197", "text": "Bit scan reverse.\n\n Searches the source operand (second operand) for the most significant\n set bit (1 bit). If a most significant 1 bit is found, its bit index is\n stored in the destination operand (first operand).
The source operand\n can be a register or a memory location; the destination operand is a register.\n The bit index is an unsigned offset from bit 0 of the source operand.\n If the content of the source operand is 0, the content of the destination\n operand is undefined::\n\n IF SRC = 0\n THEN\n ZF = 1;\n DEST is undefined;\n ELSE\n ZF = 0;\n temp = OperandSize - 1;\n WHILE Bit(SRC, temp) = 0\n DO\n temp = temp - 1;\n OD;\n DEST = temp;\n FI;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3198", "text": "Bit test and complement.\n\n Selects the bit in a bit string (specified with the first operand, called\n the bit base) at the bit-position designated by the bit offset operand\n (second operand), stores the value of the bit in the CF flag, and complements\n the selected bit in the bit string.\n\n :param cpu: current CPU.\n :param dest: bit base operand.\n :param src: bit offset operand."} {"_id": "q_3199", "text": "Loads string.\n\n Loads a byte, word, or doubleword from the source operand into the AL, AX, or EAX register, respectively. The\n source operand is a memory location, the address of which is read from the DS:ESI or the DS:SI registers\n (depending on the address-size attribute of the instruction, 32 or 16, respectively). The DS segment may be\n overridden with a segment override prefix.\n After the byte, word, or doubleword is transferred from the memory location into the AL, AX, or EAX register, the\n (E)SI register is incremented or decremented automatically according to the setting of the DF flag in the EFLAGS\n register.
(If the DF flag is 0, the (E)SI register is incremented; if the DF flag is 1, the (E)SI register is decremented.)\n The (E)SI register is incremented or decremented by 1 for byte operations, by 2 for word operations, or by 4 for\n doubleword operations.\n\n :param cpu: current CPU.\n :param dest: source operand."} {"_id": "q_3200", "text": "Moves data from string to string.\n\n Moves the byte, word, or doubleword specified with the second operand (source operand) to the location specified\n with the first operand (destination operand). Both the source and destination operands are located in memory. The\n address of the source operand is read from the DS:ESI or the DS:SI registers (depending on the address-size\n attribute of the instruction, 32 or 16, respectively). The address of the destination operand is read from the ES:EDI\n or the ES:DI registers (again depending on the address-size attribute of the instruction). The DS segment may be\n overridden with a segment override prefix, but the ES segment cannot be overridden.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3201", "text": "Stores string.\n\n Stores a byte, word, or doubleword from the AL, AX, or EAX register,\n respectively, into the destination operand. The destination operand is\n a memory location, the address of which is read from either the ES:EDI\n or the ES:DI registers (depending on the address-size attribute of the\n instruction, 32 or 16, respectively).
The ES segment cannot be overridden\n with a segment override prefix.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3202", "text": "Shift arithmetic right.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: count operand."} {"_id": "q_3203", "text": "Packed shuffle words.\n\n Copies words from the source operand (second operand) and inserts them in the destination operand\n (first operand) at locations selected with the order operand (third operand).\n\n :param cpu: current CPU.\n :param op0: destination operand.\n :param op1: source operand.\n :param op3: order operand."} {"_id": "q_3204", "text": "Packed shuffle doublewords.\n\n Copies doublewords from source operand (second operand) and inserts them in the destination operand\n (first operand) at locations selected with the order operand (third operand).\n\n :param cpu: current CPU.\n :param op0: destination operand.\n :param op1: source operand.\n :param op3: order operand."} {"_id": "q_3205", "text": "Moves byte mask to general-purpose register.\n\n Creates a mask made up of the most significant bit of each byte of the source operand\n (second operand) and stores the result in the low byte or word of the destination operand\n (first operand). The source operand is an MMX(TM) technology register or an XMM register; the destination\n operand is a general-purpose register.\n\n :param cpu: current CPU.\n :param op0: destination operand.\n :param op1: source operand."} {"_id": "q_3206", "text": "Packed shift right logical double quadword.\n\n Shifts the destination operand (first operand) to the right by the number\n of bytes specified in the count operand (second operand). The empty high-order\n bytes are cleared (set to all 0s). If the value specified by the count\n operand is greater than 15, the destination operand is set to all 0s.\n The destination operand is an XMM register.
The count operand is an 8-bit\n immediate::\n\n TEMP = SRC;\n if (TEMP > 15) TEMP = 16;\n DEST = DEST >> (temp * 8);\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: count operand."} {"_id": "q_3207", "text": "Moves with sign-extension.\n\n Copies the contents of the source operand (register or memory location) to the destination\n operand (register) and sign extends the value to 16::\n\n OP0 = SignExtend(OP1);\n\n :param cpu: current CPU.\n :param op0: destination operand.\n :param op1: source operand."} {"_id": "q_3208", "text": "Converts word to doubleword.\n\n ::\n DX = sign-extend of AX.\n\n :param cpu: current CPU."} {"_id": "q_3209", "text": "Reads time-stamp counter.\n\n Loads the current value of the processor's time-stamp counter into the\n EDX:EAX registers. The time-stamp counter is contained in a 64-bit\n MSR. The high-order 32 bits of the MSR are loaded into the EDX\n register, and the low-order 32 bits are loaded into the EAX register.\n The processor increments the time-stamp counter MSR every clock cycle\n and resets it to 0 whenever the processor is reset.\n\n :param cpu: current CPU."} {"_id": "q_3210", "text": "Moves low packed double-precision floating-point value.\n\n Moves a double-precision floating-point value from the source operand (second operand) and the\n destination operand (first operand). The source and destination operands can be an XMM register\n or a 64-bit memory location. This instruction allows double-precision floating-point values to be moved\n to and from the low quadword of an XMM register and memory. It cannot be used for register to register\n or memory to memory moves. 
When the destination operand is an XMM register, the high quadword of the\n register remains unchanged.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3211", "text": "Moves high packed double-precision floating-point value.\n\n Moves a double-precision floating-point value from the source operand (second operand) to the\n destination operand (first operand). The source and destination operands can be an XMM register\n or a 64-bit memory location. This instruction allows double-precision floating-point values to be moved\n to and from the high quadword of an XMM register and memory. It cannot be used for register to\n register or memory to memory moves. When the destination operand is an XMM register, the low quadword\n of the register remains unchanged.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3212", "text": "Packed subtract.\n\n Performs a SIMD subtract of the packed integers of the source operand (second operand) from the packed\n integers of the destination operand (first operand), and stores the packed integer results in the\n destination operand. The source operand can be an MMX(TM) technology register or a 64-bit memory location,\n or it can be an XMM register or a 128-bit memory location. The destination operand can be an MMX or an XMM\n register.\n The PSUBB instruction subtracts packed byte integers. When an individual result is too large or too small\n to be represented in a byte, the result is wrapped around and the low 8 bits are written to the\n destination element.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3213", "text": "Move quadword.\n\n Copies a quadword from the source operand (second operand) to the destination operand (first operand).\n The source and destination operands can be MMX(TM) technology registers, XMM registers, or 64-bit memory\n locations.
This instruction can be used to move a quadword between two MMX registers or between an MMX register\n and a 64-bit memory location, or to move data between two XMM registers or between an XMM register and\n a 64-bit memory location. The instruction cannot be used to transfer data between memory locations.\n When the source operand is an XMM register, the low quadword is moved; when the destination operand is\n an XMM register, the quadword is stored to the low quadword of the register, and the high quadword is\n cleared to all 0s::\n\n MOVQ instruction when operating on MMX registers and memory locations:\n\n DEST = SRC;\n\n MOVQ instruction when source and destination operands are XMM registers:\n\n DEST[63-0] = SRC[63-0];\n\n MOVQ instruction when source operand is XMM register and destination operand is memory location:\n\n DEST = SRC[63-0];\n\n MOVQ instruction when source operand is memory location and destination operand is XMM register:\n\n DEST[63-0] = SRC;\n DEST[127-64] = 0000000000000000H;\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3214", "text": "Moves a scalar double-precision floating-point value.\n\n Moves a scalar double-precision floating-point value from the source\n operand (second operand) to the destination operand (first operand).\n The source and destination operands can be XMM registers or 64-bit memory\n locations. This instruction can be used to move a double-precision\n floating-point value to and from the low quadword of an XMM register and\n a 64-bit memory location, or to move a double-precision floating-point\n value between the low quadwords of two XMM registers. The instruction\n cannot be used to transfer data between memory locations.\n When the source and destination operands are XMM registers, the high\n quadword of the destination operand remains unchanged.
When the source\n operand is a memory location and destination operand is an XMM register,\n the high quadword of the destination operand is cleared to all 0s.\n\n :param cpu: current CPU.\n :param dest: destination operand.\n :param src: source operand."} {"_id": "q_3215", "text": "Moves a scalar single-precision floating-point value\n\n Moves a scalar single-precision floating-point value from the source operand (second operand)\n to the destination operand (first operand). The source and destination operands can be XMM\n registers or 32-bit memory locations. This instruction can be used to move a single-precision\n floating-point value to and from the low doubleword of an XMM register and a 32-bit memory\n location, or to move a single-precision floating-point value between the low doublewords of\n two XMM registers. The instruction cannot be used to transfer data between memory locations.\n When the source and destination operands are XMM registers, the three high-order doublewords of the\n destination operand remain unchanged. When the source operand is a memory location and destination\n operand is an XMM register, the three high-order doublewords of the destination operand are cleared to all 0s.\n\n //MOVSS instruction when source and destination operands are XMM registers:\n if(IsXMM(Source) && IsXMM(Destination))\n Destination[0..31] = Source[0..31];\n //Destination[32..127] remains unchanged\n //MOVSS instruction when source operand is XMM register and destination operand is memory location:\n else if(IsXMM(Source) && IsMemory(Destination))\n Destination = Source[0..31];\n //MOVSS instruction when source operand is memory location and destination operand is XMM register:\n else {\n Destination[0..31] = Source;\n Destination[32..127] = 0;\n }"} {"_id": "q_3216", "text": "Constrain state.\n\n :param manticore.core.smtlib.Bool constraint: Constraint to add"} {"_id": "q_3217", "text": "Create and return a symbolic buffer of length `nbytes`. 
The buffer is\n not written into State's memory; write it to the state's memory to\n introduce it into the program state.\n\n :param int nbytes: Length of the new buffer\n :param str label: (keyword arg only) The label to assign to the buffer\n :param bool cstring: (keyword arg only) Whether or not to enforce that the buffer is a cstring\n (i.e. no NULL bytes, except for the last byte). (bool)\n :param taint: Taint identifier of the new buffer\n :type taint: tuple or frozenset\n\n :return: :class:`~manticore.core.smtlib.expression.Expression` representing the buffer."} {"_id": "q_3218", "text": "Create and return a symbolic value that is `nbits` bits wide. Assign\n the value to a register or write it into the address space to introduce\n it into the program state.\n\n :param int nbits: The bitwidth of the value returned\n :param str label: The label to assign to the value\n :param taint: Taint identifier of this value\n :type taint: tuple or frozenset\n :return: :class:`~manticore.core.smtlib.expression.Expression` representing the value"} {"_id": "q_3219", "text": "Reads `nbytes` of symbolic data from a buffer in memory at `addr` and attempts to\n concretize it\n\n :param int address: Address of buffer to concretize\n :param int nbytes: Size of buffer to concretize\n :param bool constrain: If True, constrain the buffer to the concretized value\n :return: Concrete contents of buffer\n :rtype: list[int]"} {"_id": "q_3220", "text": "Check if expression is True and that it can not be False with current constraints"} {"_id": "q_3221", "text": "Iteratively finds the minimum value for a symbol within given constraints.\n\n :param constraints: constraints that the expression must fulfil\n :param X: a symbol or expression\n :param M: maximum number of iterations allowed"} {"_id": "q_3222", "text": "Spawns z3 solver process"} {"_id": "q_3223", "text": "Auxiliary method to reset the smtlib external solver to initial defaults"} {"_id": "q_3224", "text": "Send a string to the 
solver.\n\n :param cmd: a SMTLIBv2 command (ex. (check-sat))"} {"_id": "q_3225", "text": "Reads the response from the solver"} {"_id": "q_3226", "text": "Check the satisfiability of the current state\n\n :return: whether current state is satisfiable or not."} {"_id": "q_3227", "text": "Auxiliary method to send an assert"} {"_id": "q_3228", "text": "Ask the solver for one possible assignment for given expression using current set of constraints.\n The current set of expressions must be sat.\n\n NOTE: This is an internal method: it uses the current solver state (set of constraints!)."} {"_id": "q_3229", "text": "Check if two potentially symbolic values can be equal"} {"_id": "q_3230", "text": "Returns a list with all the possible values for the symbol x"} {"_id": "q_3231", "text": "Ask the solver for one possible result of given expression using given set of constraints."} {"_id": "q_3232", "text": "Colors the logging level in the logging record"} {"_id": "q_3233", "text": "Helper for finding the closest NULL or, effectively NULL byte from a starting address.\n\n :param Cpu cpu:\n :param ConstraintSet constrs: Constraints for current `State`\n :param int ptr: Address to start searching for a zero from\n :return: Offset from `ptr` to first byte that is 0 or an `Expression` that must be zero"} {"_id": "q_3234", "text": "Return all events that all subclasses have so far registered to publish."} {"_id": "q_3235", "text": "Returns a pstat.Stats instance with profiling results if `run` was called with `should_profile=True`.\n Otherwise, returns `None`."} {"_id": "q_3236", "text": "Runs analysis.\n\n :param int procs: Number of parallel worker processes\n :param timeout: Analysis timeout, in seconds"} {"_id": "q_3237", "text": "Enqueue it for processing"} {"_id": "q_3238", "text": "Dequeue a state with the max priority"} {"_id": "q_3239", "text": "Fork state on expression concretizations.\n Using policy build a list of solutions for expression.\n For the state on each 
solution setting the new state with setstate\n\n For example if expression is a Bool it may have 2 solutions: True or False.\n\n Parent\n (expression = ??)\n\n Child1 Child2\n (expression = True) (expression = False)\n setstate(True) setstate(False)\n\n The optional setstate() function is supposed to set the concrete value\n in the child state."} {"_id": "q_3240", "text": "Entry point of the Executor; called by workers to start analysis."} {"_id": "q_3241", "text": "Constructor for Decree binary analysis.\n\n :param str path: Path to binary to analyze\n :param str concrete_start: Concrete stdin to use before symbolic input\n :param kwargs: Forwarded to the Manticore constructor\n :return: Manticore instance, initialized with a Decree State\n :rtype: Manticore"} {"_id": "q_3242", "text": "Invoke all registered generic hooks"} {"_id": "q_3243", "text": "A helper method used to resolve a symbol name into a memory address when\n injecting hooks for analysis.\n\n :param symbol: function name to be resolved\n :type symbol: string\n\n :param line: if more functions are present, an optional line number can be included\n :type line: int or None"} {"_id": "q_3244", "text": "helper method for getting all binary symbols with SANDSHREW_ prepended.\n We do this in order to provide the symbols Manticore should hook on to\n perform main analysis.\n\n :param binary: str for binary to introspect.\n :rtype list: list of symbols from binary"} {"_id": "q_3245", "text": "Get a configuration variable group named |name|"} {"_id": "q_3246", "text": "Save current config state to a yml file stream identified by |f|\n\n :param f: where to write the config file"} {"_id": "q_3247", "text": "Load a yml-formatted configuration from file stream |f|\n\n :param file f: Where to read the config."} {"_id": "q_3248", "text": "Load config overrides from the yml file at |path|, or from default paths. 
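The config-override lookup order described in q_3248 (an explicit |path| first, then the default candidates in order) can be sketched as a small search loop; `find_config_path` is a hypothetical helper written here for illustration, not Manticore's actual function:

```python
import os

# Default candidates, in the order listed in the docstring
DEFAULT_PATHS = ("./mcore.yml", "./.mcore.yml", "./manticore.yml", "./.manticore.yml")

def find_config_path(path=None):
    """Return the config file to load, or None if no default exists.

    Mirrors the documented behaviour: a path that is explicitly
    provided but does not exist raises an exception."""
    if path is not None:
        if not os.path.isfile(path):
            raise FileNotFoundError(path)
        return path
    for candidate in DEFAULT_PATHS:
        if os.path.isfile(candidate):
            return candidate
    return None
```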
If a path\n is provided and it does not exist, raise an exception\n\n Default paths: ./mcore.yml, ./.mcore.yml, ./manticore.yml, ./.manticore.yml."} {"_id": "q_3249", "text": "Bring in provided config values to the args parser, and import entries to the config\n from all arguments that were actually passed on the command line\n\n :param parser: The arg parser\n :param args: The value that parser.parse_args returned"} {"_id": "q_3250", "text": "Like add, but can tolerate existing values; also updates the value.\n\n Mostly used for setting fields from imported INI files and modified CLI flags."} {"_id": "q_3251", "text": "Return the description, or a help string of variable identified by |name|."} {"_id": "q_3252", "text": "Returns the tuple type signature for the arguments of the contract constructor."} {"_id": "q_3253", "text": "Returns a copy of the Solidity JSON ABI item for the contract constructor.\n\n The content of the returned dict is described at https://solidity.readthedocs.io/en/latest/abi-spec.html#json_"} {"_id": "q_3254", "text": "Returns a copy of the Solidity JSON ABI item for the function associated with the selector ``hsh``.\n\n If no normal contract function has the specified selector, a dict describing the default or non-default\n fallback function is returned.\n\n The content of the returned dict is described at https://solidity.readthedocs.io/en/latest/abi-spec.html#json_"} {"_id": "q_3255", "text": "Returns the tuple type signature for the arguments of the function associated with the selector ``hsh``.\n\n If no normal contract function has the specified selector,\n the empty tuple type signature ``'()'`` is returned."} {"_id": "q_3256", "text": "Returns the signature of the normal function with the selector ``hsh``,\n or ``None`` if no such function exists.\n\n This function returns ``None`` for any selector that will be dispatched to a fallback function."} {"_id": "q_3257", "text": "Catches did_map_memory and copies the mapping into 
Manticore"} {"_id": "q_3258", "text": "Unmap Unicorn maps when Manticore unmaps them"} {"_id": "q_3259", "text": "Set memory protections in Unicorn correctly"} {"_id": "q_3260", "text": "Unicorn hook that transfers control to Manticore so it can execute the syscall"} {"_id": "q_3261", "text": "Wrapper that runs the _step function in a loop while handling exceptions"} {"_id": "q_3262", "text": "Copy registers and written memory back into Manticore"} {"_id": "q_3263", "text": "Copy memory writes from Manticore back into Unicorn in real-time"} {"_id": "q_3264", "text": "Sync register state from Manticore -> Unicorn"} {"_id": "q_3265", "text": "Only useful for setting FS right now."} {"_id": "q_3266", "text": "A decorator for marking functions as deprecated."} {"_id": "q_3267", "text": "Produce permutations of `lst`, where permutations are mutated by `func`. Used for flipping constraints. It is highly\n possible that the returned constraints are unsat; this is done blindly, without any attention to the constraints\n themselves.\n\n Considering lst as a list of constraints, e.g.\n\n [ C1, C2, C3 ]\n\n we'd like to consider scenarios of all possible permutations of flipped constraints, excluding the original list.\n So we'd like to generate:\n\n [ func(C1), C2 , C3 ],\n [ C1 , func(C2), C3 ],\n [ func(C1), func(C2), C3 ],\n [ C1 , C2 , func(C3)],\n .. 
etc\n\n This is effectively treating the list of constraints as a bitmask of width len(lst) and counting up, skipping the\n 0th element (unmodified array).\n\n The code below yields lists of constraints permuted as above by treating list indices as bitmasks from 1 to\n 2**len(lst) and applying func to all the set bit offsets."} {"_id": "q_3268", "text": "solve bytes in |datas| based on"} {"_id": "q_3269", "text": "Execute a symbolic run that follows a concrete run; return constraints generated\n and the stdin data produced"} {"_id": "q_3270", "text": "Load `program` and establish program state, such as stack and arguments.\n\n :param program str: The ELF binary to load\n :param argv list: argv array\n :param envp list: envp array"} {"_id": "q_3271", "text": "ARM kernel helpers\n\n https://www.kernel.org/doc/Documentation/arm/kernel_user_helpers.txt"} {"_id": "q_3272", "text": "Adds a file descriptor to the current file descriptor list\n\n :rtype: int\n :param f: the file descriptor to add.\n :return: the index of the file descriptor in the file descriptor 
list"} {"_id": "q_3273", "text": "Openat SystemCall - Similar to the open system call except for the dirfd argument:\n when the path contained in buf is relative, dirfd is used to resolve the relative path.\n The special value AT_FDCWD for dirfd makes the path relative to the current directory\n\n :param dirfd: directory file descriptor to refer in case of relative path at buf\n :param buf: address of zero-terminated pathname\n :param flags: file access bits\n :param mode: file permission mode"} {"_id": "q_3274", "text": "Synchronize a file's in-core state with that on disk."} {"_id": "q_3275", "text": "Wrapper for sys_sigaction"} {"_id": "q_3276", "text": "An implementation of chroot that does perform some basic error checking,\n but does not actually chroot.\n\n :param path: Path to chroot"} {"_id": "q_3277", "text": "Wrapper for mmap2"} {"_id": "q_3278", "text": "Syscall dispatcher."} {"_id": "q_3279", "text": "Yield CPU.\n This will choose another process from the running list and change\n current running process. May give the same cpu if only one running\n process."} {"_id": "q_3280", "text": "Wait for file descriptors or timeout.\n Adds the current process to the corresponding waiting list and\n yields the cpu to another running process."} {"_id": "q_3281", "text": "Awake process if timer has expired"} {"_id": "q_3282", "text": "Compute total load size of interpreter.\n\n :param ELFFile interp: interpreter ELF .so\n :return: total load size of interpreter, not aligned\n :rtype: int"} {"_id": "q_3283", "text": "A version of openat that includes a symbolic path and symbolic directory file descriptor\n\n :param dirfd: directory file descriptor\n :param buf: address of zero-terminated pathname\n :param flags: file access bits\n :param mode: file permission mode"} {"_id": "q_3284", "text": "receive - receive bytes from a file descriptor\n\n The receive system call reads up to count bytes from file descriptor fd to the\n buffer pointed to by buf. 
If count is zero, receive returns 0 and optionally\n sets *rx_bytes to zero.\n\n :param cpu: current CPU.\n :param fd: a valid file descriptor\n :param buf: a memory buffer\n :param count: max number of bytes to receive\n :param rx_bytes: if valid, points to the actual number of bytes received\n :return: 0 Success\n EBADF fd is not a valid file descriptor or is not open\n EFAULT buf or rx_bytes points to an invalid address."} {"_id": "q_3285", "text": "deallocate - remove allocations\n The deallocate system call deletes the allocations for the specified\n address range, and causes further references to the addresses within the\n range to generate invalid memory accesses. The region is also\n automatically deallocated when the process is terminated.\n\n The address addr must be a multiple of the page size. The length parameter\n specifies the size of the region to be deallocated in bytes. All pages\n containing a part of the indicated range are deallocated, and subsequent\n references will terminate the process. It is not an error if the indicated\n range does not contain any allocated pages.\n\n The deallocate function is invoked through system call number 6.\n\n :param cpu: current CPU\n :param addr: the starting address to unmap.\n :param size: the size of the portion to unmap.\n :return 0 On success\n EINVAL addr is not page aligned.\n EINVAL length is zero.\n EINVAL any part of the region being deallocated is outside the valid\n address range of the process."} {"_id": "q_3286", "text": "Yield CPU.\n This will choose another process from the RUNNING list and change\n current running process. 
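The scheduler behaviour these "Yield CPU" docstrings describe (choose another process from the running list; with only one runnable process the same one may be handed back) amounts to a simple rotation of the run queue. A minimal sketch, assuming a deque-based run queue; `yield_cpu` is a hypothetical stand-in, not the platform's actual method:

```python
from collections import deque

def yield_cpu(running: deque):
    """Rotate the running list and return the new current process.

    With only one runnable process, the same one is returned;
    with an empty list there is nothing to schedule."""
    if not running:
        return None
    running.rotate(-1)  # move the current process to the back
    return running[0]
```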
May give the same cpu if only one running\n process."} {"_id": "q_3287", "text": "Wait for file descriptors or timeout.\n Adds the current process to the corresponding waiting list and\n yields the cpu to another running process."} {"_id": "q_3288", "text": "Symbolic version of Decree.sys_receive"} {"_id": "q_3289", "text": "Symbolic version of Decree.sys_transmit"} {"_id": "q_3290", "text": "Synchronization decorator."} {"_id": "q_3291", "text": "Save an arbitrary, serializable `value` under `key`.\n\n :param str key: A string identifier under which to store the value.\n :param value: A serializable value\n :return:"} {"_id": "q_3292", "text": "Load an arbitrary value identified by `key`.\n\n :param str key: The key that identifies the value\n :return: The loaded value"} {"_id": "q_3293", "text": "Return a managed file-like object from which the calling code can read\n previously-serialized data.\n\n :param key:\n :return: A managed stream-like object"} {"_id": "q_3294", "text": "Yield a file object representing `key`\n\n :param str key: The file to save to\n :param bool binary: Whether we should treat it as binary\n :return:"} {"_id": "q_3295", "text": "Return just the filenames that match `glob_str` inside the store directory.\n\n :param str glob_str: A glob string, i.e. 'state_*'\n :return: list of matched keys"} {"_id": "q_3296", "text": "Save a state to storage, return identifier.\n\n :param state: The state to save\n :param int state_id: If not None force the state id potentially overwriting old states\n :return: New state id\n :rtype: int"} {"_id": "q_3297", "text": "Create an indexed output stream i.e. 
'test_00000001.name'\n\n :param name: Identifier for the stream\n :return: A context-managed stream-like object"} {"_id": "q_3298", "text": "Compare registers from a remote gdb session to current mcore.\n\n :param manticore.core.cpu Cpu: Current cpu\n :param bool should_print: Whether to print values to stdout\n :return: Whether or not any differences were detected\n :rtype: bool"} {"_id": "q_3299", "text": "Mirror some service calls in manticore. Happens after qemu executed an SVC\n instruction, but before manticore did."} {"_id": "q_3300", "text": "The entry point of the visitor.\n The exploration algorithm is a DFS post-order traversal.\n The implementation uses two stacks instead of recursion.\n The final result is stored in self.result.\n\n :param node: Node to explore\n :type node: Expression\n :param use_fixed_point: if True, it runs _methods until a fixed point is found\n :type use_fixed_point: Bool"} {"_id": "q_3301", "text": "Overload Visitor._method because we want to stop iterating over the\n visit_ functions as soon as a valid visit_ function is found"} {"_id": "q_3302", "text": "a + 0 ==> a\n 0 + a ==> a"} {"_id": "q_3303", "text": "a | 0 => a\n 0 | a => a\n 0xffffffff | a => 0xffffffff\n a | 0xffffffff => 0xffffffff"} {"_id": "q_3304", "text": "Build transaction data from function signature and arguments"} {"_id": "q_3305", "text": "Makes a function hash id from a method signature"} {"_id": "q_3306", "text": "Translates a python integral or a BitVec into a 32 byte string, MSB first"} {"_id": "q_3307", "text": "Translates a signed python integral or a BitVec into a 32 byte string, MSB first"} {"_id": "q_3308", "text": "Make sure an EVM instruction has all of its arguments concretized according to\n provided policies.\n\n Example decoration:\n\n @concretized_args(size='ONE', address='')\n def LOG(self, address, size, *topics):\n ...\n\n The above will make sure that the |size| parameter to LOG is Concretized when symbolic\n according to the 'ONE' policy 
and concretize |address| with the default policy.\n\n :param policies: A kwargs list of argument names and their respective policies.\n Provide None or '' as policy to use default.\n :return: A function decorator"} {"_id": "q_3309", "text": "This calculates the amount of extra gas needed for accessing\n previously unused memory.\n\n :param address: base memory offset\n :param size: size of the memory access"} {"_id": "q_3310", "text": "Read size bytes from the bytecode.\n If fewer than size bytes are available, the result is padded with \\x00"} {"_id": "q_3311", "text": "Push into the stack\n\n ITEM0\n ITEM1\n ITEM2\n sp-> {empty}"} {"_id": "q_3312", "text": "Read a value from the top of the stack without removing it"} {"_id": "q_3313", "text": "Revert the stack, gas, pc and memory allocation so it looks like before executing the instruction"} {"_id": "q_3314", "text": "Integer division operation"} {"_id": "q_3315", "text": "Signed modulo remainder operation"} {"_id": "q_3316", "text": "Calculate extra gas fee"} {"_id": "q_3317", "text": "Extend length of two's complement signed integer"} {"_id": "q_3318", "text": "Less-than comparison"} {"_id": "q_3319", "text": "Greater-than comparison"} {"_id": "q_3320", "text": "Signed greater-than comparison"} {"_id": "q_3321", "text": "Compute Keccak-256 hash"} {"_id": "q_3322", "text": "Get input data of current environment"} {"_id": "q_3323", "text": "Copy input data in current environment to memory"} {"_id": "q_3324", "text": "Copy an account's code to memory"} {"_id": "q_3325", "text": "Load word from memory"} {"_id": "q_3326", "text": "Save byte to memory"} {"_id": "q_3327", "text": "Load word from storage"} {"_id": "q_3328", "text": "Save word to storage"} {"_id": "q_3329", "text": "Conditionally alter the program counter"} {"_id": "q_3330", "text": "Exchange 1st and 2nd stack items"} {"_id": "q_3331", "text": "Message-call into this account with alternative account's code"} {"_id": "q_3332", "text": "Halt execution returning 
output data"} {"_id": "q_3333", "text": "Current ongoing human transaction"} {"_id": "q_3334", "text": "Read a value from a storage slot on the specified account\n\n :param storage_address: an account address\n :param offset: the storage slot to use.\n :type offset: int or BitVec\n :return: the value\n :rtype: int or BitVec"} {"_id": "q_3335", "text": "Writes a value to a storage slot in specified account\n\n :param storage_address: an account address\n :param offset: the storage slot to use.\n :type offset: int or BitVec\n :param value: the value to write\n :type value: int or BitVec"} {"_id": "q_3336", "text": "Gets all items in an account storage\n\n :param address: account address\n :return: all items in account storage. items are tuple of (index, value). value can be symbolic\n :rtype: list[(storage_index, storage_value)]"} {"_id": "q_3337", "text": "Create a fresh 160bit address"} {"_id": "q_3338", "text": "Toggle between ARM and Thumb mode"} {"_id": "q_3339", "text": "MRC moves to ARM register from coprocessor.\n\n :param Armv7Operand coprocessor: The name of the coprocessor; immediate\n :param Armv7Operand opcode1: coprocessor specific opcode; 3-bit immediate\n :param Armv7Operand dest: the destination operand: register\n :param Armv7Operand coprocessor_reg_n: the coprocessor register; immediate\n :param Armv7Operand coprocessor_reg_m: the coprocessor register; immediate\n :param Armv7Operand opcode2: coprocessor specific opcode; 3-bit immediate"} {"_id": "q_3340", "text": "Loads double width data from memory."} {"_id": "q_3341", "text": "Writes the contents of two registers to memory."} {"_id": "q_3342", "text": "Address to Register adds an immediate value to the PC value, and writes the result to the destination register.\n\n :param ARMv7Operand dest: Specifies the destination register.\n :param ARMv7Operand src:\n Specifies the label of an instruction or literal data item whose address is to be loaded into\n . 
The assembler calculates the required value of the offset from the Align(PC,4)\n value of the ADR instruction to this label."} {"_id": "q_3343", "text": "Compare and Branch on Zero compares the value in a register with zero, and conditionally branches forward\n a constant value. It does not affect the condition flags.\n\n :param ARMv7Operand op: Specifies the register that contains the first operand.\n :param ARMv7Operand dest:\n Specifies the label of the instruction that is to be branched to. The assembler calculates the\n required value of the offset from the PC value of the CBZ instruction to this label, then\n selects an encoding that will set imm32 to that offset. Allowed offsets are even numbers in\n the range 0 to 126."} {"_id": "q_3344", "text": "Get next instruction using the Capstone disassembler\n\n :param str code: binary blob to be disassembled\n :param long pc: program counter"} {"_id": "q_3345", "text": "Add a constraint to the set\n\n :param constraint: The constraint to add to the set.\n :param check: Currently unused.\n :return:"} {"_id": "q_3346", "text": "Declare the variable `var`"} {"_id": "q_3347", "text": "True if expression_var is declared in this constraint set"} {"_id": "q_3348", "text": "Perform the inverse transformation to encoded data. Will attempt best case reconstruction, which means\n it will return nan for handle_missing and handle_unknown settings that break the bijection. 
We issue\n warnings when some of those cases occur.\n\n Parameters\n ----------\n X_in : array-like, shape = [n_samples, n_features]\n\n Returns\n -------\n p: array, the same size of X_in"} {"_id": "q_3349", "text": "Convert basen code as integers.\n\n Parameters\n ----------\n X : DataFrame\n encoded data\n cols : list-like\n Column names in the DataFrame that be encoded\n base : int\n The base of transform\n\n Returns\n -------\n numerical: DataFrame"} {"_id": "q_3350", "text": "The lambda body to transform the column values"} {"_id": "q_3351", "text": "Returns names of 'object' columns in the DataFrame."} {"_id": "q_3352", "text": "Unite target data type into a Series.\n If the target is a Series or a DataFrame, we preserve its index.\n But if the target does not contain index attribute, we use the index from the argument."} {"_id": "q_3353", "text": "Here we iterate through the datasets and score them with a classifier using different encodings."} {"_id": "q_3354", "text": "A wrapper around click.secho that disables any coloring being used\n if colors have been disabled."} {"_id": "q_3355", "text": "Associate a notification template from this job template.\n\n =====API DOCS=====\n Associate a notification template from this job template.\n\n :param job_template: The job template to associate to.\n :type job_template: str\n :param notification_template: The notification template to be associated.\n :type notification_template: str\n :param status: type of notification this notification template should be associated to.\n :type status: str\n :returns: Dictionary of only one key \"changed\", which indicates whether the association succeeded.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3356", "text": "Disassociate a notification template from this job template.\n\n =====API DOCS=====\n Disassociate a notification template from this job template.\n\n :param job_template: The job template to disassociate from.\n :type job_template: str\n :param 
notification_template: The notification template to be disassociated.\n :type notification_template: str\n :param status: type of notification this notification template should be disassociated from.\n :type status: str\n :returns: Dictionary of only one key \"changed\", which indicates whether the disassociation succeeded.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3357", "text": "Contact Tower and request a configuration update using this job template.\n\n =====API DOCS=====\n Contact Tower and request a provisioning callback using this job template.\n\n :param pk: Primary key of the job template to run provisioning callback against.\n :type pk: int\n :param host_config_key: Key string used to authenticate the callback host.\n :type host_config_key: str\n :param extra_vars: Extra variables that are passed to provisioning callback.\n :type extra_vars: array of str\n :returns: A dictionary of a single key \"changed\", which indicates whether the provisioning callback\n is successful.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3358", "text": "Decorator to aggregate unified_jt-related fields.\n\n Args:\n func: The CRUD method to be decorated.\n is_create: Boolean flag showing whether this method is create.\n has_pk: Boolean flag showing whether this method uses pk as argument.\n\n Returns:\n A function with necessary click-related attributes whose keyword\n arguments are aggregated.\n\n Raises:\n exc.UsageError: Either more than one unified jt field is\n provided, or none is provided when the is_create flag is set."} {"_id": "q_3359", "text": "Internal method that lies to our `monitor` method by returning\n a scorecard for the workflow job where the standard out\n would have been expected."} {"_id": "q_3360", "text": "Monkey-patch click's format_options method to support option categorization."} {"_id": "q_3361", "text": "Return one and exactly one object.\n\n =====API DOCS=====\n Return one and exactly one Tower setting.\n\n :param pk: Primary key of 
the Tower setting to retrieve\n :type pk: int\n :returns: loaded JSON of the retrieved Tower setting object.\n :rtype: dict\n :raises tower_cli.exceptions.NotFound: When no specified Tower setting exists.\n\n =====API DOCS====="} {"_id": "q_3362", "text": "Return one and exactly one object.\n\n Lookups may be through a primary key, specified as a positional argument, and/or through filters specified\n through keyword arguments.\n\n If the number of results does not equal one, raise an exception.\n\n =====API DOCS=====\n Retrieve one and exactly one object.\n\n :param pk: Primary key of the resource to be read. Tower CLI will only attempt to read *that* object\n if ``pk`` is provided (not ``None``).\n :type pk: int\n :param `**kwargs`: Keyword arguments used to look up resource object to retrieve if ``pk`` is not provided.\n :returns: loaded JSON of the retrieved resource object.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3363", "text": "Disassociate the `other` record from the `me` record."} {"_id": "q_3364", "text": "Copy an object.\n\n Only the ID is used for the lookup. All provided fields are used to override the old data from the\n copied resource.\n\n =====API DOCS=====\n Copy an object.\n\n :param pk: Primary key of the resource object to be copied\n :param new_name: The new name to give the resource if deep copying via the API\n :type pk: int\n :param `**kwargs`: Keyword arguments of fields whose given value will override the original value.\n :returns: loaded JSON of the copied new resource object.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3365", "text": "Internal utility function to return standard out. 
Requires the pk of a unified job."} {"_id": "q_3366", "text": "Print out the standard out of a unified job to the command line or output file.\n For Projects, print the standard out of most recent update.\n For Inventory Sources, print standard out of most recent sync.\n For Jobs, print the job's standard out.\n For Workflow Jobs, print a status table of its jobs.\n\n =====API DOCS=====\n Print out the standard out of a unified job to the command line or output file.\n For Projects, print the standard out of most recent update.\n For Inventory Sources, print standard out of most recent sync.\n For Jobs, print the job's standard out.\n For Workflow Jobs, print a status table of its jobs.\n\n :param pk: Primary key of the job resource object to be monitored.\n :type pk: int\n :param start_line: Line at which to start printing job output\n :param end_line: Line at which to end printing job output\n :param outfile: Alternative file than stdout to write job stdout to.\n :type outfile: file\n :param `**kwargs`: Keyword arguments used to look up job resource object to monitor if ``pk`` is\n not provided.\n :returns: A dictionary containing changed=False\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3367", "text": "Stream the standard output from a job, project update, or inventory update.\n\n =====API DOCS=====\n Stream the standard output from a job run to stdout.\n\n :param pk: Primary key of the job resource object to be monitored.\n :type pk: int\n :param parent_pk: Primary key of the unified job template resource object whose latest job run will be\n monitored if ``pk`` is not set.\n :type parent_pk: int\n :param timeout: Number in seconds after which this method will time out.\n :type timeout: float\n :param interval: Polling interval to refresh content from Tower.\n :type interval: float\n :param outfile: Alternative file than stdout to write job stdout to.\n :type outfile: file\n :param `**kwargs`: Keyword arguments used to look up job resource object to 
monitor if ``pk`` is\n not provided.\n :returns: A dictionary combining the JSON output of the finished job resource object, as well as\n two extra fields: \"changed\", a flag indicating if the job resource object is finished\n as expected; \"id\", an integer which is the primary key of the job resource object being\n monitored.\n :rtype: dict\n :raises tower_cli.exceptions.Timeout: When monitor time reaches time out.\n :raises tower_cli.exceptions.JobFailure: When the job being monitored runs into failure.\n\n =====API DOCS====="} {"_id": "q_3368", "text": "Print the current job status. This is used to check a running job. You can look up the job with\n the same parameters used for a get request.\n\n =====API DOCS=====\n Retrieve the current job status.\n\n :param pk: Primary key of the resource to retrieve status from.\n :type pk: int\n :param detail: Flag that if set, return the full JSON of the job resource rather than a status summary.\n :type detail: bool\n :param `**kwargs`: Keyword arguments used to look up resource object to retrieve status from if ``pk``\n is not provided.\n :returns: full loaded JSON of the specified unified job if ``detail`` flag is on; trimmed JSON containing\n only \"elapsed\", \"failed\" and \"status\" fields of the unified job if ``detail`` flag is off.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3369", "text": "Cancel a currently running job.\n\n Fails with a non-zero exit status if the job cannot be canceled.\n You must provide either a pk or parameters in the job's identity.\n\n =====API DOCS=====\n Cancel a currently running job.\n\n :param pk: Primary key of the job resource to cancel.\n :type pk: int\n :param fail_if_not_running: Flag that if set, raise exception if the job resource cannot be canceled.\n :type fail_if_not_running: bool\n :param `**kwargs`: Keyword arguments used to look up job resource object to cancel if ``pk`` is not\n provided.\n :returns: A dictionary of two keys: \"status\", which is \"canceled\",
and \"changed\", which indicates if\n the job resource has been successfully canceled.\n :rtype: dict\n :raises tower_cli.exceptions.TowerCLIError: When the job resource cannot be canceled and\n ``fail_if_not_running`` flag is on.\n =====API DOCS====="} {"_id": "q_3370", "text": "Relaunch a stopped job.\n\n Fails with a non-zero exit status if the job cannot be relaunched.\n You must provide either a pk or parameters in the job's identity.\n\n =====API DOCS=====\n Relaunch a stopped job resource.\n\n :param pk: Primary key of the job resource to relaunch.\n :type pk: int\n :param `**kwargs`: Keyword arguments used to look up job resource object to relaunch if ``pk`` is not\n provided.\n :returns: A dictionary combining the JSON output of the relaunched job resource object, as well\n as an extra field \"changed\", a flag indicating if the job resource object is status-changed\n as expected.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3371", "text": "Update all related inventory sources of the given inventory.\n\n Note global option --format is not available here, as the output would always be JSON-formatted.\n\n =====API DOCS=====\n Update all related inventory sources of the given inventory.\n\n :param pk: Primary key of the given inventory.\n :type pk: int\n :param `**kwargs`: Keyword arguments list of available fields used for searching resource objects.\n :returns: A JSON object of update status of the given inventory.\n :rtype: dict\n =====API DOCS====="} {"_id": "q_3372", "text": "Do extra processing so we can display the actor field as\n a top-level field"} {"_id": "q_3373", "text": "Log the given output to stderr if and only if we are in\n verbose mode.\n\n If we are not in verbose mode, this is a no-op."} {"_id": "q_3374", "text": "Hook for ResourceMeta class to call when initializing model class.\n Saves fields obtained from resource class backlinks"} {"_id": "q_3375", "text": "Returns a callable which becomes the associate or disassociate\n method 
for the related field.\n Method can be overridden to add additional functionality, but\n `_produce_method` may also need to be subclassed to decorate\n it appropriately."} {"_id": "q_3376", "text": "Create a new label.\n\n There are two types of label creation: creating a new label in isolation and creating a new label under\n a job template. The two types are distinguished by whether the --job-template option is provided.\n\n Fields in the resource's `identity` tuple are used for a lookup; if a match is found, then no-op (unless\n `force_on_exists` is set) but do not fail (unless `fail_on_found` is set).\n\n =====API DOCS=====\n Create a label.\n\n :param job_template: Primary key or name of the job template for the created label to associate to.\n :type job_template: str\n :param fail_on_found: Flag that if set, the operation fails if an object matching the unique criteria\n already exists.\n :type fail_on_found: bool\n :param force_on_exists: Flag that if set, then if a match is found on unique fields, other fields will\n be updated to the provided values; if unset, a match causes the request to be\n a no-op.\n :type force_on_exists: bool\n :param `**kwargs`: Keyword arguments which, all together, will be used as POST body to create the\n resource object.\n :returns: A dictionary combining the JSON output of the created resource, as well as two extra fields:\n \"changed\", a flag indicating if the resource is created successfully; \"id\", an integer which\n is the primary key of the created object.\n :rtype: dict\n :raises tower_cli.exceptions.TowerCLIError: When the label already exists and ``fail_on_found`` flag is on.\n\n =====API DOCS====="} {"_id": "q_3377", "text": "Echo a setting to the CLI."} {"_id": "q_3378", "text": "Read or write tower-cli configuration.\n\n `tower config` saves the given setting to the appropriate Tower CLI config file;\n either the user's ~/.tower_cli.cfg file, or the /etc/tower/tower_cli.cfg\n file if --global is used.\n\n Writing to
/etc/tower/tower_cli.cfg is likely to require heightened\n permissions (in other words, sudo)."} {"_id": "q_3379", "text": "Export assets from Tower.\n\n 'tower receive' exports one or more assets from a Tower instance\n\n For all of the possible asset types, the TEXT can either be the asset's name\n (or username in the case of a user) or the keyword all. Specifying all\n will export all of the assets of that type."} {"_id": "q_3380", "text": "Modify an already existing project.\n\n To edit the project's organizations, see help for organizations.\n\n Fields in the resource's `identity` tuple can be used in lieu of a\n primary key for a lookup; in such a case, only other fields are\n written.\n\n To modify unique fields, you must use the primary key for the lookup.\n\n =====API DOCS=====\n Modify an already existing project.\n\n :param pk: Primary key of the resource to be modified.\n :type pk: int\n :param create_on_missing: Flag that if set, a new object is created if ``pk`` is not set and objects\n matching the appropriate unique criteria are not found.\n :type create_on_missing: bool\n :param `**kwargs`: Keyword arguments which, all together, will be used as PATCH body to modify the\n resource object.
If ``pk`` is not set, key-value pairs of ``**kwargs`` which are\n also in resource's identity will be used to look up the existing resource.\n :returns: A dictionary combining the JSON output of the modified resource, as well as two extra fields:\n \"changed\", a flag indicating if the resource is successfully updated; \"id\", an integer which\n is the primary key of the updated object.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3381", "text": "Print the status of the most recent update.\n\n =====API DOCS=====\n Print the status of the most recent update.\n\n :param pk: Primary key of the resource to retrieve status from.\n :type pk: int\n :param detail: Flag that if set, return the full JSON of the job resource rather than a status summary.\n :type detail: bool\n :param `**kwargs`: Keyword arguments used to look up resource object to retrieve status from if ``pk``\n is not provided.\n :returns: full loaded JSON of the specified unified job if ``detail`` flag is on; trimmed JSON containing\n only \"elapsed\", \"failed\" and \"status\" fields of the unified job if ``detail`` flag is off.\n :rtype: dict\n =====API DOCS====="} {"_id": "q_3382", "text": "Match against the appropriate choice value using the superclass\n implementation, and then return the actual choice."} {"_id": "q_3383", "text": "Return the appropriate integer value.
If a non-integer is\n provided, attempt a name-based lookup and return the primary key."} {"_id": "q_3384", "text": "Remove a failure node link.\n The resultant 2 nodes will both become root nodes.\n\n =====API DOCS=====\n Remove a failure node link.\n\n :param parent: Primary key of parent node to disassociate failure node from.\n :type parent: int\n :param child: Primary key of child node to be disassociated.\n :type child: int\n :returns: Dictionary of only one key \"changed\", which indicates whether the disassociation succeeded.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3385", "text": "Converts a set of CLI input arguments, `in_data`, into\n request data and an endpoint that can be used to look\n up a role or list of roles.\n\n Also changes the format of `type` in data to what the server\n expects for the role model, as it exists in the database."} {"_id": "q_3386", "text": "Add or remove columns from the output."} {"_id": "q_3387", "text": "Populates columns and sets display attribute as needed.\n Operates on data."} {"_id": "q_3388", "text": "Return a list of roles.\n\n =====API DOCS=====\n Retrieve a list of objects.\n\n :param all_pages: Flag that if set, collect all pages of content from the API when returning results.\n :type all_pages: bool\n :param page: The page to show. Ignored if all_pages is set.\n :type page: int\n :param query: Contains 2-tuples used as query parameters to filter resulting resource objects.\n :type query: list\n :param `**kwargs`: Keyword arguments list of available fields used for searching resource objects.\n :returns: A JSON object containing details of all resource objects returned by Tower backend.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3389", "text": "Get information about a role.\n\n =====API DOCS=====\n Retrieve one and exactly one object.\n\n :param pk: Primary key of the resource to be read.
Tower CLI will only attempt to read *that* object\n if ``pk`` is provided (not ``None``).\n :type pk: int\n :param `**kwargs`: Keyword arguments used to look up resource object to retrieve if ``pk`` is not provided.\n :returns: loaded JSON of the retrieved resource object.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3390", "text": "Investigate two lists of workflow TreeNodes and categorize them.\n\n There will be three types of nodes after categorization:\n 1. Nodes that only exist in the new list. These nodes will later be\n created recursively.\n 2. Nodes that only exist in the old list. These nodes will later be\n deleted recursively.\n 3. Node pairs that make an exact match. These nodes will be further\n investigated.\n\n Corresponding nodes of old and new lists will be distinguished by their\n unified_job_template value. A special case is that both the old and the new\n lists contain one type of node, say A, and at least one of them contains\n duplicates. In this case all A nodes in the old list will be categorized as\n to-be-deleted and all A nodes in the new list will be categorized as\n to-be-created."} {"_id": "q_3391", "text": "Takes the list results from the API in `node_results` and\n translates this data into a dictionary organized in a\n human-readable hierarchical structure"} {"_id": "q_3392", "text": "Returns a dictionary that represents the node network of the\n workflow job template"} {"_id": "q_3393", "text": "Disassociate a notification template from this workflow.\n\n =====API DOCS=====\n Disassociate a notification template from this workflow job template.\n\n :param job_template: The workflow job template to disassociate from.\n :type job_template: str\n :param notification_template: The notification template to be disassociated.\n :type notification_template: str\n :param status: type of notification this notification template should be disassociated from.\n :type status: str\n :returns: Dictionary of only one key \"changed\", which
indicates whether the disassociation succeeded.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3394", "text": "Create a group.\n\n =====API DOCS=====\n Create a group.\n\n :param parent: Primary key or name of the group which will be the parent of the created group.\n :type parent: str\n :param fail_on_found: Flag that if set, the operation fails if an object matching the unique criteria\n already exists.\n :type fail_on_found: bool\n :param force_on_exists: Flag that if set, then if a match is found on unique fields, other fields will\n be updated to the provided values; if unset, a match causes the request to be\n a no-op.\n :type force_on_exists: bool\n :param `**kwargs`: Keyword arguments which, all together, will be used as POST body to create the\n resource object.\n :returns: A dictionary combining the JSON output of the created resource, as well as two extra fields:\n \"changed\", a flag indicating if the resource is created successfully; \"id\", an integer which\n is the primary key of the created object.\n :rtype: dict\n :raises tower_cli.exceptions.UsageError: When inventory is not provided in ``**kwargs`` and ``parent``\n is not provided.\n\n =====API DOCS====="} {"_id": "q_3395", "text": "Return a list of groups.\n\n =====API DOCS=====\n Retrieve a list of groups.\n\n :param root: Flag that if set, only root groups of a specific inventory will be listed.\n :type root: bool\n :param parent: Primary key or name of the group whose child groups will be listed.\n :type parent: str\n :param all_pages: Flag that if set, collect all pages of content from the API when returning results.\n :type all_pages: bool\n :param page: The page to show.
Ignored if all_pages is set.\n :type page: int\n :param query: Contains 2-tuples used as query parameters to filter resulting resource objects.\n :type query: list\n :param `**kwargs`: Keyword arguments list of available fields used for searching resource objects.\n :returns: A JSON object containing details of all resource objects returned by Tower backend.\n :rtype: dict\n :raises tower_cli.exceptions.UsageError: When ``root`` flag is on and ``inventory`` is not present in\n ``**kwargs``.\n\n =====API DOCS====="} {"_id": "q_3396", "text": "Associate this group with the specified group.\n\n =====API DOCS=====\n Associate this group with the specified group.\n\n :param group: Primary key or name of the child group to associate.\n :type group: str\n :param parent: Primary key or name of the parent group to associate to.\n :type parent: str\n :param inventory: Primary key or name of the inventory the association should happen in.\n :type inventory: str\n :returns: Dictionary of only one key \"changed\", which indicates whether the association succeeded.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3397", "text": "Similar to the Ansible function of the same name, parses file\n with a key=value pattern and stores information in a dictionary,\n but not as fully featured as the corresponding Ansible code."} {"_id": "q_3398", "text": "Expand PyYAML's built-in dumper to support parsing OrderedDict. 
Return\n a string as parse result of the original data structure, which includes\n OrderedDict.\n\n Args:\n data: the data structure to be dumped (parsed) which is supposed to\n contain OrderedDict.\n Dumper: the yaml serializer to be expanded and used.\n kws: extra key-value arguments to be passed to yaml.dump."} {"_id": "q_3399", "text": "Remove None-valued and configuration-related keyword arguments"} {"_id": "q_3400", "text": "Combine configuration-related keyword arguments into\n notification_configuration."} {"_id": "q_3401", "text": "Create a notification template.\n\n All required configuration-related fields (required according to\n notification_type) must be provided.\n\n There are two types of notification template creation: creating a new\n notification template in isolation and creating a new notification\n template under a job template. The two types are distinguished by\n whether the --job-template option is provided. The --status option controls\n more specific, job-run-status-related association.\n\n Fields in the resource's `identity` tuple are used for a lookup;\n if a match is found, then no-op (unless `force_on_exists` is set) but\n do not fail (unless `fail_on_found` is set).\n\n =====API DOCS=====\n Create an object.\n\n :param fail_on_found: Flag that if set, the operation fails if an object matching the unique criteria\n already exists.\n :type fail_on_found: bool\n :param force_on_exists: Flag that if set, then if a match is found on unique fields, other fields will\n be updated to the provided values; if unset, a match causes the request to be\n a no-op.\n :type force_on_exists: bool\n :param `**kwargs`: Keyword arguments which, all together, will be used as POST body to create the\n resource object.\n :returns: A dictionary combining the JSON output of the created resource, as well as two extra fields:\n \"changed\", a flag indicating if the resource is created successfully; \"id\", an integer which\n is the primary key of the created
object.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3402", "text": "Modify an existing notification template.\n\n Not all configuration-related fields (those required according to\n notification_type) need to be provided.\n\n Fields in the resource's `identity` tuple can be used in lieu of a\n primary key for a lookup; in such a case, only other fields are\n written.\n\n To modify unique fields, you must use the primary key for the lookup.\n\n =====API DOCS=====\n Modify an already existing object.\n\n :param pk: Primary key of the resource to be modified.\n :type pk: int\n :param create_on_missing: Flag that if set, a new object is created if ``pk`` is not set and objects\n matching the appropriate unique criteria are not found.\n :type create_on_missing: bool\n :param `**kwargs`: Keyword arguments which, all together, will be used as PATCH body to modify the\n resource object. If ``pk`` is not set, key-value pairs of ``**kwargs`` which are\n also in resource's identity will be used to look up the existing resource.\n :returns: A dictionary combining the JSON output of the modified resource, as well as two extra fields:\n \"changed\", a flag indicating if the resource is successfully updated; \"id\", an integer which\n is the primary key of the updated object.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3403", "text": "Return one and exactly one notification template.\n\n Note here configuration-related fields like\n 'notification_configuration' and 'channels' will not be\n used even if provided.\n\n Lookups may be through a primary key, specified as a positional\n argument, and/or through filters specified through keyword arguments.\n\n If the number of results does not equal one, raise an exception.\n\n =====API DOCS=====\n Retrieve one and exactly one object.\n\n :param pk: Primary key of the resource to be read.
Tower CLI will only attempt to read *that* object\n if ``pk`` is provided (not ``None``).\n :type pk: int\n :param `**kwargs`: Keyword arguments used to look up resource object to retrieve if ``pk`` is not provided.\n :returns: loaded JSON of the retrieved resource object.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3404", "text": "Read tower-cli config values from the environment if present, being\n careful not to override config values that were explicitly passed in."} {"_id": "q_3405", "text": "Read the configuration from the given file.\n\n If the file lacks any section header, add a [general] section\n header that encompasses the whole thing."} {"_id": "q_3406", "text": "Launch a new ad-hoc command.\n\n Runs a user-defined command from Ansible Tower, immediately starts it,\n and returns back an ID in order for its status to be monitored.\n\n =====API DOCS=====\n Launch a new ad-hoc command.\n\n :param monitor: Flag that if set, immediately calls ``monitor`` on the newly launched command rather\n than exiting with a success.\n :type monitor: bool\n :param wait: Flag that if set, monitor the status of the job, but do not print while job is in progress.\n :type wait: bool\n :param timeout: If provided with ``monitor`` flag set, this attempt will time out after the given number\n of seconds.\n :type timeout: int\n :param `**kwargs`: Fields needed to create and launch an ad hoc command.\n :returns: Result of subsequent ``monitor`` call if ``monitor`` flag is on; Result of subsequent ``wait``\n call if ``wait`` flag is on; dictionary of \"id\" and \"changed\" if none of the two flags are on.\n :rtype: dict\n :raises tower_cli.exceptions.TowerCLIError: When ad hoc commands are not available in Tower backend.\n\n =====API DOCS====="} {"_id": "q_3407", "text": "Given a method with a docstring, convert the docstring\n to more CLI appropriate wording, and also disambiguate the\n word \"object\" on the base class docstrings."} {"_id": "q_3408", "text": "Given a 
method, return a method that runs the internal\n method and echoes the result."} {"_id": "q_3409", "text": "Echoes only the id"} {"_id": "q_3410", "text": "Retrieve the appropriate method from the Resource,\n decorate it as a click command, and return that method."} {"_id": "q_3411", "text": "Update the given inventory source.\n\n =====API DOCS=====\n Update the given inventory source.\n\n :param inventory_source: Primary key or name of the inventory source to be updated.\n :type inventory_source: str\n :param monitor: Flag that if set, immediately calls ``monitor`` on the newly launched inventory update\n rather than exiting with a success.\n :type monitor: bool\n :param wait: Flag that if set, monitor the status of the inventory update, but do not print while it is\n in progress.\n :type wait: bool\n :param timeout: If provided with ``monitor`` flag set, this attempt will time out after the given number\n of seconds.\n :type timeout: int\n :param `**kwargs`: Fields used to override underlying inventory source fields when creating and launching\n an inventory update.\n :returns: Result of subsequent ``monitor`` call if ``monitor`` flag is on; Result of subsequent ``wait``\n call if ``wait`` flag is on; dictionary of \"status\" if none of the two flags are on.\n :rtype: dict\n :raises tower_cli.exceptions.BadRequest: When the inventory source cannot be updated.\n\n =====API DOCS====="} {"_id": "q_3412", "text": "Return a list of hosts.\n\n =====API DOCS=====\n Retrieve a list of hosts.\n\n :param group: Primary key or name of the group whose hosts will be listed.\n :type group: str\n :param all_pages: Flag that if set, collect all pages of content from the API when returning results.\n :type all_pages: bool\n :param page: The page to show.
Ignored if all_pages is set.\n :type page: int\n :param query: Contains 2-tuples used as query parameters to filter resulting resource objects.\n :type query: list\n :param `**kwargs`: Keyword arguments list of available fields used for searching resource objects.\n :returns: A JSON object containing details of all resource objects returned by Tower backend.\n :rtype: dict\n\n =====API DOCS====="} {"_id": "q_3413", "text": "Extra format methods for multi methods that add all the commands\n after the options."} {"_id": "q_3414", "text": "Return a list of commands present in the commands and resources\n folders, but not subcommands."} {"_id": "q_3415", "text": "Returns a list of multi-commands for each resource type."} {"_id": "q_3416", "text": "Returns a list of global commands, related to CLI\n configuration or system management in general."} {"_id": "q_3417", "text": "Given a command identified by its name, import the appropriate\n module and return the decorated command.\n\n Resources are automatically commands, but if both a resource and\n a command are defined, the command takes precedence."} {"_id": "q_3418", "text": "Adds the decorators for all types of unified job templates,\n and if the non-unified type is specified, converts it into the\n unified_job_template kwarg."} {"_id": "q_3419", "text": "Translate a Mopidy search query to a Spotify search query"} {"_id": "q_3420", "text": "Parse Retry-After header from response if it is set."} {"_id": "q_3421", "text": "Generates a state string to be used in authorizations."} {"_id": "q_3422", "text": "Parse token from the URI fragment, used by MobileApplicationClients.\n\n :param authorization_response: The full URL of the redirect back to you\n :return: A token dict"} {"_id": "q_3423", "text": "Fetch a new access token using a refresh token.\n\n :param token_url: The token endpoint, must be HTTPS.\n :param refresh_token: The refresh_token to use.\n :param body: Optional application/x-www-form-urlencoded body to
include in the token request. Prefer kwargs over body.\n :param auth: An auth tuple or method as accepted by `requests`.\n :param timeout: Timeout of the request in seconds.\n :param headers: A dict of headers to be used by `requests`.\n :param verify: Verify SSL certificate.\n :param proxies: The `proxies` argument will be passed to `requests`.\n :param kwargs: Extra parameters to include in the token request.\n :return: A token dict"} {"_id": "q_3424", "text": "Boolean that indicates whether this session has an OAuth token\n or not. If `self.authorized` is True, you can reasonably expect\n OAuth-protected requests to the resource to succeed. If\n `self.authorized` is False, you need the user to go through the OAuth\n authentication dance before OAuth-protected requests to the resource\n will succeed."} {"_id": "q_3425", "text": "Extract parameters from the post authorization redirect response URL.\n\n :param url: The full URL that resulted from the user being redirected\n back from the OAuth provider to you, the client.\n :returns: A dict of parameters extracted from the URL.\n\n >>> redirect_response = 'https://127.0.0.1/callback?oauth_token=kjerht2309uf&oauth_token_secret=lsdajfh923874&oauth_verifier=w34o8967345'\n >>> oauth_session = OAuth1Session('client-key', client_secret='secret')\n >>> oauth_session.parse_authorization_response(redirect_response)\n {\n 'oauth_token': 'kjerht2309uf',\n 'oauth_token_secret': 'lsdajfh923874',\n 'oauth_verifier': 'w34o8967345',\n }"} {"_id": "q_3426", "text": "When being redirected we should always strip Authorization\n header, since nonce may not be reused as per OAuth spec."} {"_id": "q_3427", "text": "Validate new property value before setting it.\n\n value -- New value"} {"_id": "q_3428", "text": "Get the property description.\n\n Returns a dictionary describing the property."} {"_id": "q_3429", "text": "Set the current value of the property.\n\n value -- the value to set"} {"_id": "q_3430", "text": "Get the thing
at the given index.\n\n idx -- the index"} {"_id": "q_3431", "text": "Initialize the handler.\n\n things -- list of Things managed by this server\n hosts -- list of allowed hostnames"} {"_id": "q_3432", "text": "Set the default headers for all requests."} {"_id": "q_3433", "text": "Validate Host header."} {"_id": "q_3434", "text": "Handle a GET request, including websocket requests.\n\n thing_id -- ID of the thing this request is for"} {"_id": "q_3435", "text": "Handle an incoming message.\n\n message -- message to handle"} {"_id": "q_3436", "text": "Set a new value for this thing.\n\n value -- value to set"} {"_id": "q_3437", "text": "Notify observers of a new value.\n\n value -- new value"} {"_id": "q_3438", "text": "Return the thing state as a Thing Description.\n\n Returns the state as a dictionary."} {"_id": "q_3439", "text": "Set the prefix of any hrefs associated with this thing.\n\n prefix -- the prefix"} {"_id": "q_3440", "text": "Get the thing's properties as a dictionary.\n\n Returns the properties as a dictionary, i.e. 
name -> description."} {"_id": "q_3441", "text": "Get the thing's actions as an array.\n\n action_name -- Optional action name to get descriptions for\n\n Returns the action descriptions."} {"_id": "q_3442", "text": "Get the thing's events as an array.\n\n event_name -- Optional event name to get descriptions for\n\n Returns the event descriptions."} {"_id": "q_3443", "text": "Add a property to this thing.\n\n property_ -- property to add"} {"_id": "q_3444", "text": "Remove a property from this thing.\n\n property_ -- property to remove"} {"_id": "q_3445", "text": "Get a property's value.\n\n property_name -- the property to get the value of\n\n Returns the properties value, if found, else None."} {"_id": "q_3446", "text": "Get a mapping of all properties and their values.\n\n Returns a dictionary of property_name -> value."} {"_id": "q_3447", "text": "Get an action.\n\n action_name -- name of the action\n action_id -- ID of the action\n\n Returns the requested action if found, else None."} {"_id": "q_3448", "text": "Add a new event and notify subscribers.\n\n event -- the event that occurred"} {"_id": "q_3449", "text": "Add an available event.\n\n name -- name of the event\n metadata -- event metadata, i.e. 
type, description, etc., as a dict"} {"_id": "q_3450", "text": "Remove an existing action.\n\n action_name -- name of the action\n action_id -- ID of the action\n\n Returns a boolean indicating the presence of the action."} {"_id": "q_3451", "text": "Remove a websocket subscriber.\n\n ws -- the websocket"} {"_id": "q_3452", "text": "Remove a websocket subscriber from an event.\n\n name -- name of the event\n ws -- the websocket"} {"_id": "q_3453", "text": "Notify all subscribers of an action status change.\n\n action -- the action whose status changed"} {"_id": "q_3454", "text": "Notify all subscribers of an event.\n\n event -- the event that occurred"} {"_id": "q_3455", "text": "Returns data with different cfgstr values that were previously computed\n with this cacher.\n\n Example:\n >>> from ubelt.util_cache import Cacher\n >>> # Ensure that some data exists\n >>> known_fnames = set()\n >>> cacher = Cacher('versioned_data', cfgstr='1')\n >>> cacher.ensure(lambda: 'data1')\n >>> known_fnames.add(cacher.get_fpath())\n >>> cacher = Cacher('versioned_data', cfgstr='2')\n >>> cacher.ensure(lambda: 'data2')\n >>> known_fnames.add(cacher.get_fpath())\n >>> # List previously computed configs for this type\n >>> from os.path import basename\n >>> cacher = Cacher('versioned_data', cfgstr='2')\n >>> exist_fpaths = set(cacher.existing_versions())\n >>> exist_fnames = list(map(basename, exist_fpaths))\n >>> print(exist_fnames)\n >>> assert exist_fpaths == known_fnames\n\n ['versioned_data_1.pkl', 'versioned_data_2.pkl']"} {"_id": "q_3456", "text": "Removes the saved cache and metadata from disk"} {"_id": "q_3457", "text": "Like load, but returns None if the load fails due to a cache miss.\n\n Args:\n on_error (str): How to handle non-io errors. Either raise,\n which re-raises the exception, or clear, which deletes the cache\n and returns None."} {"_id": "q_3458", "text": "Loads the data\n\n Raises:\n IOError - if the data is unable to be loaded.
This could be due to\n a cache miss or because the cache is disabled.\n\n Example:\n >>> from ubelt.util_cache import * # NOQA\n >>> # Setting the cacher as enabled=False turns it off\n >>> cacher = Cacher('test_disabled_load', '', enabled=True)\n >>> cacher.save('data')\n >>> assert cacher.load() == 'data'\n >>> cacher.enabled = False\n >>> assert cacher.tryload() is None"} {"_id": "q_3459", "text": "Wraps around a function. A cfgstr must be stored in the base cacher.\n\n Args:\n func (callable): function that will compute data on cache miss\n *args: passed to func\n **kwargs: passed to func\n\n Example:\n >>> from ubelt.util_cache import * # NOQA\n >>> def func():\n >>> return 'expensive result'\n >>> fname = 'test_cacher_ensure'\n >>> cfgstr = 'func params'\n >>> cacher = Cacher(fname, cfgstr)\n >>> cacher.clear()\n >>> data1 = cacher.ensure(func)\n >>> data2 = cacher.ensure(func)\n >>> assert data1 == 'expensive result'\n >>> assert data1 == data2\n >>> cacher.clear()"} {"_id": "q_3460", "text": "Returns the stamp certificate if it exists"} {"_id": "q_3461", "text": "Get the hash of each product file"} {"_id": "q_3462", "text": "Check to see if a previously existing stamp is still valid and if the\n expected result of that computation still exists.\n\n Args:\n cfgstr (str, optional): override the default cfgstr if specified\n product (PathLike or Sequence[PathLike], optional): override the\n default product if specified"} {"_id": "q_3463", "text": "Recertify that the product has been recomputed by writing a new\n certificate to disk."} {"_id": "q_3464", "text": "Returns true if the redirect is a terminal.\n\n Notes:\n Needed for IPython.embed to work properly when this class is used\n to override stdout / stderr."} {"_id": "q_3465", "text": "Gets the encoding of the `redirect` IO object\n\n Doctest:\n >>> redirect = io.StringIO()\n >>> assert TeeStringIO(redirect).encoding is None\n >>> assert TeeStringIO(None).encoding is None\n >>> assert
TeeStringIO(sys.stdout).encoding is sys.stdout.encoding\n >>> redirect = io.TextIOWrapper(io.StringIO())\n >>> assert TeeStringIO(redirect).encoding is redirect.encoding"} {"_id": "q_3466", "text": "Write to this and the redirected stream"} {"_id": "q_3467", "text": "Returns path for user-specific data files\n\n Returns:\n PathLike : path to the data dir used by the current operating system"} {"_id": "q_3468", "text": "Returns a directory which should be writable for any application\n This should be used for persistent configuration files.\n\n Returns:\n PathLike : path to the config dir used by the current operating system"} {"_id": "q_3469", "text": "Calls `get_app_cache_dir` but ensures the directory exists.\n\n Args:\n appname (str): the name of the application\n *args: any other subdirectories may be specified\n\n SeeAlso:\n get_app_cache_dir\n\n Example:\n >>> import ubelt as ub\n >>> dpath = ub.ensure_app_cache_dir('ubelt')\n >>> assert exists(dpath)"} {"_id": "q_3470", "text": "Locate a command.\n\n Search your local filesystem for an executable and return the first\n matching file with executable permission.\n\n Args:\n name (str): globstr of matching filename\n\n multi (bool): if True return all matches instead of just the first.\n Defaults to False.\n\n path (str or Iterable[PathLike]): overrides the system PATH variable.\n\n Returns:\n PathLike or List[PathLike] or None: returns matching executable(s).\n\n SeeAlso:\n shutil.which - which is available in Python 3.3+.\n\n Notes:\n This is essentially the `which` UNIX command\n\n References:\n https://stackoverflow.com/questions/377017/test-if-executable-exists-in-python/377028#377028\n https://docs.python.org/dev/library/shutil.html#shutil.which\n\n Example:\n >>> find_exe('ls')\n >>> find_exe('ping')\n >>> assert find_exe('which') == find_exe(find_exe('which'))\n >>> find_exe('which', multi=True)\n >>> find_exe('ping', multi=True)\n >>> find_exe('cmake', multi=True)\n >>> find_exe('nvcc', multi=True)\n
>>> find_exe('noexist', multi=True)\n\n Example:\n >>> assert not find_exe('noexist', multi=False)\n >>> assert find_exe('ping', multi=False)\n >>> assert not find_exe('noexist', multi=True)\n >>> assert find_exe('ping', multi=True)\n\n Benchmark:\n >>> # xdoctest: +IGNORE_WANT\n >>> import ubelt as ub\n >>> import shutil\n >>> for timer in ub.Timerit(100, bestof=10, label='ub.find_exe'):\n >>> ub.find_exe('which')\n >>> for timer in ub.Timerit(100, bestof=10, label='shutil.which'):\n >>> shutil.which('which')\n Timed best=58.71 \u00b5s, mean=59.64 \u00b1 0.96 \u00b5s for ub.find_exe\n Timed best=72.75 \u00b5s, mean=73.07 \u00b1 0.22 \u00b5s for shutil.which"} {"_id": "q_3471", "text": "Returns the user's home directory.\n If `username` is None, this is the directory for the current user.\n\n Args:\n username (str): name of a user on the system\n\n Returns:\n PathLike: userhome_dpath: path to the home directory\n\n Example:\n >>> import getpass\n >>> username = getpass.getuser()\n >>> assert userhome() == expanduser('~')\n >>> assert userhome(username) == expanduser('~')"} {"_id": "q_3472", "text": "Inverse of `os.path.expanduser`\n\n Args:\n path (PathLike): path in system file structure\n home (str): symbol used to replace the home path. Defaults to '~', but\n you might want to use '$HOME' or '%USERPROFILE%' instead.\n\n Returns:\n PathLike: path: shortened path replacing the home directory with a tilde\n\n CommandLine:\n xdoctest -m ubelt.util_path compressuser\n\n Example:\n >>> path = expanduser('~')\n >>> assert path != '~'\n >>> assert compressuser(path) == '~'\n >>> assert compressuser(path + '1') == path + '1'\n >>> assert compressuser(path + '/1') == join('~', '1')\n >>> assert compressuser(path + '/1', '$HOME') == join('$HOME', '1')"} {"_id": "q_3473", "text": "Normalizes a string representation of a path and does shell-like expansion.\n\n Args:\n path (PathLike): string representation of a path\n real (bool): if True, all symbolic links are followed. 
(default: False)\n\n Returns:\n PathLike : normalized path\n\n Note:\n This function is similar to the composition of expanduser, expandvars,\n normpath, and (realpath if `real` else abspath). However, on windows\n backslashes are then replaced with forward slashes to offer a\n consistent unix-like experience across platforms.\n\n On windows expanduser will expand environment variables formatted as\n %name%, whereas on unix, this will not occur.\n\n CommandLine:\n python -m ubelt.util_path truepath\n\n Example:\n >>> import ubelt as ub\n >>> assert ub.truepath('~/foo') == join(ub.userhome(), 'foo')\n >>> assert ub.truepath('~/foo') == ub.truepath('~/foo/bar/..')\n >>> assert ub.truepath('~/foo', real=True) == ub.truepath('~/foo')"} {"_id": "q_3474", "text": "r\"\"\"\n Ensures that directory will exist. Creates new dir with sticky bits by\n default\n\n Args:\n dpath (PathLike): dir to ensure. Can also be a tuple to send to join\n mode (int): octal mode of directory (default 0o1777)\n verbose (int): verbosity (default 0)\n\n Returns:\n PathLike: path: the ensured directory\n\n Notes:\n This function is not thread-safe in Python2\n\n Example:\n >>> from ubelt.util_platform import * # NOQA\n >>> import ubelt as ub\n >>> cache_dpath = ub.ensure_app_cache_dir('ubelt')\n >>> dpath = join(cache_dpath, 'ensuredir')\n >>> if exists(dpath):\n ... 
os.rmdir(dpath)\n >>> assert not exists(dpath)\n >>> ub.ensuredir(dpath)\n >>> assert exists(dpath)\n >>> os.rmdir(dpath)"} {"_id": "q_3475", "text": "pip install requirements-parser\n fname='requirements.txt'"} {"_id": "q_3476", "text": "Parses the package dependencies listed in a requirements file but strips\n specific versioning information.\n\n TODO:\n perhaps use https://github.com/davidfischer/requirements-parser instead\n\n CommandLine:\n python -c \"import setup; print(setup.parse_requirements())\""} {"_id": "q_3477", "text": "Injects a function into an object instance as a bound method\n\n The main use case of this function is for monkey patching. While monkey\n patching is sometimes necessary it should generally be avoided. Thus, we\n simply remind the developer that there might be a better way.\n\n Args:\n self (object): instance to inject a function into\n func (func): the function to inject (must contain an arg for self)\n name (str): name of the method. optional. If not specified the name\n of the function is used.\n\n Example:\n >>> class Foo(object):\n >>> def bar(self):\n >>> return 'bar'\n >>> def baz(self):\n >>> return 'baz'\n >>> self = Foo()\n >>> assert self.bar() == 'bar'\n >>> assert not hasattr(self, 'baz')\n >>> inject_method(self, baz)\n >>> assert not hasattr(Foo, 'baz'), 'should only change one instance'\n >>> assert self.baz() == 'baz'\n >>> inject_method(self, baz, 'bar')\n >>> assert self.bar() == 'baz'"} {"_id": "q_3478", "text": "change file timestamps\n\n Works like the touch unix utility\n\n Args:\n fpath (PathLike): name of the file\n mode (int): file permissions (python3 and unix only)\n dir_fd (file): optional directory file descriptor.
If specified, fpath\n is interpreted as relative to this descriptor (python 3 only).\n verbose (int): verbosity\n **kwargs : extra args passed to `os.utime` (python 3 only).\n\n Returns:\n PathLike: path to the file\n\n References:\n https://stackoverflow.com/questions/1158076/implement-touch-using-python\n\n Example:\n >>> import ubelt as ub\n >>> dpath = ub.ensure_app_cache_dir('ubelt')\n >>> fpath = join(dpath, 'touch_file')\n >>> assert not exists(fpath)\n >>> ub.touch(fpath)\n >>> assert exists(fpath)\n >>> os.unlink(fpath)"} {"_id": "q_3479", "text": "Removes a file or recursively removes a directory.\n If a path does not exist, then this does nothing.\n\n Args:\n path (PathLike): file or directory to remove\n verbose (bool): if True prints what is being done\n\n SeeAlso:\n send2trash - A cross-platform Python package for sending files\n to the trash instead of irreversibly deleting them.\n https://github.com/hsoft/send2trash\n\n Doctest:\n >>> import ubelt as ub\n >>> base = ub.ensure_app_cache_dir('ubelt', 'delete_test')\n >>> dpath1 = ub.ensuredir(join(base, 'dir'))\n >>> ub.ensuredir(join(base, 'dir', 'subdir'))\n >>> ub.touch(join(base, 'dir', 'to_remove1.txt'))\n >>> fpath1 = join(base, 'dir', 'subdir', 'to_remove3.txt')\n >>> fpath2 = join(base, 'dir', 'subdir', 'to_remove2.txt')\n >>> ub.touch(fpath1)\n >>> ub.touch(fpath2)\n >>> assert all(map(exists, (dpath1, fpath1, fpath2)))\n >>> ub.delete(fpath1)\n >>> assert all(map(exists, (dpath1, fpath2)))\n >>> assert not exists(fpath1)\n >>> ub.delete(dpath1)\n >>> assert not any(map(exists, (dpath1, fpath1, fpath2)))\n\n Doctest:\n >>> import ubelt as ub\n >>> dpath = ub.ensure_app_cache_dir('ubelt', 'delete_test2')\n >>> dpath1 = ub.ensuredir(join(dpath, 'dir'))\n >>> fpath1 = ub.touch(join(dpath1, 'to_remove.txt'))\n >>> assert exists(fpath1)\n >>> ub.delete(dpath)\n >>> assert not exists(fpath1)"} {"_id": "q_3480", "text": "Joins string-ified items with separators newlines and container-braces."}
{"_id": "q_3481", "text": "Create a string representation for each item in a list."} {"_id": "q_3482", "text": "Registers a custom formatting function with ub.repr2"} {"_id": "q_3483", "text": "Returns an appropriate function to format `data` if one has been\n registered."} {"_id": "q_3484", "text": "Convert a string-based key into a hasher class\n\n Notes:\n In terms of speed on 64bit systems, sha1 is the fastest followed by md5\n and sha512. The slowest algorithm is sha256. If xxhash is installed\n the fastest algorithm is xxh64.\n\n Example:\n >>> assert _rectify_hasher(NoParam) is DEFAULT_HASHER\n >>> assert _rectify_hasher('sha1') is hashlib.sha1\n >>> assert _rectify_hasher('sha256') is hashlib.sha256\n >>> assert _rectify_hasher('sha512') is hashlib.sha512\n >>> assert _rectify_hasher('md5') is hashlib.md5\n >>> assert _rectify_hasher(hashlib.sha1) is hashlib.sha1\n >>> assert _rectify_hasher(hashlib.sha1())().name == 'sha1'\n >>> import pytest\n >>> assert pytest.raises(KeyError, _rectify_hasher, '42')\n >>> #assert pytest.raises(TypeError, _rectify_hasher, object)\n >>> if xxhash:\n >>> assert _rectify_hasher('xxh64') is xxhash.xxh64\n >>> assert _rectify_hasher('xxh32') is xxhash.xxh32"} {"_id": "q_3485", "text": "transforms base shorthand into the full list representation\n\n Example:\n >>> assert _rectify_base(NoParam) is DEFAULT_ALPHABET\n >>> assert _rectify_base('hex') is _ALPHABET_16\n >>> assert _rectify_base('abc') is _ALPHABET_26\n >>> assert _rectify_base(10) is _ALPHABET_10\n >>> assert _rectify_base(['1', '2']) == ['1', '2']\n >>> import pytest\n >>> assert pytest.raises(TypeError, _rectify_base, 'uselist')"} {"_id": "q_3486", "text": "r\"\"\"\n Extracts the sequence of bytes that would be hashed by hash_data\n\n Example:\n >>> data = [2, (3, 4)]\n >>> result1 = (b''.join(_hashable_sequence(data, types=False)))\n >>> result2 = (b''.join(_hashable_sequence(data, types=True)))\n >>> assert result1 == b'_[_\\x02_,__[_\\x03_,_\\x04_,__]__]_'\n >>> 
assert result2 == b'_[_INT\\x02_,__[_INT\\x03_,_INT\\x04_,__]__]_'"} {"_id": "q_3487", "text": "Converts `data` into a byte representation and calls update on the hasher\n `hashlib.HASH` algorithm.\n\n Args:\n hasher (HASH): instance of a hashlib algorithm\n data (object): ordered data with structure\n types (bool): include type prefixes in the hash\n\n Example:\n >>> hasher = hashlib.sha512()\n >>> data = [1, 2, ['a', 2, 'c']]\n >>> _update_hasher(hasher, data)\n >>> print(hasher.hexdigest()[0:8])\n e2c67675\n\n 2ba8d82b"} {"_id": "q_3488", "text": "r\"\"\"\n Packs a long hexstr into a shorter length string with a larger base.\n\n Args:\n hexstr (str): string of hexadecimal symbols to convert\n base (list): symbols of the conversion base\n\n Example:\n >>> print(_convert_hexstr_base('ffffffff', _ALPHABET_26))\n nxmrlxv\n >>> print(_convert_hexstr_base('0', _ALPHABET_26))\n 0\n >>> print(_convert_hexstr_base('-ffffffff', _ALPHABET_26))\n -nxmrlxv\n >>> print(_convert_hexstr_base('aafffff1', _ALPHABET_16))\n aafffff1\n\n Sympy:\n >>> import sympy as sy\n >>> # Determine the length savings with lossless conversion\n >>> consts = dict(hexbase=16, hexlen=256, baselen=27)\n >>> symbols = sy.symbols('hexbase, hexlen, baselen, newlen')\n >>> hexbase, hexlen, baselen, newlen = symbols\n >>> eqn = sy.Eq(16 ** hexlen, baselen ** newlen)\n >>> newlen_ans = sy.solve(eqn, newlen)[0].subs(consts).evalf()\n >>> print('newlen_ans = %r' % (newlen_ans,))\n >>> # for a 26 char base we can get 216\n >>> print('Required length for lossless conversion len2 = %r' % (len2,))\n >>> def info(base, len):\n ... bits = base ** len\n ... print('base = %r' % (base,))\n ... print('len = %r' % (len,))\n ... print('bits = %r' % (bits,))\n >>> info(16, 256)\n >>> info(27, 16)\n >>> info(27, 64)\n >>> info(27, 216)"} {"_id": "q_3489", "text": "Registers a function to generate a hash for data of the appropriate\n types. This can be used to register custom classes.
Internally this is\n used to define how to hash non-builtin objects like ndarrays and uuids.\n\n The registered function should return a tuple of bytes. First a small\n prefix hinting at the data type, and second the raw bytes that can be\n hashed.\n\n Args:\n hash_types (class or tuple of classes):\n\n Returns:\n func: closure to be used as the decorator\n\n Example:\n >>> # xdoctest: +SKIP\n >>> # Skip this doctest because we dont want tests to modify\n >>> # the global state.\n >>> import ubelt as ub\n >>> import pytest\n >>> class MyType(object):\n ... def __init__(self, id):\n ... self.id = id\n >>> data = MyType(1)\n >>> # Custom types wont work with ub.hash_data by default\n >>> with pytest.raises(TypeError):\n ... ub.hash_data(data)\n >>> # You can register your functions with ubelt's internal\n >>> # hashable_extension registry.\n >>> @ub.util_hash._HASHABLE_EXTENSIONS.register(MyType)\n >>> def hash_my_type(data):\n ... return b'mytype', six.b(ub.hash_data(data.id))\n >>> # TODO: allow hash_data to take a new instance of\n >>> # HashableExtensions, so we dont have to modify the global\n >>> # ubelt state when we run tests.\n >>> my_instance = MyType(1)\n >>> ub.hash_data(my_instance)"} {"_id": "q_3490", "text": "Returns an appropriate function to hash `data` if one has been\n registered.\n\n Raises:\n TypeError : if data has no registered hash methods\n\n Example:\n >>> import ubelt as ub\n >>> import pytest\n >>> if not ub.modname_to_modpath('numpy'):\n ...
raise pytest.skip('numpy is optional')\n >>> self = HashableExtensions()\n >>> self._register_numpy_extensions()\n >>> self._register_builtin_class_extensions()\n\n >>> import numpy as np\n >>> data = np.array([1, 2, 3])\n >>> self.lookup(data[0])\n\n >>> class Foo(object):\n >>> def __init__(f):\n >>> f.attr = 1\n >>> data = Foo()\n >>> assert pytest.raises(TypeError, self.lookup, data)\n\n >>> # If ub.hash_data doesnt support your object,\n >>> # then you can register it.\n >>> @self.register(Foo)\n >>> def _hashfoo(data):\n >>> return b'FOO', data.attr\n >>> func = self.lookup(data)\n >>> assert func(data)[1] == 1\n\n >>> data = uuid.uuid4()\n >>> self.lookup(data)"} {"_id": "q_3491", "text": "Numpy extensions are builtin"} {"_id": "q_3492", "text": "Register hashing extensions for a selection of classes included in\n python stdlib.\n\n Example:\n >>> data = uuid.UUID('7e9d206b-dc02-4240-8bdb-fffe858121d0')\n >>> print(hash_data(data, base='abc', hasher='sha512', types=True)[0:8])\n cryarepd\n >>> data = OrderedDict([('a', 1), ('b', 2), ('c', [1, 2, 3]),\n >>> (4, OrderedDict())])\n >>> print(hash_data(data, base='abc', hasher='sha512', types=True)[0:8])\n qjspicvv\n\n gpxtclct"} {"_id": "q_3493", "text": "Reads output from a process in a separate thread"} {"_id": "q_3494", "text": "make an iso8601 timestamp\n\n Args:\n method (str): type of timestamp\n\n Example:\n >>> stamp = timestamp()\n >>> print('stamp = {!r}'.format(stamp))\n stamp = ...-...-...T..."} {"_id": "q_3495", "text": "Imports a module via its path\n\n Args:\n modpath (PathLike): path to the module on disk or within a zipfile.\n\n Returns:\n module: the imported module\n\n References:\n https://stackoverflow.com/questions/67631/import-module-given-path\n\n Notes:\n If the module is part of a package, the package will be imported first.\n These modules may cause problems when reloading via IPython magic\n\n This can import a module from within a zipfile. 
To do this modpath\n should specify the path to the zipfile and the path to the module\n within that zipfile separated by a colon or pathsep.\n E.g. `/path/to/archive.zip:mymodule.py`\n\n Warning:\n It is best to use this with paths that will not conflict with\n previously existing modules.\n\n If the modpath conflicts with a previously existing module name, and\n the target module does imports of its own relative to this conflicting\n path, then the module that was loaded first will win.\n\n For example if you try to import '/foo/bar/pkg/mod.py' from the folder\n structure:\n - foo/\n +- bar/\n +- pkg/\n + __init__.py\n |- mod.py\n |- helper.py\n\n If there exists another module named `pkg` already in sys.modules\n and mod.py does something like `from . import helper`, Python will\n assume helper belongs to the `pkg` module already in sys.modules.\n This can cause a NameError or worse --- an incorrect helper module.\n\n Example:\n >>> import xdoctest\n >>> modpath = xdoctest.__file__\n >>> module = import_module_from_path(modpath)\n >>> assert module is xdoctest\n\n Example:\n >>> # Test importing a module from within a zipfile\n >>> import zipfile\n >>> from xdoctest import utils\n >>> from os.path import join, expanduser\n >>> dpath = expanduser('~/.cache/xdoctest')\n >>> dpath = utils.ensuredir(dpath)\n >>> #dpath = utils.TempDir().ensure()\n >>> # Write to an external module named bar\n >>> external_modpath = join(dpath, 'bar.py')\n >>> open(external_modpath, 'w').write('testvar = 1')\n >>> internal = 'folder/bar.py'\n >>> # Move the external bar module into a zipfile\n >>> zippath = join(dpath, 'myzip.zip')\n >>> with zipfile.ZipFile(zippath, 'w') as myzip:\n >>> myzip.write(external_modpath, internal)\n >>> # Import the bar module from within the zipfile\n >>> modpath = zippath + ':' + internal\n >>> modpath = zippath + os.path.sep + internal\n >>> module = import_module_from_path(modpath)\n >>> assert module.__name__ ==
os.path.normpath('folder/bar')\n >>> assert module.testvar == 1\n\n Doctest:\n >>> import pytest\n >>> with pytest.raises(IOError):\n >>> import_module_from_path('does-not-exist')\n >>> with pytest.raises(IOError):\n >>> import_module_from_path('does-not-exist.zip/')"} {"_id": "q_3496", "text": "syspath version of modname_to_modpath\n\n Args:\n modname (str): name of module to find\n sys_path (List[PathLike], default=None):\n if specified overrides `sys.path`\n exclude (List[PathLike], default=None):\n list of directory paths. if specified prevents these directories\n from being searched.\n\n Notes:\n This is much slower than the pkgutil mechanisms.\n\n CommandLine:\n python -m xdoctest.static_analysis _syspath_modname_to_modpath\n\n Example:\n >>> print(_syspath_modname_to_modpath('xdoctest.static_analysis'))\n ...static_analysis.py\n >>> print(_syspath_modname_to_modpath('xdoctest'))\n ...xdoctest\n >>> print(_syspath_modname_to_modpath('_ctypes'))\n ..._ctypes...\n >>> assert _syspath_modname_to_modpath('xdoctest', sys_path=[]) is None\n >>> assert _syspath_modname_to_modpath('xdoctest.static_analysis', sys_path=[]) is None\n >>> assert _syspath_modname_to_modpath('_ctypes', sys_path=[]) is None\n >>> assert _syspath_modname_to_modpath('this', sys_path=[]) is None\n\n Example:\n >>> # test what happens when the module is not visible in the path\n >>> modname = 'xdoctest.static_analysis'\n >>> modpath = _syspath_modname_to_modpath(modname)\n >>> exclude = [split_modpath(modpath)[0]]\n >>> found = _syspath_modname_to_modpath(modname, exclude=exclude)\n >>> # this only works if installed in dev mode, pypi fails\n >>> assert found is None, 'should not have found {}'.format(found)"} {"_id": "q_3497", "text": "Finds the path to a python module from its name.\n\n Determines the path to a python module without directly importing it\n\n Converts the name of a module (__name__) to the path (__file__) where it is\n located without importing the module.
Returns None if the module does not\n exist.\n\n Args:\n modname (str): module filepath\n hide_init (bool): if False, __init__.py will be returned for packages\n hide_main (bool): if False, and hide_init is True, __main__.py will be\n returned for packages, if it exists.\n sys_path (list): if specified overrides `sys.path` (default None)\n\n Returns:\n str: modpath - path to the module, or None if it doesn't exist\n\n CommandLine:\n python -m xdoctest.static_analysis modname_to_modpath:0\n pytest /home/joncrall/code/xdoctest/xdoctest/static_analysis.py::modname_to_modpath:0\n\n Example:\n >>> modname = 'xdoctest.__main__'\n >>> modpath = modname_to_modpath(modname, hide_main=False)\n >>> assert modpath.endswith('__main__.py')\n >>> modname = 'xdoctest'\n >>> modpath = modname_to_modpath(modname, hide_init=False)\n >>> assert modpath.endswith('__init__.py')\n >>> modpath = basename(modname_to_modpath('_ctypes'))\n >>> assert 'ctypes' in modpath"} {"_id": "q_3498", "text": "Determines importable name from file path\n\n Converts the path to a module (__file__) to the importable python name\n (__name__) without importing the module.\n\n The filename is converted to a module name, and parent directories are\n recursively included until a directory without an __init__.py file is\n encountered.\n\n Args:\n modpath (str): module filepath\n hide_init (bool): removes the __init__ suffix (default True)\n hide_main (bool): removes the __main__ suffix (default False)\n check (bool): if False, does not raise an error if modpath is a dir\n and does not contain an __init__ file.\n relativeto (str, optional): if specified, all checks are ignored and\n this is considered the path to the root module.\n\n Returns:\n str: modname\n\n Raises:\n ValueError: if check is True and the path does not exist\n\n CommandLine:\n xdoctest -m xdoctest.static_analysis modpath_to_modname\n\n Example:\n >>> from xdoctest import static_analysis\n >>> modpath = static_analysis.__file__.replace('.pyc', 
'.py')\n >>> modpath = modpath.replace('.pyc', '.py')\n >>> modname = modpath_to_modname(modpath)\n >>> assert modname == 'xdoctest.static_analysis'\n\n Example:\n >>> import xdoctest\n >>> assert modpath_to_modname(xdoctest.__file__.replace('.pyc', '.py')) == 'xdoctest'\n >>> assert modpath_to_modname(dirname(xdoctest.__file__.replace('.pyc', '.py'))) == 'xdoctest'\n\n Example:\n >>> modpath = modname_to_modpath('_ctypes')\n >>> modname = modpath_to_modname(modpath)\n >>> assert modname == '_ctypes'"} {"_id": "q_3499", "text": "Determines if a key is specified on the command line\n\n Args:\n key (str or tuple): string or tuple of strings. Each key should be\n prefixed with two hyphens (i.e. `--`)\n argv (Optional[list]): overrides `sys.argv` if specified\n\n Returns:\n bool: flag : True if the key (or any of the keys) was specified\n\n Example:\n >>> import ubelt as ub\n >>> argv = ['--spam', '--eggs', 'foo']\n >>> assert ub.argflag('--eggs', argv=argv) is True\n >>> assert ub.argflag('--ans', argv=argv) is False\n >>> assert ub.argflag('foo', argv=argv) is True\n >>> assert ub.argflag(('bar', '--spam'), argv=argv) is True"} {"_id": "q_3500", "text": "Horizontally concatenates strings preserving indentation\n\n Concatenates a list of objects ensuring that the next item in the list is\n all the way to the right of any previous items.\n\n Args:\n args (List[str]): strings to concatenate\n sep (str): separator (defaults to '')\n\n CommandLine:\n python -m ubelt.util_str hzcat\n\n Example1:\n >>> import ubelt as ub\n >>> B = ub.repr2([[1, 2], [3, 457]], nl=1, cbr=True, trailsep=False)\n >>> C = ub.repr2([[5, 6], [7, 8]], nl=1, cbr=True, trailsep=False)\n >>> args = ['A = ', B, ' * ', C]\n >>> print(ub.hzcat(args))\n A = [[1, 2], * [[5, 6],\n [3, 457]] [7, 8]]\n\n Example2:\n >>> from ubelt.util_str import *\n >>> import ubelt as ub\n >>> import unicodedata\n >>> aa = unicodedata.normalize('NFD', '\u00e1') # a unicode char with len2\n >>> B = ub.repr2([['\u03b8', aa], 
[aa, aa, aa]], nl=1, si=True, cbr=True, trailsep=False)\n >>> C = ub.repr2([[5, 6], [7, '\u03b8']], nl=1, si=True, cbr=True, trailsep=False)\n >>> args = ['A', '=', B, '*', C]\n >>> print(ub.hzcat(args, sep='\uff5c'))\n A\uff5c=\uff5c[[\u03b8, \u00e1], \uff5c*\uff5c[[5, 6],\n \uff5c \uff5c [\u00e1, \u00e1, \u00e1]]\uff5c \uff5c [7, \u03b8]]"} {"_id": "q_3501", "text": "Create a symbolic link.\n\n This will work on linux or windows, however windows does have some corner\n cases. For more details see notes in `ubelt._win32_links`.\n\n Args:\n path (PathLike): path to real file or directory\n link_path (PathLike): path to desired location for symlink\n overwrite (bool): overwrite existing symlinks.\n This will not overwrite real files on systems with proper symlinks.\n However, on older versions of windows junctions are\n indistinguishable from real files, so we cannot make this\n guarantee. (default = False)\n verbose (int): verbosity level (default=0)\n\n Returns:\n PathLike: link path\n\n CommandLine:\n python -m ubelt.util_links symlink:0\n\n Example:\n >>> import ubelt as ub\n >>> dpath = ub.ensure_app_cache_dir('ubelt', 'test_symlink0')\n >>> real_path = join(dpath, 'real_file.txt')\n >>> link_path = join(dpath, 'link_file.txt')\n >>> [ub.delete(p) for p in [real_path, link_path]]\n >>> ub.writeto(real_path, 'foo')\n >>> result = symlink(real_path, link_path)\n >>> assert ub.readfrom(result) == 'foo'\n >>> [ub.delete(p) for p in [real_path, link_path]]\n\n Example:\n >>> import ubelt as ub\n >>> from os.path import dirname\n >>> dpath = ub.ensure_app_cache_dir('ubelt', 'test_symlink1')\n >>> ub.delete(dpath)\n >>> ub.ensuredir(dpath)\n >>> _dirstats(dpath)\n >>> real_dpath = ub.ensuredir((dpath, 'real_dpath'))\n >>> link_dpath = ub.augpath(real_dpath, base='link_dpath')\n >>> real_path = join(dpath, 'afile.txt')\n >>> link_path = join(dpath, 'afile.txt')\n >>> [ub.delete(p) for p in [real_path, link_path]]\n >>> ub.writeto(real_path, 'foo')\n >>> result = 
symlink(real_dpath, link_dpath)\n >>> assert ub.readfrom(link_path) == 'foo', 'read should be same'\n >>> ub.writeto(link_path, 'bar')\n >>> _dirstats(dpath)\n >>> assert ub.readfrom(link_path) == 'bar', 'very bad bar'\n >>> assert ub.readfrom(real_path) == 'bar', 'changing link did not change real'\n >>> ub.writeto(real_path, 'baz')\n >>> _dirstats(dpath)\n >>> assert ub.readfrom(real_path) == 'baz', 'very bad baz'\n >>> assert ub.readfrom(link_path) == 'baz', 'changing real did not change link'\n >>> ub.delete(link_dpath, verbose=1)\n >>> _dirstats(dpath)\n >>> assert not exists(link_dpath), 'link should not exist'\n >>> assert exists(real_path), 'real path should exist'\n >>> _dirstats(dpath)\n >>> ub.delete(dpath, verbose=1)\n >>> _dirstats(dpath)\n >>> assert not exists(real_path)"} {"_id": "q_3502", "text": "Transforms function args into a key that can be used by the cache\n\n CommandLine:\n xdoctest -m ubelt.util_memoize _make_signature_key\n\n Example:\n >>> args = (4, [1, 2])\n >>> kwargs = {'a': 'b'}\n >>> key = _make_signature_key(args, kwargs)\n >>> print('key = {!r}'.format(key))\n >>> # Some mutable types cannot be handled by ub.hash_data\n >>> import pytest\n >>> import six\n >>> if six.PY2:\n >>> import collections as abc\n >>> else:\n >>> from collections import abc\n >>> with pytest.raises(TypeError):\n >>> _make_signature_key((4, [1, 2], {1: 2, 'a': 'b'}), kwargs={})\n >>> class Dummy(abc.MutableSet):\n >>> def __contains__(self, item): return None\n >>> def __iter__(self): return iter([])\n >>> def __len__(self): return 0\n >>> def add(self, item, loc): return None\n >>> def discard(self, item): return None\n >>> with pytest.raises(TypeError):\n >>> _make_signature_key((Dummy(),), kwargs={})"} {"_id": "q_3503", "text": "r\"\"\"\n Colorizes text a single color using ANSI tags.\n\n Args:\n text (str): text to colorize\n color (str): may be one of the following: yellow, blink, lightgray,\n underline, darkyellow, blue, darkblue, faint, fuchsia,
black,\n white, red, brown, turquoise, bold, darkred, darkgreen, reset,\n standout, darkteal, darkgray, overline, purple, green, teal, fuscia\n\n Returns:\n str: text : colorized text.\n If pygments is not installed plain text is returned.\n\n CommandLine:\n python -c \"import pygments.console; print(sorted(pygments.console.codes.keys()))\"\n python -m ubelt.util_colors color_text\n\n Example:\n >>> text = 'raw text'\n >>> import pytest\n >>> import ubelt as ub\n >>> if ub.modname_to_modpath('pygments'):\n >>> # Colors text only if pygments is installed\n >>> assert color_text(text, 'red') == '\\x1b[31;01mraw text\\x1b[39;49;00m'\n >>> assert color_text(text, None) == 'raw text'\n >>> else:\n >>> # Otherwise text passes through unchanged\n >>> assert color_text(text, 'red') == 'raw text'\n >>> assert color_text(text, None) == 'raw text'"} {"_id": "q_3504", "text": "Generates unique items in the order they appear.\n\n Args:\n items (Iterable): list of items\n\n key (Callable, optional): custom normalization function.\n If specified returns items where `key(item)` is unique.\n\n Yields:\n object: a unique item from the input sequence\n\n CommandLine:\n python -m utool.util_list --exec-unique_ordered\n\n Example:\n >>> import ubelt as ub\n >>> items = [4, 6, 6, 0, 6, 1, 0, 2, 2, 1]\n >>> unique_items = list(ub.unique(items))\n >>> assert unique_items == [4, 6, 0, 1, 2]\n\n Example:\n >>> import ubelt as ub\n >>> items = ['A', 'a', 'b', 'B', 'C', 'c', 'D', 'e', 'D', 'E']\n >>> unique_items = list(ub.unique(items, key=six.text_type.lower))\n >>> assert unique_items == ['A', 'b', 'C', 'D', 'e']\n >>> unique_items = list(ub.unique(items))\n >>> assert unique_items == ['A', 'a', 'b', 'B', 'C', 'c', 'D', 'e', 'E']"} {"_id": "q_3505", "text": "Returns indices corresponding to the first instance of each unique item.\n\n Args:\n items (Sequence): indexable collection of items\n\n key (Callable, optional): custom normalization function.\n If specified returns items where 
`key(item)` is unique.\n\n Yields:\n int : indices of the unique items\n\n Example:\n >>> items = [0, 2, 5, 1, 1, 0, 2, 4]\n >>> indices = list(argunique(items))\n >>> assert indices == [0, 1, 2, 3, 7]\n >>> indices = list(argunique(items, key=lambda x: x % 2 == 0))\n >>> assert indices == [0, 2]"} {"_id": "q_3506", "text": "Returns a list of booleans corresponding to the first instance of each\n unique item.\n\n Args:\n items (Sequence): indexable collection of items\n\n key (Callable, optional): custom normalization function.\n If specified returns items where `key(item)` is unique.\n\n Returns:\n List[bool] : flags the items that are unique\n\n Example:\n >>> import ubelt as ub\n >>> items = [0, 2, 1, 1, 0, 9, 2]\n >>> flags = unique_flags(items)\n >>> assert flags == [True, True, True, False, False, True, False]\n >>> flags = unique_flags(items, key=lambda x: x % 2 == 0)\n >>> assert flags == [True, False, True, False, False, False, False]"} {"_id": "q_3507", "text": "Constructs a list of booleans where an item is True if its position is in\n `indices` otherwise it is False.\n\n Args:\n indices (list): list of integer indices\n\n maxval (int): length of the returned list. If not specified\n this is inferred from `indices`\n\n Note:\n In the future the arg `maxval` may change its name to `shape`\n\n Returns:\n list: mask: list of booleans. 
mask[idx] is True if idx in indices\n\n Example:\n >>> import ubelt as ub\n >>> indices = [0, 1, 4]\n >>> mask = ub.boolmask(indices, maxval=6)\n >>> assert mask == [True, True, False, False, True, False]\n >>> mask = ub.boolmask(indices)\n >>> assert mask == [True, True, False, False, True]"} {"_id": "q_3508", "text": "Determine if all items in a sequence are the same\n\n Args:\n iterable (Iterable): items to determine if they are all the same\n\n eq (Callable, optional): function to determine equality\n (default: operator.eq)\n\n Example:\n >>> allsame([1, 1, 1, 1])\n True\n >>> allsame([])\n True\n >>> allsame([0, 1])\n False\n >>> iterable = iter([0, 1, 1, 1])\n >>> next(iterable)\n >>> allsame(iterable)\n True\n >>> allsame(range(10))\n False\n >>> allsame(range(10), lambda a, b: True)\n True"} {"_id": "q_3509", "text": "Returns the indices that would sort a indexable object.\n\n This is similar to `numpy.argsort`, but it is written in pure python and\n works on both lists and dictionaries.\n\n Args:\n indexable (Iterable or Mapping): indexable to sort by\n\n key (Callable, optional): customizes the ordering of the indexable\n\n reverse (bool, optional): if True returns in descending order\n\n Returns:\n list: indices: list of indices such that sorts the indexable\n\n Example:\n >>> import ubelt as ub\n >>> # argsort works on dicts by returning keys\n >>> dict_ = {'a': 3, 'b': 2, 'c': 100}\n >>> indices = ub.argsort(dict_)\n >>> assert list(ub.take(dict_, indices)) == sorted(dict_.values())\n >>> # argsort works on lists by returning indices\n >>> indexable = [100, 2, 432, 10]\n >>> indices = ub.argsort(indexable)\n >>> assert list(ub.take(indexable, indices)) == sorted(indexable)\n >>> # Can use iterators, but be careful. 
It exhausts them.\n >>> indexable = reversed(range(100))\n >>> indices = ub.argsort(indexable)\n >>> assert indices[0] == 99\n >>> # Can use key just like sorted\n >>> indexable = [[0, 1, 2], [3, 4], [5]]\n >>> indices = ub.argsort(indexable, key=len)\n >>> assert indices == [2, 1, 0]\n >>> # Can use reverse just like sorted\n >>> indexable = [0, 2, 1]\n >>> indices = ub.argsort(indexable, reverse=True)\n >>> assert indices == [1, 2, 0]"} {"_id": "q_3510", "text": "Zips elementwise pairs between items1 and items2 into a dictionary. Values\n from items2 can be broadcast onto items1.\n\n Args:\n items1 (Iterable): full sequence\n items2 (Iterable): can either be a sequence of one item or a sequence\n of equal length to `items1`\n cls (Type[dict]): dictionary type to use. Defaults to dict, but could\n be ordered dict instead.\n\n Returns:\n dict: similar to dict(zip(items1, items2))\n\n Example:\n >>> assert dzip([1, 2, 3], [4]) == {1: 4, 2: 4, 3: 4}\n >>> assert dzip([1, 2, 3], [4, 4, 4]) == {1: 4, 2: 4, 3: 4}\n >>> assert dzip([], [4]) == {}"} {"_id": "q_3511", "text": "r\"\"\"\n Groups a list of items by group id.\n\n Args:\n items (Iterable): a list of items to group\n groupids (Iterable or Callable): a corresponding list of item groupids\n or a function mapping an item to a groupid.\n\n Returns:\n dict: groupid_to_items: maps a groupid to a list of items\n\n CommandLine:\n python -m ubelt.util_dict group_items\n\n Example:\n >>> import ubelt as ub\n >>> items = ['ham', 'jam', 'spam', 'eggs', 'cheese', 'banana']\n >>> groupids = ['protein', 'fruit', 'protein', 'protein', 'dairy', 'fruit']\n >>> groupid_to_items = ub.group_items(items, groupids)\n >>> print(ub.repr2(groupid_to_items, nl=0))\n {'dairy': ['cheese'], 'fruit': ['jam', 'banana'], 'protein': ['ham', 'spam', 'eggs']}"} {"_id": "q_3512", "text": "Builds a histogram of items, counting the number of times each item appears\n in the input.\n\n Args:\n item_list (Iterable): hashable items (usually containing 
duplicates)\n weight_list (Iterable): corresponding weights for each item\n ordered (bool): if True the result is ordered by frequency\n labels (Iterable, optional): expected labels (default None)\n Allows this function to pre-initialize the histogram.\n If specified the frequency of each label is initialized to\n zero and item_list can only contain items specified in labels.\n\n Returns:\n dict : dictionary where the keys are items in item_list, and the values\n are the number of times the item appears in item_list.\n\n CommandLine:\n python -m ubelt.util_dict dict_hist\n\n Example:\n >>> import ubelt as ub\n >>> item_list = [1, 2, 39, 900, 1232, 900, 1232, 2, 2, 2, 900]\n >>> hist = ub.dict_hist(item_list)\n >>> print(ub.repr2(hist, nl=0))\n {1: 1, 2: 4, 39: 1, 900: 3, 1232: 2}\n\n Example:\n >>> import ubelt as ub\n >>> item_list = [1, 2, 39, 900, 1232, 900, 1232, 2, 2, 2, 900]\n >>> hist1 = ub.dict_hist(item_list)\n >>> hist2 = ub.dict_hist(item_list, ordered=True)\n >>> try:\n >>> hist3 = ub.dict_hist(item_list, labels=[])\n >>> except KeyError:\n >>> pass\n >>> else:\n >>> raise AssertionError('expected key error')\n >>> #result = ub.repr2(hist_)\n >>> weight_list = [1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1]\n >>> hist4 = ub.dict_hist(item_list, weight_list=weight_list)\n >>> print(ub.repr2(hist1, nl=0))\n {1: 1, 2: 4, 39: 1, 900: 3, 1232: 2}\n >>> print(ub.repr2(hist4, nl=0))\n {1: 1, 2: 4, 39: 1, 900: 1, 1232: 0}"} {"_id": "q_3513", "text": "Find all duplicate items in a list.\n\n Search for all items that appear more than `k` times and return a mapping\n from each (k)-duplicate item to the positions it appeared in.\n\n Args:\n items (Iterable): hashable items possibly containing duplicates\n k (int): only return items that appear at least `k` times (default=2)\n key (Callable, optional): Returns indices where `key(items[i])`\n maps to a particular value at least k times.\n\n Returns:\n dict: maps each duplicate item to the indices at which it appears\n\n 
CommandLine:\n python -m ubelt.util_dict find_duplicates\n\n Example:\n >>> import ubelt as ub\n >>> items = [0, 0, 1, 2, 3, 3, 0, 12, 2, 9]\n >>> duplicates = ub.find_duplicates(items)\n >>> print('items = %r' % (items,))\n >>> print('duplicates = %r' % (duplicates,))\n >>> assert duplicates == {0: [0, 1, 6], 2: [3, 8], 3: [4, 5]}\n >>> assert ub.find_duplicates(items, 3) == {0: [0, 1, 6]}\n\n Example:\n >>> import ubelt as ub\n >>> items = [0, 0, 1, 2, 3, 3, 0, 12, 2, 9]\n >>> # note: k can be 0\n >>> duplicates = ub.find_duplicates(items, k=0)\n >>> print(ub.repr2(duplicates, nl=0))\n {0: [0, 1, 6], 1: [2], 2: [3, 8], 3: [4, 5], 9: [9], 12: [7]}\n\n Example:\n >>> import ubelt as ub\n >>> items = [10, 11, 12, 13, 14, 15, 16]\n >>> duplicates = ub.find_duplicates(items, key=lambda x: x // 2)\n >>> print(ub.repr2(duplicates, nl=0))\n {5: [0, 1], 6: [2, 3], 7: [4, 5]}"} {"_id": "q_3514", "text": "Constructs a dictionary that contains keys common between all inputs.\n The returned values will only belong to the first dictionary.\n\n Args:\n *args : a sequence of dictionaries (or sets of keys)\n\n Returns:\n Dict | OrderedDict :\n OrderedDict if the first argument is an OrderedDict, otherwise dict\n\n Notes:\n This function can be used as an alternative to `dict_subset` where any\n key not in the dictionary is ignored. 
See the following example:\n\n >>> dict_isect({'a': 1, 'b': 2, 'c': 3}, ['a', 'c', 'd'])\n {'a': 1, 'c': 3}\n\n Example:\n >>> dict_isect({'a': 1, 'b': 1}, {'b': 2, 'c': 2})\n {'b': 1}\n >>> dict_isect(odict([('a', 1), ('b', 2)]), odict([('c', 3)]))\n OrderedDict()\n >>> dict_isect()\n {}"} {"_id": "q_3515", "text": "applies a function to each of the values in a dictionary\n\n Args:\n func (callable): a function or indexable object\n dict_ (dict): a dictionary\n\n Returns:\n newdict: transformed dictionary\n\n CommandLine:\n python -m ubelt.util_dict map_vals\n\n Example:\n >>> import ubelt as ub\n >>> dict_ = {'a': [1, 2, 3], 'b': []}\n >>> func = len\n >>> newdict = ub.map_vals(func, dict_)\n >>> assert newdict == {'a': 3, 'b': 0}\n >>> print(newdict)\n >>> # Can also use indexables as `func`\n >>> dict_ = {'a': 0, 'b': 1}\n >>> func = [42, 21]\n >>> newdict = ub.map_vals(func, dict_)\n >>> assert newdict == {'a': 42, 'b': 21}\n >>> print(newdict)"} {"_id": "q_3516", "text": "r\"\"\"\n Swaps the keys and values in a dictionary.\n\n Args:\n dict_ (dict): dictionary to invert\n unique_vals (bool): if False, inverted keys are returned in a set.\n The default is True.\n\n Returns:\n dict: inverted\n\n Notes:\n The values must be hashable.\n\n If the original dictionary contains duplicate values, then only one of\n the corresponding keys will be returned and the others will be\n discarded. 
This can be prevented by setting `unique_vals=False`,\n causing the inverted keys to be returned in a set.\n\n CommandLine:\n python -m ubelt.util_dict invert_dict\n\n Example:\n >>> import ubelt as ub\n >>> dict_ = {'a': 1, 'b': 2}\n >>> inverted = ub.invert_dict(dict_)\n >>> assert inverted == {1: 'a', 2: 'b'}\n\n Example:\n >>> import ubelt as ub\n >>> dict_ = ub.odict([(2, 'a'), (1, 'b'), (0, 'c'), (None, 'd')])\n >>> inverted = ub.invert_dict(dict_)\n >>> assert list(inverted.keys())[0] == 'a'\n\n Example:\n >>> import ubelt as ub\n >>> dict_ = {'a': 1, 'b': 0, 'c': 0, 'd': 0, 'f': 2}\n >>> inverted = ub.invert_dict(dict_, unique_vals=False)\n >>> assert inverted == {0: {'b', 'c', 'd'}, 1: {'a'}, 2: {'f'}}"} {"_id": "q_3517", "text": "Recursively casts an AutoDict into a regular dictionary. All nested\n AutoDict values are also converted.\n\n Returns:\n dict: a copy of this dict without autovivification\n\n Example:\n >>> from ubelt.util_dict import AutoDict\n >>> auto = AutoDict()\n >>> auto[1] = 1\n >>> auto['n1'] = AutoDict()\n >>> static = auto.to_dict()\n >>> assert not isinstance(static, AutoDict)\n >>> assert not isinstance(static['n1'], AutoDict)"} {"_id": "q_3518", "text": "Perform a real symbolic link if possible. However, on most versions of\n windows you need special privileges to create a real symlink. Therefore, we\n try to create a symlink, but if that fails we fall back to using a junction.\n\n AFAIK, the main difference between symlinks and junctions is that symlinks\n can reference relative or absolute paths, whereas junctions always\n reference absolute paths. Not 100% on this though. Windows is weird.\n\n Note that junctions will not register as links via `islink`, but I\n believe real symlinks will."} {"_id": "q_3519", "text": "Creates real symlink. This will only work in versions greater than Windows\n Vista. Creating real symlinks requires admin permissions or at least\n specially enabled symlink permissions. 
On Windows 10 enabling developer\n mode should give you these permissions."} {"_id": "q_3520", "text": "Determines if a path is a win32 junction\n\n CommandLine:\n python -m ubelt._win32_links _win32_is_junction\n\n Example:\n >>> # xdoc: +REQUIRES(WIN32)\n >>> import ubelt as ub\n >>> root = ub.ensure_app_cache_dir('ubelt', 'win32_junction')\n >>> ub.delete(root)\n >>> ub.ensuredir(root)\n >>> dpath = join(root, 'dpath')\n >>> djunc = join(root, 'djunc')\n >>> ub.ensuredir(dpath)\n >>> _win32_junction(dpath, djunc)\n >>> assert _win32_is_junction(djunc) is True\n >>> assert _win32_is_junction(dpath) is False\n >>> assert _win32_is_junction('notafile') is False"} {"_id": "q_3521", "text": "Returns the location that the junction points, raises ValueError if path is\n not a junction.\n\n CommandLine:\n python -m ubelt._win32_links _win32_read_junction\n\n Example:\n >>> # xdoc: +REQUIRES(WIN32)\n >>> import ubelt as ub\n >>> root = ub.ensure_app_cache_dir('ubelt', 'win32_junction')\n >>> ub.delete(root)\n >>> ub.ensuredir(root)\n >>> dpath = join(root, 'dpath')\n >>> djunc = join(root, 'djunc')\n >>> ub.ensuredir(dpath)\n >>> _win32_junction(dpath, djunc)\n >>> path = djunc\n >>> pointed = _win32_read_junction(path)\n >>> print('pointed = {!r}'.format(pointed))"} {"_id": "q_3522", "text": "rmtree for win32 that treats junctions like directory symlinks.\n The junction removal portion may not be safe on race conditions.\n\n There is a known issue that prevents shutil.rmtree from\n deleting directories with junctions.\n https://bugs.python.org/issue31226"} {"_id": "q_3523", "text": "Test if two hard links point to the same location\n\n CommandLine:\n python -m ubelt._win32_links _win32_is_hardlinked\n\n Example:\n >>> # xdoc: +REQUIRES(WIN32)\n >>> import ubelt as ub\n >>> root = ub.ensure_app_cache_dir('ubelt', 'win32_hardlink')\n >>> ub.delete(root)\n >>> ub.ensuredir(root)\n >>> fpath1 = join(root, 'fpath1')\n >>> fpath2 = join(root, 'fpath2')\n >>> 
ub.touch(fpath1)\n >>> ub.touch(fpath2)\n >>> fjunc1 = _win32_junction(fpath1, join(root, 'fjunc1'))\n >>> fjunc2 = _win32_junction(fpath2, join(root, 'fjunc2'))\n >>> assert _win32_is_hardlinked(fjunc1, fpath1)\n >>> assert _win32_is_hardlinked(fjunc2, fpath2)\n >>> assert not _win32_is_hardlinked(fjunc2, fpath1)\n >>> assert not _win32_is_hardlinked(fjunc1, fpath2)"} {"_id": "q_3524", "text": "Using the windows cmd shell to get information about a directory"} {"_id": "q_3525", "text": "Returns generators that double with each value returned\n Config includes optional start value"} {"_id": "q_3526", "text": "Retrieve the adjacency matrix from the nx.DiGraph or numpy array."} {"_id": "q_3527", "text": "Apply causal discovery on observational data using CCDr.\n\n Args:\n data (pandas.DataFrame): DataFrame containing the data\n\n Returns:\n networkx.DiGraph: Solution given by the CCDR algorithm."} {"_id": "q_3528", "text": "Save data to the csv format by default, in two separate files.\n\n Optional keyword arguments can be passed to pandas."} {"_id": "q_3529", "text": "Launch an R script, starting from a template and replacing text in file\n before execution.\n\n Args:\n template (str): path to the template of the R script\n arguments (dict): Arguments that modify the template's placeholders\n with arguments\n output_function (function): Function to execute **after** the execution\n of the R script, and its output is returned by this function. 
Used\n traditionally as a function to retrieve the results of the\n execution.\n verbose (bool): Sets the verbosity of the R subprocess.\n debug (bool): If True, the generated scripts are not deleted.\n\n Return:\n Returns the output of the ``output_function`` if not `None`\n else `True` or `False` depending on whether the execution was\n successful."} {"_id": "q_3530", "text": "Execute a subprocess to check the package's availability.\n\n Args:\n package (str): Name of the package to be tested.\n\n Returns:\n bool: `True` if the package is available, `False` otherwise"} {"_id": "q_3531", "text": "Perform the independence test.\n\n :param a: input data\n :param b: input data\n :type a: array-like, numerical data\n :type b: array-like, numerical data\n :return: dependency statistic (1=Highly dependent, 0=Not dependent)\n :rtype: float"} {"_id": "q_3532", "text": "Evaluate a graph taking account of the hardware."} {"_id": "q_3533", "text": "Generate according to the topological order of the graph."} {"_id": "q_3534", "text": "Use CGNN to create a graph from scratch. All the possible structures\n are tested, which leads to a super exponential complexity. 
It would be\n preferable to start from a graph skeleton for large graphs.\n\n Args:\n data (pandas.DataFrame): Observational data on which causal\n discovery has to be performed.\n Returns:\n networkx.DiGraph: Solution given by CGNN."} {"_id": "q_3535", "text": "Modify and improve a directed acyclic graph solution using CGNN.\n\n Args:\n data (pandas.DataFrame): Observational data on which causal\n discovery has to be performed.\n dag (nx.DiGraph): Graph that provides the initial solution,\n on which the CGNN algorithm will be applied.\n alg (str): Exploration heuristic to use, among [\"HC\", \"HCr\",\n \"tabu\", \"EHC\"]\n Returns:\n networkx.DiGraph: Solution given by CGNN."} {"_id": "q_3536", "text": "Orient the undirected graph using GNN and apply CGNN to improve the graph.\n\n Args:\n data (pandas.DataFrame): Observational data on which causal\n discovery has to be performed.\n umg (nx.Graph): Graph that provides the skeleton, on which the GNN\n then the CGNN algorithm will be applied.\n alg (str): Exploration heuristic to use, among [\"HC\", \"HCr\",\n \"tabu\", \"EHC\"]\n Returns:\n networkx.DiGraph: Solution given by CGNN.\n \n .. 
note::\n GNN (``cdt.causality.pairwise.GNN``) is first used to orient the\n undirected graph and output a DAG before applying CGNN."} {"_id": "q_3537", "text": "Evaluate the entropy of the input variable.\n\n :param x: input variable 1D\n :return: entropy of x"} {"_id": "q_3538", "text": "Evaluate a pair using the IGCI model.\n\n :param a: Input variable 1D\n :param b: Input variable 1D\n :param kwargs: {refMeasure: Scaling method (gaussian, integral or None),\n estimator: method used to evaluate the pairs (entropy or integral)}\n :return: Value of the IGCI model; >0 if a->b, <0 otherwise"} {"_id": "q_3539", "text": "Train the model.\n\n Args:\n x_tr (pd.DataFrame): CEPC format dataframe containing the pairs\n y_tr (pd.DataFrame or np.ndarray): labels associated to the pairs"} {"_id": "q_3540", "text": "Predict the causal score using a trained RCC model\n\n Args:\n x (numpy.array or pandas.DataFrame or pandas.Series): First variable or dataset.\n args (numpy.array): second variable (optional depending on the 1st argument).\n\n Returns:\n float: Causation score (Value : 1 if a->b and -1 if b->a)"} {"_id": "q_3541", "text": "For one variable, predict its neighbours.\n\n Args:\n df_features (pandas.DataFrame):\n df_target (pandas.Series):\n nh (int): number of hidden units\n idx (int): (optional) for printing purposes\n dropout (float): probability of dropout (between 0 and 1)\n activation_function (torch.nn.Module): activation function of the NN\n lr (float): learning rate of Adam\n l1 (float): L1 penalization coefficient\n batch_size (int): batch size, defaults to full-batch\n train_epochs (int): number of train epochs\n test_epochs (int): number of test epochs\n device (str): cuda or cpu device (defaults to ``cdt.SETTINGS.default_device``)\n verbose (bool): verbosity (defaults to ``cdt.SETTINGS.verbose``)\n nb_runs (int): number of bootstrap runs\n\n Returns:\n list: scores of each feature relative to the target"} {"_id": "q_3542", "text": "Build a 
skeleton using a pairwise independence criterion.\n\n Args:\n data (pandas.DataFrame): Raw data table\n\n Returns:\n networkx.Graph: Undirected graph representing the skeleton."} {"_id": "q_3543", "text": "Run GIES on an undirected graph.\n\n Args:\n data (pandas.DataFrame): DataFrame containing the data\n graph (networkx.Graph): Skeleton of the graph to orient\n\n Returns:\n networkx.DiGraph: Solution given by the GIES algorithm."} {"_id": "q_3544", "text": "Feed-forward through the network."} {"_id": "q_3545", "text": "Execute SAM on a dataset given a skeleton or not.\n\n Args:\n data (pandas.DataFrame): Observational data for estimation of causal relationships by SAM\n skeleton (numpy.ndarray): A priori knowledge about the causal relationships as an adjacency matrix.\n Can be fed either directed or undirected links.\n nruns (int): Number of runs to be made for causal estimation.\n Recommended: >=12 for optimal performance.\n njobs (int): Number of jobs to be run in parallel.\n Recommended: 1 if no GPU available, else 2*number of GPUs.\n gpus (int): Number of available GPUs for the algorithm.\n verbose (bool): verbose mode\n plot (bool): Plot losses interactively. Not recommended if nruns>1\n plot_generated_pair (bool): plots a generated pair interactively. 
Not recommended if nruns>1\n Returns:\n networkx.DiGraph: Graph estimated by SAM, where A[i,j] is the term\n of the ith variable for the jth generator."} {"_id": "q_3546", "text": "Infer causal relationships between 2 variables using the CDS statistic\n\n Args:\n a (numpy.ndarray): Variable 1\n b (numpy.ndarray): Variable 2\n\n Returns:\n float: Causation score (Value : 1 if a->b and -1 if b->a)"} {"_id": "q_3547", "text": "Prediction method for pairwise causal inference using the ANM model.\n\n Args:\n a (numpy.ndarray): Variable 1\n b (numpy.ndarray): Variable 2\n\n Returns:\n float: Causation score (Value : 1 if a->b and -1 if b->a)"} {"_id": "q_3548", "text": "Compute the fitness score of the ANM model in the x->y direction.\n\n Args:\n a (numpy.ndarray): Variable seen as cause\n b (numpy.ndarray): Variable seen as effect\n\n Returns:\n float: ANM fit score"} {"_id": "q_3549", "text": "Predict the graph skeleton.\n\n Args:\n data (pandas.DataFrame): observational data\n alpha (float): regularization parameter\n max_iter (int): maximum number of iterations\n\n Returns:\n networkx.Graph: Graph skeleton"} {"_id": "q_3550", "text": "Autoset GPU parameters using CUDA_VISIBLE_DEVICES variables.\n\n Return default config if variable not set.\n :param set_var: Variable to set. Must be of type ConfigSettings"} {"_id": "q_3551", "text": "Generic predict method; chooses the subfunction best\n suited to the arguments.\n\n Depending on the type of `x` and of `*args`, this function proceeds to execute\n different functions in the priority order:\n\n 1. If ``args[0]`` is a ``networkx.(Di)Graph``, then ``self.orient_graph`` is executed.\n 2. If ``args[0]`` exists, then ``self.predict_proba`` is executed.\n 3. If ``x`` is a ``pandas.DataFrame``, then ``self.predict_dataset`` is executed.\n 4. 
If ``x`` is a ``pandas.Series``, then ``self.predict_proba`` is executed.\n\n Args:\n x (numpy.array or pandas.DataFrame or pandas.Series): First variable or dataset.\n args (numpy.array or networkx.Graph): graph or second variable.\n\n Returns:\n pandas.Dataframe or networkx.Digraph: predictions output"} {"_id": "q_3552", "text": "Generic dataset prediction function.\n\n Runs the score independently on all pairs.\n\n Args:\n x (pandas.DataFrame): a CEPC format Dataframe.\n kwargs (dict): additional arguments for the algorithms\n\n Returns:\n pandas.DataFrame: a Dataframe with the predictions."} {"_id": "q_3553", "text": "Run the algorithm on a directed_graph.\n\n Args:\n data (pandas.DataFrame): DataFrame containing the data\n graph (networkx.DiGraph): Skeleton of the graph to orient\n\n Returns:\n networkx.DiGraph: Solution on the given skeleton.\n\n .. warning::\n The algorithm is run on the skeleton of the given graph."} {"_id": "q_3554", "text": "Compute the gaussian kernel on a 1D vector."} {"_id": "q_3555", "text": "Init a noise variable."} {"_id": "q_3556", "text": "Runs Jarfo independently on all pairs.\n\n Args:\n x (pandas.DataFrame): a CEPC format Dataframe.\n kwargs (dict): additional arguments for the algorithms\n\n Returns:\n pandas.DataFrame: a Dataframe with the predictions."} {"_id": "q_3557", "text": "Use Jarfo to predict the causal direction of a pair of vars.\n\n Args:\n a (numpy.ndarray): Variable 1\n b (numpy.ndarray): Variable 2\n idx (int): (optional) index number for printing purposes\n\n Returns:\n float: Causation score (Value : 1 if a->b and -1 if b->a)"} {"_id": "q_3558", "text": "Implementation of the ARACNE algorithm.\n\n Args:\n mat (numpy.ndarray): matrix, if it is a square matrix, the program assumes\n it is a relevance matrix where mat(i,j) represents the similarity content\n between nodes i and j. 
Elements of matrix should be\n non-negative.\n\n Returns:\n mat_nd (numpy.ndarray): Output deconvolved matrix (direct dependency matrix). Its components\n represent direct edge weights of observed interactions.\n\n .. note::\n Ref: ARACNE: An Algorithm for the Reconstruction of Gene Regulatory Networks in a Mammalian Cellular Context\n Adam A Margolin, Ilya Nemenman, Katia Basso, Chris Wiggins, Gustavo Stolovitzky, Riccardo Dalla Favera and Andrea Califano\n DOI: https://doi.org/10.1186/1471-2105-7-S1-S7"} {"_id": "q_3559", "text": "Apply deconvolution to a networkx graph.\n\n Args:\n g (networkx.Graph): Graph to apply deconvolution to\n alg (str): Algorithm to use ('aracne', 'clr', 'nd')\n kwargs (dict): extra options for algorithms\n\n Returns:\n networkx.Graph: graph with undirected links removed."} {"_id": "q_3560", "text": "Input a graph and output a DAG.\n\n The heuristic is to reverse the edge with the lowest score of the cycle\n if possible, else remove it.\n\n Args:\n g (networkx.DiGraph): Graph to modify to output a DAG\n\n Returns:\n networkx.DiGraph: DAG made out of the input graph."} {"_id": "q_3561", "text": "Returns the weighted average and standard deviation.\n\n values, weights -- numpy ndarrays with the same shape."} {"_id": "q_3562", "text": "Pass data through the net structure.\n\n :param x: input data: shape (:,1)\n :type x: torch.Variable\n :return: output of the shallow net\n :rtype: torch.Variable"} {"_id": "q_3563", "text": "Run the GNN on a pair x,y of FloatTensor data."} {"_id": "q_3564", "text": "Run multiple times GNN to estimate the causal direction.\n\n Args:\n a (np.ndarray): Variable 1\n b (np.ndarray): Variable 2\n nb_runs (int): number of runs to execute per batch (before testing for significance with t-test).\n nb_jobs (int): number of runs to execute in parallel. 
(Initialized with ``cdt.SETTINGS.NB_JOBS``)\n gpu (bool): use gpu (Initialized with ``cdt.SETTINGS.GPU``)\n idx (int): (optional) index of the pair, for printing purposes\n verbose (bool): verbosity (Initialized with ``cdt.SETTINGS.verbose``)\n ttest_threshold (float): threshold to stop the bootstraps before ``nb_max_runs`` if the difference is significant\n nb_max_runs (int): Max number of bootstraps\n train_epochs (int): Number of epochs during which the model is going to be trained\n test_epochs (int): Number of epochs during which the model is going to be tested\n\n Returns:\n float: Causal score of the pair (Value : 1 if a->b and -1 if b->a)"} {"_id": "q_3565", "text": "Passing data through the network.\n\n :param x: 2d tensor containing both (x,y) Variables\n :return: output of the net"} {"_id": "q_3566", "text": "Updates all rows that match the filter."} {"_id": "q_3567", "text": "Creates multiple new records in the database.\n\n This allows specifying custom conflict behavior using .on_conflict().\n If no special behavior was specified, this uses the normal Django create(..)\n\n Arguments:\n rows:\n An array of dictionaries, where each dictionary\n describes the fields to insert.\n\n return_model (default: False):\n If model instances should be returned rather than\n just dicts.\n\n Returns:\n A list of either the dicts of the rows inserted, including the pk or\n the models of the rows inserted with defaults for any fields not specified"} {"_id": "q_3568", "text": "Creates a new record in the database.\n\n This allows specifying custom conflict behavior using .on_conflict().\n If no special behavior was specified, this uses the normal Django create(..)\n\n Arguments:\n fields:\n The fields of the row to create.\n\n Returns:\n The primary key of the record that was created."} {"_id": "q_3569", "text": "Creates a new record in the database and then gets\n the entire row.\n\n This allows specifying custom conflict behavior using .on_conflict().\n If no special 
behavior was specified, this uses the normal Django create(..)\n\n Arguments:\n fields:\n The fields of the row to create.\n\n Returns:\n The model instance representing the row that was created."} {"_id": "q_3570", "text": "Verifies whether this field is going to modify something\n on its own.\n\n \"Magical\" means that a field modifies the field value\n during the pre_save.\n\n Arguments:\n model_instance:\n The model instance the field is defined on.\n\n field:\n The field to check for whether it is\n magical.\n\n is_insert:\n Pretend whether this is an insert?\n\n Returns:\n True when this field modifies something."} {"_id": "q_3571", "text": "Gets the fields to use in an upsert.\n\n This is some nice magic. We'll split the fields into\n a group of \"insert fields\" and \"update fields\":\n\n INSERT INTO bla (\"val1\", \"val2\") ON CONFLICT DO UPDATE SET val1 = EXCLUDED.val1\n\n ^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^\n insert_fields update_fields\n\n Often, fields appear in both lists. But, for example,\n a :see:DateTime field with `auto_now_add=True` set, will\n only appear in \"insert_fields\", since it won't be set\n on existing rows.\n\n Other than that, the user specifies a list of fields\n in the upsert() call. That might not be all fields. The\n user could decide to leave out optional fields. 
If we\n end up doing an update, we don't want to overwrite\n those non-specified fields.\n\n We cannot just take the list of fields the user\n specifies, because as mentioned, some fields\n make modifications to the model on their own.\n\n We'll have to detect which fields make modifications\n and include them in the list of insert/update fields."} {"_id": "q_3572", "text": "When a model gets created or updated."} {"_id": "q_3573", "text": "Compiles the HStore value into SQL.\n\n Compiles expressions contained in the values\n of HStore entries as well.\n\n Given a dictionary like:\n\n dict(key1='val1', key2='val2')\n\n The resulting SQL will be:\n\n hstore(hstore('key1', 'val1'), hstore('key2', 'val2'))"} {"_id": "q_3574", "text": "Adds an extra condition to an existing JOIN.\n\n This allows you to, for example, do:\n\n INNER JOIN othertable ON (mytable.id = othertable.other_id AND [extra conditions])\n\n This does not work if nothing else in your query already generates the\n initial join in the first place."} {"_id": "q_3575", "text": "Sets the values to be used in this query.\n\n Insert fields are fields that are definitely\n going to be inserted, and if an existing row\n is found, are going to be overwritten with the\n specified value.\n\n Update fields are fields that should be overwritten\n in case an update takes place rather than an insert.\n If we're dealing with an INSERT, these will not be used.\n\n Arguments:\n objs:\n The objects to apply this query to.\n\n insert_fields:\n The fields to use in the INSERT statement\n\n update_fields:\n The fields to only use in the UPDATE statement."} {"_id": "q_3576", "text": "Creates a REQUIRED CONSTRAINT for the specified hstore key."} {"_id": "q_3577", "text": "Renames an existing REQUIRED CONSTRAINT for the specified\n hstore key."} {"_id": "q_3578", "text": "Drops a REQUIRED CONSTRAINT for the specified hstore key."} {"_id": "q_3579", "text": "Gets the name for a CONSTRAINT that applies\n to a single hstore 
key.\n\n Arguments:\n table:\n The name of the table the field is\n a part of.\n\n field:\n The hstore field to create a\n UNIQUE INDEX for.\n\n key:\n The name of the hstore key\n to create the name for.\n\n Returns:\n The name for the UNIQUE index."} {"_id": "q_3580", "text": "Creates the actual SQL used when applying the migration."} {"_id": "q_3581", "text": "Run to prepare the configured database.\n\n This is where we enable the `hstore` extension\n if it wasn't enabled yet."} {"_id": "q_3582", "text": "Override the base class so it doesn't cast all values\n to strings.\n\n psqlextra supports expressions in hstore fields, so casting\n all values to strings is a bad idea."} {"_id": "q_3583", "text": "Rewrites a formed SQL INSERT query to include\n the ON CONFLICT clause.\n\n Arguments:\n sql:\n The SQL INSERT query to rewrite.\n\n params:\n The parameters passed to the query.\n\n returning:\n What to put in the `RETURNING` clause\n of the resulting query.\n\n Returns:\n A tuple of the rewritten SQL query and new params."} {"_id": "q_3584", "text": "Rewrites a formed SQL INSERT query to include\n the ON CONFLICT DO UPDATE clause."} {"_id": "q_3585", "text": "Rewrites a formed SQL INSERT query to include\n the ON CONFLICT DO NOTHING clause."} {"_id": "q_3586", "text": "Builds the `conflict_target` for the ON CONFLICT\n clause."} {"_id": "q_3587", "text": "Formats a field's name for usage in SQL.\n\n Arguments:\n field_name:\n The field name to format.\n\n Returns:\n The specified field name formatted for\n usage in SQL."} {"_id": "q_3588", "text": "Formats a field's value for usage in SQL.\n\n Arguments:\n field_name:\n The name of the field to format\n the value of.\n\n Returns:\n The field's value formatted for usage\n in SQL."} {"_id": "q_3589", "text": "Creates a UNIQUE constraint for the specified hstore keys."} {"_id": "q_3590", "text": "Renames an existing UNIQUE constraint for the specified\n hstore keys."} {"_id": "q_3591", "text": "Gets the name for a 
UNIQUE INDEX that applies\n to one or more keys in a hstore field.\n\n Arguments:\n table:\n The name of the table the field is\n a part of.\n\n field:\n The hstore field to create a\n UNIQUE INDEX for.\n\n key:\n The name of the hstore key\n to create the name for.\n\n This can also be a tuple\n of multiple names.\n\n Returns:\n The name for the UNIQUE index."} {"_id": "q_3592", "text": "Iterates over the keys marked as \"unique\"\n in the specified field.\n\n Arguments:\n field:\n The field whose keys to\n iterate over."} {"_id": "q_3593", "text": "Adds an extra condition to this join.\n\n Arguments:\n field:\n The field that the condition will apply to.\n\n value:\n The value to compare."} {"_id": "q_3594", "text": "Compiles this JOIN into a SQL string."} {"_id": "q_3595", "text": "Approximate the 95% confidence interval for Student's T distribution.\n\n Given the degrees of freedom, returns an approximation to the 95%\n confidence interval for the Student's T distribution.\n\n Args:\n df: An integer, the number of degrees of freedom.\n\n Returns:\n A float."} {"_id": "q_3596", "text": "Find the pooled sample variance for two samples.\n\n Args:\n sample1: one sample.\n sample2: the other sample.\n\n Returns:\n Pooled sample variance, as a float."} {"_id": "q_3597", "text": "Determine whether two samples differ significantly.\n\n This uses a Student's two-sample, two-tailed t-test with alpha=0.95.\n\n Args:\n sample1: one sample.\n sample2: the other sample.\n\n Returns:\n (significant, t_score) where significant is a bool indicating whether\n the two samples differ significantly; t_score is the score from the\n two-sample T test."} {"_id": "q_3598", "text": "Return a topological sorting of nodes in a graph.\n\n roots - list of root nodes to search from\n getParents - function which returns the parents of a given node"} {"_id": "q_3599", "text": "N-Queens solver.\n\n Args:\n queen_count: the number of queens to solve for. 
This is also the\n board size.\n\n Yields:\n Solutions to the problem. Each yielded value looks like\n (3, 8, 2, 1, 4, ..., 6) where each number is the column position for the\n queen, and the index into the tuple indicates the row."} {"_id": "q_3600", "text": "uct tree search"} {"_id": "q_3601", "text": "random play until both players pass"} {"_id": "q_3602", "text": "Filters out benchmarks not supported by both Pythons.\n\n Args:\n benchmarks: a set() of benchmark names\n bench_funcs: dict mapping benchmark names to functions\n python: the interpreter commands (as lists)\n\n Returns:\n The filtered set of benchmark names"} {"_id": "q_3603", "text": "Recursively expand benchmark names.\n\n Args:\n bm_name: string naming a benchmark or benchmark group.\n\n Yields:\n Names of actual benchmarks, with all group names fully expanded."} {"_id": "q_3604", "text": "Initialize the strings we'll run the regexes against.\n\n The strings used in the benchmark are prefixed and suffixed by\n strings that are repeated n times.\n\n The sequence n_values contains the values for n.\n If n_values is None the values of n from the original benchmark\n are used.\n\n The generated list of strings is cached in the string_tables\n variable, which is indexed by n.\n\n Returns:\n A list of string prefix/suffix lengths."} {"_id": "q_3605", "text": "Returns the domain of the B-Spline"} {"_id": "q_3606", "text": "Fetch the messages.\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3607", "text": "Fetch the entries from the url.\n\n The method retrieves all entries from an RSS url\n\n :param category: the category of items to fetch\n\n :returns: a generator of entries"} {"_id": "q_3608", "text": "Fetch the entries\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3609", "text": "Returns the RSS argument parser."} 
{"_id": "q_3610", "text": "Fetch the bugs from the repository.\n\n The method retrieves, from a Bugzilla repository, the bugs\n updated since the given date.\n\n :param category: the category of items to fetch\n :param from_date: obtain bugs updated since this date\n\n :returns: a generator of bugs"} {"_id": "q_3611", "text": "Get issue notes"} {"_id": "q_3612", "text": "Get merge versions"} {"_id": "q_3613", "text": "Get the merge requests from pagination"} {"_id": "q_3614", "text": "Get the merge versions from pagination"} {"_id": "q_3615", "text": "Get the notes from pagination"} {"_id": "q_3616", "text": "Get emojis of a note"} {"_id": "q_3617", "text": "Initialize rate limit information"} {"_id": "q_3618", "text": "Returns the GitLab argument parser."} {"_id": "q_3619", "text": "Fetch the messages from the channel.\n\n This method fetches the messages stored on the channel that were\n sent since the given date.\n\n :param category: the category of items to fetch\n :param from_date: obtain messages sent since this date\n\n :returns: a generator of messages"} {"_id": "q_3620", "text": "Fetch the number of members in a conversation, which is a supertype for public and\n private ones, DM and group DM.\n\n :param conversation: the ID of the conversation"} {"_id": "q_3621", "text": "Fetch user info."} {"_id": "q_3622", "text": "Returns the Slack argument parser."} {"_id": "q_3623", "text": "Extracts and converts the update time from a Bugzilla item.\n\n The timestamp is extracted from 'delta_ts' field. This date is\n converted to UNIX timestamp format. Because Bugzilla servers ignore\n the timezone on HTTP requests, it will be ignored during the\n conversion, too.\n\n :param item: item generated by the backend\n\n :returns: a UNIX timestamp"} {"_id": "q_3624", "text": "Parse a Bugzilla bugs details XML stream.\n\n This method returns a generator which parses the given XML,\n producing an iterator of dictionaries. 
Each dictionary stores\n the information related to a parsed bug.\n\n If the given XML is invalid or does not contain any bug, the\n method will raise a ParseError exception.\n\n :param raw_xml: XML string to parse\n\n :returns: a generator of parsed bugs\n\n :raises ParseError: raised when an error occurs parsing\n the given XML stream"} {"_id": "q_3625", "text": "Logout from the server."} {"_id": "q_3626", "text": "Get metadata information in XML format."} {"_id": "q_3627", "text": "Get a summary of bugs in CSV format.\n\n :param from_date: retrieve bugs that were updated from that date"} {"_id": "q_3628", "text": "Get the information of a list of bugs in XML format.\n\n :param bug_ids: list of bug identifiers"} {"_id": "q_3629", "text": "Get the activity of a bug in HTML format.\n\n :param bug_id: bug identifier"} {"_id": "q_3630", "text": "Fetch the events from the server.\n\n This method fetches those events of a group stored on the server\n that were updated since the given date. 
Data comments and rsvps\n are included within each event.\n\n :param category: the category of items to fetch\n :param from_date: obtain events updated since this date\n :param to_date: obtain events updated before this date\n :param filter_classified: remove classified fields from the resulting items\n\n :returns: a generator of events"} {"_id": "q_3631", "text": "Fetch the events\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3632", "text": "Fetch the rsvps of a given event."} {"_id": "q_3633", "text": "Fetch an Askbot HTML question body.\n\n The method fetches the HTML question retrieving the\n question body of the item question received\n\n :param question: item with the question itself\n\n :returns: a list of HTML page/s for the question"} {"_id": "q_3634", "text": "Fetch all the comments of an Askbot question and answers.\n\n The method fetches the list of every comment existing in a question and\n its answers.\n\n :param question: item with the question itself\n\n :returns: a list of comments with the ids as hashes"} {"_id": "q_3635", "text": "Build an Askbot HTML response.\n\n The method puts together all the information regarding a question\n\n :param html_question: array of HTML raw pages\n :param question: question object from the API\n :param comments: list of comments to add\n\n :returns: a dict item with the parsed question information"} {"_id": "q_3636", "text": "Retrieve a question page using the API.\n\n :param page: page to retrieve"} {"_id": "q_3637", "text": "Retrieve a raw HTML question and all its information.\n\n :param question_id: question identifier\n :param page: page to retrieve"} {"_id": "q_3638", "text": "Retrieve a list of comments by a given id.\n\n :param object_id: object identifier"} {"_id": "q_3639", "text": "Parse the question info container of a given HTML question.\n\n The method parses the information available in the question 
information\n container. The container can have up to 2 elements: the first one\n contains the information related to the user who generated the question\n and the date (if any). The second one contains the date of the update,\n and the user who updated it (if not the same who generated the question).\n\n :param html_question: raw HTML question element\n\n :returns: an object with the parsed information"} {"_id": "q_3640", "text": "Parse the answers of a given HTML question.\n\n The method parses the answers related to a given HTML question,\n as well as all the comments related to the answer.\n\n :param html_question: raw HTML question element\n\n :returns: a list with the answers"} {"_id": "q_3641", "text": "Parse number of answer pages to paginate over them.\n\n :param html_question: raw HTML question element\n\n :returns: an integer with the number of pages"} {"_id": "q_3642", "text": "Parse the user information of a given HTML container.\n\n The method parses all the available user information in the container.\n If the class \"user-info\" exists, the method will get all the available\n information in the container. Otherwise, if a class \"tip\" exists, it will be\n a wiki post with no user associated. 
Else, it can be an empty container.\n\n :param update_info: beautiful soup answer container element\n\n :returns: an object with the parsed information"} {"_id": "q_3643", "text": "Specific fetch for gerrit 2.8 version.\n\n Get open and closed reviews in different queries.\n Take the newer review from both lists and iterate."} {"_id": "q_3644", "text": "Return the Gerrit server version."} {"_id": "q_3645", "text": "Get the reviews starting from last_item."} {"_id": "q_3646", "text": "Execute gerrit command against the archive"} {"_id": "q_3647", "text": "Execute gerrit command with retry if it fails"} {"_id": "q_3648", "text": "Get data associated to an issue"} {"_id": "q_3649", "text": "Get attachments of an issue"} {"_id": "q_3650", "text": "Get activities on an issue"} {"_id": "q_3651", "text": "Get data associated to a user"} {"_id": "q_3652", "text": "Get the user data by URL"} {"_id": "q_3653", "text": "Get the issue data by its ID"} {"_id": "q_3654", "text": "Get a collection list of a given issue"} {"_id": "q_3655", "text": "Build URL project"} {"_id": "q_3656", "text": "Fetch the groupsio paginated subscriptions for a given token\n\n :param per_page: number of subscriptions per page\n\n :returns: an iterator of subscriptions"} {"_id": "q_3657", "text": "Fetch requests from groupsio API"} {"_id": "q_3658", "text": "Generate a UUID based on the given parameters.\n\n The UUID will be the SHA1 of the concatenation of the values\n from the list. The separator between these values is ':'.\n Each value must be a non-empty string, otherwise, the function\n will raise an exception.\n\n :param *args: list of arguments used to generate the UUID\n\n :returns: a universal unique identifier\n\n :raises ValueError: when any of the values is not a string,\n is empty or `None`."} {"_id": "q_3659", "text": "Fetch items from an archive manager.\n\n Generator to get the items of a category (previously fetched\n by the given backend class) from an archive manager. 
Only those\n items archived after the given date will be returned.\n\n The parameters needed to initialize `backend` and get the\n items are given using `backend_args` dict parameter.\n\n :param backend_class: backend class to retrieve items\n :param backend_args: dict of arguments needed to retrieve the items\n :param manager: archive manager where the items will be retrieved\n :param category: category of the items to retrieve\n :param archived_after: return items archived after this date\n\n :returns: a generator of archived items"} {"_id": "q_3660", "text": "Find available backends.\n\n Look for the Perceval backends and commands under `top_package`\n and its sub-packages. When `top_package` defines a namespace,\n backends under that same namespace will be found too.\n\n :param top_package: package storing backends\n\n :returns: a tuple with two dicts: one with `Backend` classes and one\n with `BackendCommand` classes"} {"_id": "q_3661", "text": "Fetch items from the repository.\n\n The method retrieves items from a repository.\n\n To remove classified fields from the resulting items, set\n the parameter `filter_classified`. Take into account this\n parameter is incompatible with archiving items. Raw client\n data are archived before any other process. Therefore,\n classified data are stored within the archive. To prevent\n from possible data leaks or security issues when users do\n not need these fields, archiving and filtering are not\n compatible.\n\n :param category: the category of the items fetched\n :param filter_classified: remove classified fields from the resulting items\n :param kwargs: a list of other parameters (e.g., from_date, offset, etc.\n specific for each backend)\n\n :returns: a generator of items\n\n :raises BackendError: either when the category is not valid or\n 'filter_classified' and 'archive' are active at the same time."} {"_id": "q_3662", "text": "Fetch the questions from an archive.\n\n It returns the items stored within an archive. 
If this method is called but\n no archive was provided, the method will raise an `ArchiveError` exception.\n\n :returns: a generator of items\n\n :raises ArchiveError: raised when an error occurs accessing an archive"} {"_id": "q_3663", "text": "Remove classified or confidential data from an item.\n\n It removes those fields that contain data considered as classified.\n Classified fields are defined in `CLASSIFIED_FIELDS` class attribute.\n\n :param item: fields will be removed from this item\n\n :returns: the same item but with confidential data filtered"} {"_id": "q_3664", "text": "Parse a list of arguments.\n\n Parse argument strings needed to run a backend command. The result\n will be an `argparse.Namespace` object populated with the values\n obtained after the validation of the parameters.\n\n :param args: argument strings\n\n :result: an object with the parsed values"} {"_id": "q_3665", "text": "Activate archive arguments parsing"} {"_id": "q_3666", "text": "Activate output arguments parsing"} {"_id": "q_3667", "text": "Fetch and write items.\n\n This method runs the backend to fetch the items from the given\n origin. Items are converted to JSON objects and written to the\n defined output.\n\n If `fetch-archive` parameter was given as an argument during\n the initialization of the instance, the items will be retrieved\n using the archive manager."} {"_id": "q_3668", "text": "Initialize archive based on the parsed parameters"} {"_id": "q_3669", "text": "Extracts the update time from an MBox item.\n\n The timestamp used is extracted from 'Date' field in its\n several forms. 
This date is converted to UNIX timestamp\n format.\n\n :param item: item generated by the backend\n\n :returns: a UNIX timestamp"} {"_id": "q_3670", "text": "Fetch and parse the messages from a mailing list"} {"_id": "q_3671", "text": "Copy the contents of a mbox to a temporary file"} {"_id": "q_3672", "text": "Fetch the commits\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3673", "text": "Returns the Git argument parser."} {"_id": "q_3674", "text": "Parse the Git log stream."} {"_id": "q_3675", "text": "Clone a Git repository.\n\n Make a bare copy of the repository stored in `uri` into `dirpath`.\n The repository would be either local or remote.\n\n :param uri: URI of the repository\n :param dirpath: directory where the repository will be cloned\n\n :returns: a `GitRepository` class having cloned the repository\n\n :raises RepositoryError: when an error occurs cloning the given\n repository"} {"_id": "q_3676", "text": "Count the objects of a repository.\n\n The method returns the total number of objects (packed and unpacked)\n available on the repository.\n\n :raises RepositoryError: when an error occurs counting the objects\n of a repository"} {"_id": "q_3677", "text": "Check if the repo is in a detached state.\n\n The repository is in a detached state when HEAD is not a symbolic\n reference.\n\n :returns: whether the repository is detached or not\n\n :raises RepositoryError: when an error occurs checking the state\n of the repository"} {"_id": "q_3678", "text": "Keep the repository in sync.\n\n This method will synchronize the repository with its 'origin',\n fetching newest objects and updating references. 
It uses low\n level commands which allow keeping track of which things\n have changed in the repository.\n\n The method also returns a list of hashes related to the new\n commits fetched during the process.\n\n :returns: list of new commits\n\n :raises RepositoryError: when an error occurs synchronizing\n the repository"} {"_id": "q_3679", "text": "Read the commit log from the repository.\n\n The method returns the Git log of the repository using the\n following options:\n\n git log --raw --numstat --pretty=fuller --decorate=full\n --all --reverse --topo-order --parents -M -C -c\n --remotes=origin\n\n When `from_date` is given, it gets the commits equal or newer\n than that date. This date is given in a datetime object.\n\n The list of branches is a list of strings, with the names of the\n branches to fetch. If the list of branches is empty, no commit\n is fetched. If the list of branches is None, all commits\n for all branches will be fetched.\n\n :param from_date: fetch commits newer than a specific\n date (inclusive)\n :param branches: names of branches to fetch from (default: None)\n :param encoding: encode the log using this format\n\n :returns: a generator where each item is a line from the log\n\n :raises EmptyRepositoryError: when the repository is empty and\n the action cannot be performed\n :raises RepositoryError: when an error occurs fetching the log"} {"_id": "q_3680", "text": "Show the data of a set of commits.\n\n The method returns the output of Git show command for a\n set of commits using the following options:\n\n git show --raw --numstat --pretty=fuller --decorate=full\n --parents -M -C -c [...]\n\n When the list of commits is empty, the command will return\n data about the last commit, like the default behaviour of\n `git show`.\n\n :param commits: list of commits to show data\n :param encoding: encode the output using this format\n\n :returns: a generator where each item is a line from the show output\n\n :raises EmptyRepositoryError: when the 
repository is empty and\n the action cannot be performed\n :raises RepositoryError: when an error occurs fetching the show output"} {"_id": "q_3681", "text": "Update references removing old ones."} {"_id": "q_3682", "text": "Get the current list of local or remote refs."} {"_id": "q_3683", "text": "Reads self.proc.stderr.\n\n Usually, this should be read in a thread, to prevent blocking\n the read from stdout if the stderr buffer is filled, and this\n function is not called because the program is busy in the\n stderr reading loop.\n\n Reads self.proc.stderr (self.proc is the subprocess running\n the git command), and reads / writes self.failed_message\n (the message sent to stderr when git fails, usually one line)."} {"_id": "q_3684", "text": "Run a command.\n\n Execute `cmd` command in the directory set by `cwd`. Environment\n variables can be set using the `env` dictionary. The output\n data is returned as encoded bytes.\n\n Commands whose return status codes are non-zero will\n be treated as failed. Error codes considered as valid can be\n ignored by giving them in the `ignored_error_codes` list.\n\n :returns: the output of the command as encoded bytes\n\n :raises RepositoryError: when an error occurs running the command"} {"_id": "q_3685", "text": "Fetch the tweets from the server.\n\n This method fetches tweets from the TwitterSearch API published in the last seven days.\n\n :param category: the category of items to fetch\n :param since_id: if not null, it returns results with an ID greater than the specified ID\n :param max_id: if not None, it returns results with an ID less than the specified ID\n :param geocode: if enabled, returns tweets by users located at latitude,longitude,\"mi\"|\"km\"\n :param lang: if enabled, restricts tweets to the given language, given by an ISO 639-1 code\n :param include_entities: if disabled, it excludes entities node\n :param tweets_type: type of tweets returned. 
Default is \u201cmixed\u201d, others are \"recent\" and \"popular\"\n\n :returns: a generator of tweets"} {"_id": "q_3686", "text": "Fetch the tweets\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3687", "text": "Returns the Twitter argument parser."} {"_id": "q_3688", "text": "Fetch data from Google API.\n\n The method retrieves a list of hits for some\n given keywords using the Google API.\n\n :param category: the category of items to fetch\n\n :returns: a generator of data"} {"_id": "q_3689", "text": "Fetch Google hit items\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3690", "text": "Parse the hits returned by the Google Search API"} {"_id": "q_3691", "text": "Get repo info about stars, watchers and forks"} {"_id": "q_3692", "text": "Get issue reactions"} {"_id": "q_3693", "text": "Get reactions on issue comments"} {"_id": "q_3694", "text": "Get issue assignees"} {"_id": "q_3695", "text": "Get pull request requested reviewers"} {"_id": "q_3696", "text": "Get pull request commit hashes"} {"_id": "q_3697", "text": "Get pull review comment reactions"} {"_id": "q_3698", "text": "Get reactions of an issue"} {"_id": "q_3699", "text": "Fetch the issues from the repository.\n\n The method retrieves, from a GitHub repository, the issues\n updated since the given date.\n\n :param from_date: obtain issues updated since this date\n\n :returns: a generator of issues"} {"_id": "q_3700", "text": "Get pull requested reviewers"} {"_id": "q_3701", "text": "Get pull request commits"} {"_id": "q_3702", "text": "Get reactions of a review comment"} {"_id": "q_3703", "text": "Get the user information and update the user cache"} {"_id": "q_3704", "text": "Return array of all tokens remaining API points"} {"_id": "q_3705", "text": "Check if we need to switch GitHub API tokens"} {"_id": "q_3706", "text": "Update 
rate limits data for the current token"} {"_id": "q_3707", "text": "Init metadata information.\n\n Metadata is composed of basic information needed to identify\n where archived data came from and how it can be retrieved\n and built into Perceval items.\n\n :param: origin: identifier of the repository\n :param: backend_name: name of the backend\n :param: backend_version: version of the backend\n :param: category: category of the items fetched\n :param: backend_params: dict representation of the fetch parameters\n\n raises ArchiveError: when an error occurs initializing the metadata"} {"_id": "q_3708", "text": "Store a raw item in this archive.\n\n The method will store `data` content in this archive. The unique\n identifier for that item will be generated using the rest of the\n parameters.\n\n :param uri: request URI\n :param payload: request payload\n :param headers: request headers\n :param data: data to store in this archive\n\n :raises ArchiveError: when an error occurs storing the given data"} {"_id": "q_3709", "text": "Retrieve a raw item from the archive.\n\n The method will return the `data` content corresponding to the\n hashcode derived from the given parameters.\n\n :param uri: request URI\n :param payload: request payload\n :param headers: request headers\n\n :returns: the archived data\n\n :raises ArchiveError: when an error occurs retrieving data"} {"_id": "q_3710", "text": "Create a brand new archive.\n\n Call this method to create a new and empty archive. 
It will initialize\n the storage file in the path defined by `archive_path`.\n\n :param archive_path: absolute path where the archive file will be created\n\n :raises ArchiveError: when the archive file already exists"} {"_id": "q_3711", "text": "Generate a SHA1 based on the given arguments.\n\n Hashcodes created by this method will be used as unique identifiers\n for the raw items or resources stored by this archive.\n\n :param uri: URI to the resource\n :param payload: payload of the request needed to fetch the resource\n :param headers: headers of the request needed to fetch the resource\n\n :returns: a SHA1 hash code"} {"_id": "q_3712", "text": "Check whether the archive is valid or not.\n\n This method will check if tables were created and if they\n contain valid data."} {"_id": "q_3713", "text": "Fetch the number of rows in a table"} {"_id": "q_3714", "text": "Remove an archive.\n\n This method deletes from the filesystem the archive stored\n in `archive_path`.\n\n :param archive_path: path to the archive\n\n :raises ArchiveManagerError: when an error occurs removing the\n archive"} {"_id": "q_3715", "text": "Search archives.\n\n Get the archives which store data based on the given parameters.\n These parameters define what the origin was (`origin`), how data\n was fetched (`backend_name`) and data type ('category').\n Only those archives created on or after `archived_after` will be\n returned.\n\n The method returns a list with the file paths to those archives.\n The list is sorted by the date of creation of each archive.\n\n :param origin: data origin\n :param backend_name: backend used to fetch data\n :param category: type of the items fetched by the backend\n :param archived_after: get archives created on or after this date\n\n :returns: a list with archive names which match the search criteria"} {"_id": "q_3716", "text": "Search archives using filters."} {"_id": "q_3717", "text": "Check if filename is a compressed file supported by the tool.\n\n This 
function uses magic numbers (first four bytes) to determine\n the type of the file. Supported types are 'gz' and 'bz2'. When\n the filetype is not supported, the function returns `None`.\n\n :param filepath: path to the file\n\n :returns: 'gz' or 'bz2'; `None` if the type is not supported"} {"_id": "q_3718", "text": "Generate a months range.\n\n Generator of months starting on `from_date` until `to_date`. Each\n returned item is a tuple of two datetime objects like in (month, month+1).\n Thus, the result will follow the sequence:\n ((fd, fd+1), (fd+1, fd+2), ..., (td-2, td-1), (td-1, td))\n\n :param from_date: generate dates starting on this month\n :param to_date: generate dates until this month\n\n :result: a generator of months range"} {"_id": "q_3719", "text": "Convert an email message into a dictionary.\n\n This function transforms an `email.message.Message` object\n into a dictionary. Headers are stored as key:value pairs\n while the body of the message is stored inside `body` key.\n Body may have two other keys inside, 'plain', for plain body\n messages and 'html', for HTML encoded messages.\n\n The returned dictionary has the type `requests.structures.CaseInsensitiveDict`\n because the same headers with different case formats can appear in\n the same message.\n\n :param msg: email message of type `email.message.Message`\n\n :returns : dictionary of type `requests.structures.CaseInsensitiveDict`\n\n :raises ParseError: when an error occurs transforming the message\n to a dictionary"} {"_id": "q_3720", "text": "Remove control and invalid characters from an XML stream.\n\n Looks for invalid characters and substitutes them with whitespaces.\n This solution is based on these two posts: Olemis Lang's response\n on StackOverflow (http://stackoverflow.com/questions/1707890) and\n lawlesst's on GitHub Gist (https://gist.github.com/lawlesst/4110923),\n that is based on the previous answer.\n\n :param xml: XML stream\n\n :returns: a purged XML stream"} {"_id": "q_3721", 
"text": "Convert an XML stream into a dictionary.\n\n This function transforms an XML stream into a dictionary. The\n attributes are stored as single elements while child nodes are\n stored into lists. The text node is stored using the special\n key '__text__'.\n\n This code is based on Winston Ewert's solution to this problem.\n See http://codereview.stackexchange.com/questions/10400/convert-elementtree-to-dict\n for more info. The code was licensed as cc by-sa 3.0.\n\n :param raw_xml: XML stream\n\n :returns: a dict with the XML data\n\n :raises ParseError: raised when an error occurs parsing the given\n XML stream"} {"_id": "q_3722", "text": "Parse a Redmine issues JSON stream.\n\n The method parses a JSON stream and returns a list iterator.\n Each item is a dictionary that contains the issue parsed data.\n\n :param raw_json: JSON string to parse\n\n :returns: a generator of parsed issues"} {"_id": "q_3723", "text": "Get the information of the given issue.\n\n :param issue_id: issue identifier"} {"_id": "q_3724", "text": "Get the information of the given user.\n\n :param user_id: user identifier"} {"_id": "q_3725", "text": "Call to get a resource.\n\n :param method: resource to get\n :param params: dict with the HTTP parameters needed to get\n the given resource"} {"_id": "q_3726", "text": "Fetch data from a Docker Hub repository.\n\n The method retrieves, from a repository stored in Docker Hub,\n its data which includes number of pulls, stars, description,\n among other data.\n\n :param category: the category of items to fetch\n\n :returns: a generator of data"} {"_id": "q_3727", "text": "Fetch the Docker Hub items\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3728", "text": "Add extra information for custom fields.\n\n :param custom_fields: set of custom fields with the extra information\n :param fields: fields of the issue where to add the extra information\n\n :returns: 
a set of items with the extra information mapped"} {"_id": "q_3729", "text": "Retrieve all the items from a given date.\n\n :param url: endpoint API url\n :param from_date: obtain items updated since this date\n :param expand_fields: if True, it includes the expand fields in the payload"} {"_id": "q_3730", "text": "Retrieve all the issues from a given date.\n\n :param from_date: obtain issues updated since this date"} {"_id": "q_3731", "text": "Retrieve all the comments of a given issue.\n\n :param issue_id: ID of the issue"} {"_id": "q_3732", "text": "Retrieve all the fields available."} {"_id": "q_3733", "text": "Retrieve all the questions from a given date.\n\n :param from_date: obtain questions updated since this date"} {"_id": "q_3734", "text": "Returns the StackExchange argument parser."} {"_id": "q_3735", "text": "Fetch the pages\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3736", "text": "Get the max date in unixtime format from reviews."} {"_id": "q_3737", "text": "Retrieve recent pages from all namespaces starting from rccontinue."} {"_id": "q_3738", "text": "Fetch the messages the bot can read from the server.\n\n The method retrieves, from the Telegram server, the messages\n sent with an offset equal or greater than the given.\n\n A list of chats, groups and channels identifiers can be set\n using the parameter `chats`. When it is set, only those\n messages sent to any of these will be returned. An empty list\n will return no messages.\n\n :param category: the category of items to fetch\n :param offset: obtain messages from this offset\n :param chats: list of chat names used to filter messages\n\n :returns: a generator of messages\n\n :raises ValueError: when `chats` is an empty list"} {"_id": "q_3739", "text": "Check if a message can be filtered based on a list of chats.\n\n This method returns `True` when the message was sent to a chat\n of the given list. 
It also returns `True` when chats is `None`.\n\n :param message: Telegram message\n :param chats: list of chat, groups and channels identifiers\n\n :returns: `True` when the message can be filtered; otherwise,\n it returns `False`"} {"_id": "q_3740", "text": "Fetch the articles\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3741", "text": "NNTP metadata.\n\n This method takes items, overriding `metadata` decorator,\n to add extra information related to NNTP.\n\n :param item: an item fetched by a backend\n :param filter_classified: sets if classified fields were filtered"} {"_id": "q_3742", "text": "Parse an NNTP article.\n\n This method parses an NNTP article stored in a string object\n and returns a dictionary.\n\n :param raw_article: NNTP article string\n\n :returns: a dictionary of type `requests.structures.CaseInsensitiveDict`\n\n :raises ParseError: when an error is found parsing the article"} {"_id": "q_3743", "text": "Fetch NNTP data from the server or from the archive\n\n :param method: the name of the command to execute\n :param args: the arguments required by the command"} {"_id": "q_3744", "text": "Fetch article data\n\n :param article_id: id of the article to fetch"} {"_id": "q_3745", "text": "Fetch data from NNTP\n\n :param method: the name of the command to execute\n :param args: the arguments required by the command"} {"_id": "q_3746", "text": "Fetch data from the archive\n\n :param method: the name of the command to execute\n :param args: the arguments required by the command"} {"_id": "q_3747", "text": "Create an HTTP session and initialize the retry object."} {"_id": "q_3748", "text": "The fetching process sleeps until the rate limit is restored or\n raises a RateLimitError exception if sleep_for_rate flag is disabled."} {"_id": "q_3749", "text": "Parse a Supybot IRC stream.\n\n Returns an iterator of dicts. 
Each dict contains information\n about the date, type, nick and body of a single log entry.\n\n :returns: iterator of parsed lines\n\n :raises ParseError: when an invalid line is found parsing the given\n stream"} {"_id": "q_3750", "text": "Parse timestamp section"} {"_id": "q_3751", "text": "Parse message section"} {"_id": "q_3752", "text": "Fetch the topics\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3753", "text": "Parse a topics page stream.\n\n The result of the parsing process is a generator of tuples. Each\n tuple contains the identifier of the topic, the last date\n when it was updated and whether it is pinned or not.\n\n :param raw_json: JSON stream to parse\n\n :returns: a generator of parsed topics"} {"_id": "q_3754", "text": "Retrieve the post with `post_id` identifier.\n\n :param post_id: identifier of the post to retrieve"} {"_id": "q_3755", "text": "Fetch the tasks\n\n :param category: the category of items to fetch\n :param kwargs: backend arguments\n\n :returns: a generator of items"} {"_id": "q_3756", "text": "Parse a Phabricator tasks JSON stream.\n\n The method parses a JSON stream and returns a list iterator.\n Each item is a dictionary that contains the task parsed data.\n\n :param raw_json: JSON string to parse\n\n :returns: a generator of parsed tasks"} {"_id": "q_3757", "text": "Parse a Phabricator users JSON stream.\n\n The method parses a JSON stream and returns a list iterator.\n Each item is a dictionary that contains the user parsed data.\n\n :param raw_json: JSON string to parse\n\n :returns: a generator of parsed users"} {"_id": "q_3758", "text": "Retrieve tasks.\n\n :param from_date: retrieve tasks that were updated from that date;\n dates are converted to epoch time."} {"_id": "q_3759", "text": "Retrieve tasks transactions.\n\n :param phids: list of tasks identifiers"} {"_id": "q_3760", "text": "Extracts the identifier from a Confluence item.\n\n This 
identifier will be the mix of two fields because a\n historical content does not have any unique identifier.\n In this case, 'id' and 'version' values are combined because\n it should not be possible to have two equal version numbers\n for the same content. The value to return will follow the\n pattern: #v (i.e. 28979#v10)."} {"_id": "q_3761", "text": "Parse the result property, extracting the value\n and unit of measure"} {"_id": "q_3762", "text": "Return a capabilities url"} {"_id": "q_3763", "text": "Get and parse a WFS capabilities document, returning an\n instance of WFSCapabilitiesInfoset\n\n Parameters\n ----------\n url : string\n The URL to the WFS capabilities document.\n timeout : number\n A timeout value (in seconds) for the request."} {"_id": "q_3764", "text": "Parse a WFS capabilities document, returning an\n instance of WFSCapabilitiesInfoset\n\n string should be an XML capabilities document"} {"_id": "q_3765", "text": "helper function to build a WFS 3.0 URL\n\n @type path: string\n @param path: path of WFS URL\n\n @returns: fully constructed URL path"} {"_id": "q_3766", "text": "Construct fiona schema based on given elements\n\n :param list Element: list of elements\n :param dict nsmap: namespace map\n\n :return dict: schema"} {"_id": "q_3767", "text": "Get url for describefeaturetype request\n\n :return str: url"} {"_id": "q_3768", "text": "use ComplexDataInput with a reference to a document"} {"_id": "q_3769", "text": "A URL that can be used to open the page.\n\n The URL is formatted from :py:attr:`URL_TEMPLATE`, which is then\n appended to :py:attr:`base_url` unless the template results in an\n absolute URL.\n\n :return: URL that can be used to open the page.\n :rtype: str"} {"_id": "q_3770", "text": "Open the page.\n\n Navigates to :py:attr:`seed_url` and calls :py:func:`wait_for_page_to_load`.\n\n :return: The current page object.\n :rtype: :py:class:`Page`\n :raises: UsageError"} {"_id": "q_3771", "text": "Root element for the page region.\n\n 
Page regions should define a root element either by passing this on\n instantiation or by defining a :py:attr:`_root_locator` attribute. To\n reduce the chances of hitting :py:class:`~selenium.common.exceptions.StaleElementReferenceException`\n or similar you should use :py:attr:`_root_locator`, as this is looked up every\n time the :py:attr:`root` property is accessed."} {"_id": "q_3772", "text": "Finds an element on the page.\n\n :param strategy: Location strategy to use. See :py:class:`~selenium.webdriver.common.by.By` or :py:attr:`~pypom.splinter_driver.ALLOWED_STRATEGIES`.\n :param locator: Location of target element.\n :type strategy: str\n :type locator: str\n :return: An element.\n :rtype: :py:class:`~selenium.webdriver.remote.webelement.WebElement` or :py:class:`~splinter.driver.webdriver.WebDriverElement`"} {"_id": "q_3773", "text": "Finds elements on the page.\n\n :param strategy: Location strategy to use. See :py:class:`~selenium.webdriver.common.by.By` or :py:attr:`~pypom.splinter_driver.ALLOWED_STRATEGIES`.\n :param locator: Location of target elements.\n :type strategy: str\n :type locator: str\n :return: List of :py:class:`~selenium.webdriver.remote.webelement.WebElement` or :py:class:`~splinter.element_list.ElementList`\n :rtype: list"} {"_id": "q_3774", "text": "Checks whether an element is present.\n\n :param strategy: Location strategy to use. See :py:class:`~selenium.webdriver.common.by.By` or :py:attr:`~pypom.splinter_driver.ALLOWED_STRATEGIES`.\n :param locator: Location of target element.\n :type strategy: str\n :type locator: str\n :return: ``True`` if element is present, else ``False``.\n :rtype: bool"} {"_id": "q_3775", "text": "Checks whether an element is displayed.\n\n :param strategy: Location strategy to use. 
See :py:class:`~selenium.webdriver.common.by.By` or :py:attr:`~pypom.splinter_driver.ALLOWED_STRATEGIES`.\n :param locator: Location of target element.\n :type strategy: str\n :type locator: str\n :return: ``True`` if element is displayed, else ``False``.\n :rtype: bool"} {"_id": "q_3776", "text": "Register driver adapter used by page object"} {"_id": "q_3777", "text": "Get the list of TV genres.\n\n Args:\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3778", "text": "Get the cast and crew information for a specific movie id.\n\n Args:\n append_to_response: (optional) Comma separated, any movie method.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3779", "text": "Get the plot keywords for a specific movie id.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3780", "text": "Get the release dates and certification for a specific movie id.\n\n Args:\n append_to_response: (optional) Comma separated, any movie method.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3781", "text": "Get the translations for a specific movie id.\n\n Args:\n append_to_response: (optional) Comma separated, any movie method.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3782", "text": "Get the similar movies for a specific movie id.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n append_to_response: (optional) Comma separated, any movie method.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3783", "text": "Get the reviews for a particular movie id.\n\n Args:\n page: (optional) Minimum value of 1. 
Expected value is an integer.\n language: (optional) ISO 639-1 code.\n append_to_response: (optional) Comma separated, any movie method.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3784", "text": "Get the list of upcoming movies. This list refreshes every day.\n The maximum number of items this list will include is 100.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3785", "text": "Get the list of movies playing in theatres. This list refreshes\n every day. The maximum number of items this list will include is 100.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3786", "text": "Get the list of popular movies on The Movie Database. This list\n refreshes every day.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3787", "text": "Get the list of top rated movies. By default, this list will only\n include movies that have 10 or more votes. This list refreshes every\n day.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3788", "text": "This method lets users get the status of whether or not the movie has\n been rated or added to their favourite or watch lists. A valid session\n id is required.\n\n Args:\n session_id: see Authentication.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3789", "text": "This method lets users rate a movie. 
A valid session id or guest\n session id is required.\n\n Args:\n session_id: see Authentication.\n guest_session_id: see Authentication.\n value: Rating value.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3790", "text": "Get the movie credits for a specific person id.\n\n Args:\n language: (optional) ISO 639-1 code.\n append_to_response: (optional) Comma separated, any person method.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3791", "text": "Get the detailed information about a particular credit record. This is \n currently only supported with the new credit model found in TV. These \n ids can be found from any TV credit response as well as the tv_credits \n and combined_credits methods for people.\n\n The episodes object returns a list of episodes, and they are generally going \n to be guest stars. The season array will return a list of season \n numbers. Season credits are credits that were marked with the \n \"add to every season\" option in the editing interface and are \n assumed to be \"season regulars\".\n\n Args:\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3792", "text": "Discover TV shows by different types of data like average rating, \n number of votes, genres, the network they aired on and air dates.\n\n Args:\n page: (optional) Minimum 1, maximum 1000.\n language: (optional) ISO 639-1 code.\n sort_by: (optional) Available options are 'vote_average.desc', \n 'vote_average.asc', 'first_air_date.desc', \n 'first_air_date.asc', 'popularity.desc', 'popularity.asc'\n first_air_year: (optional) Filter the results release dates to \n matches that include this value. Expected value \n is a year.\n vote_count.gte or vote_count_gte: (optional) Only include TV shows \n that are equal to,\n or have a vote count higher than this value. 
Expected\n value is an integer.\n vote_average.gte or vote_average_gte: (optional) Only include TV \n shows that are equal \n to, or have a higher average rating than this \n value. Expected value is a float.\n with_genres: (optional) Only include TV shows with the specified \n genres. Expected value is an integer (the id of a \n genre). Multiple values can be specified. Comma \n separated indicates an 'AND' query, while a \n pipe (|) separated value indicates an 'OR'.\n with_networks: (optional) Filter TV shows to include a specific \n network. Expected value is an integer (the id of a\n network). They can be comma separated to indicate an\n 'AND' query.\n first_air_date.gte or first_air_date_gte: (optional) The minimum \n release date to include. \n Expected format is 'YYYY-MM-DD'.\n first_air_date.lte or first_air_date_lte: (optional) The maximum \n release date to include. \n Expected format is 'YYYY-MM-DD'.\n \n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3793", "text": "Get the system wide configuration info.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3794", "text": "Get the list of supported certifications for movies.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3795", "text": "Get the basic information for an account.\n\n Call this method first, before calling other Account methods.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3796", "text": "Generate a session id for user based authentication.\n\n A session id is required in order to use any of the write methods.\n\n Args:\n request_token: The token you generated for the user to approve.\n The token needs to be approved before being\n used here.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3797", "text": "Generate a guest session id.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": 
"q_3798", "text": "Get a list of rated movies for a specific guest session id.\n\n Args:\n page: (optional) Minimum 1, maximum 1000.\n sort_by: (optional) 'created_at.asc' | 'created_at.desc'\n language: (optional) ISO 639-1 code.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3799", "text": "Delete movies from a list that the user created.\n\n A valid session id is required.\n\n Args:\n media_id: A movie id.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3800", "text": "Get the similar TV series for a specific TV series id.\n\n Args:\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n append_to_response: (optional) Comma separated, any TV method.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3801", "text": "Get the list of TV shows that are currently on the air. This query\n looks for any TV show that has an episode with an air date in the\n next 7 days.\n\n Args:\n page: (optional) Minimum 1, maximum 1000.\n language: (optional) ISO 639 code.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3802", "text": "Get the primary information about a TV season by its season number.\n\n Args:\n language: (optional) ISO 639 code.\n append_to_response: (optional) Comma separated, any TV series\n method.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3803", "text": "Get the external ids that we have stored for a TV season by season\n number.\n\n Args:\n language: (optional) ISO 639 code.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3804", "text": "Get the primary information about a TV episode by combination of a\n season and episode number.\n\n Args:\n language: (optional) ISO 639 code.\n append_to_response: (optional) Comma separated, any TV series\n method.\n\n Returns:\n A dict 
representation of the JSON returned from the API."} {"_id": "q_3805", "text": "Get the TV episode credits by combination of season and episode number.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3806", "text": "Get the external ids for a TV episode by combination of a season and\n episode number.\n\n Args:\n language: (optional) ISO 639 code.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3807", "text": "Search for movies by title.\n\n Args:\n query: CGI escaped string.\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n include_adult: (optional) Toggle the inclusion of adult titles. \n Expected value is True or False.\n year: (optional) Filter the results release dates to matches that \n include this value.\n primary_release_year: (optional) Filter the results so that only \n the primary release dates have this value.\n search_type: (optional) By default, the search type is 'phrase'. \n This is almost guaranteed the option you will want. \n It's a great all purpose search type and by far the \n most tuned for every day querying. For those wanting \n more of an \"autocomplete\" type search, set this \n option to 'ngram'.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3808", "text": "Search for people by name.\n\n Args:\n query: CGI escaped string.\n page: (optional) Minimum value of 1. Expected value is an integer.\n include_adult: (optional) Toggle the inclusion of adult titles. \n Expected value is True or False.\n search_type: (optional) By default, the search type is 'phrase'. \n This is almost guaranteed the option you will want. \n It's a great all purpose search type and by far the \n most tuned for every day querying. 
For those wanting \n more of an \"autocomplete\" type search, set this \n option to 'ngram'.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3809", "text": "Search for companies by name.\n\n Args:\n query: CGI escaped string.\n page: (optional) Minimum value of 1. Expected value is an integer.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3810", "text": "Search the movie, tv show and person collections with a single query.\n\n Args:\n query: CGI escaped string.\n page: (optional) Minimum value of 1. Expected value is an integer.\n language: (optional) ISO 639-1 code.\n include_adult: (optional) Toggle the inclusion of adult titles.\n Expected value is True or False.\n\n Returns:\n A dict representation of the JSON returned from the API."} {"_id": "q_3811", "text": "r'-?\\d+"} {"_id": "q_3812", "text": "r'\\}"} {"_id": "q_3813", "text": "r'<<\\S+\\r?\\n"} {"_id": "q_3814", "text": "Initialize the parse table at install time"} {"_id": "q_3815", "text": "Add another row of data from a test suite"} {"_id": "q_3816", "text": "Return an instance of SuiteFile, ResourceFile, SuiteFolder\n\n Exactly which is returned depends on whether it's a file or\n folder, and if a file, the contents of the file. If there is a\n testcase table, this will return an instance of SuiteFile,\n otherwise it will return an instance of ResourceFile."} {"_id": "q_3817", "text": "The general idea is to do a quick parse, creating a list of\n tables. Each table is nothing more than a list of rows, with\n each row being a list of cells. Additional parsing such as\n combining rows into statements is done on demand. This first\n pass is solely to read in the plain text and organize it by table."} {"_id": "q_3818", "text": "Return 'suite' or 'resource' or None\n\n This will return 'suite' if a testcase table is found;\n It will return 'resource' if at least one robot table\n is found. 
If no tables are found it will return None"} {"_id": "q_3819", "text": "Generator which returns all keywords in the suite"} {"_id": "q_3820", "text": "Regurgitate the tables and rows"} {"_id": "q_3821", "text": "Generator which returns all of the statements in all of the variables tables"} {"_id": "q_3822", "text": "The idea is, we recognize when we have a new testcase by \n checking the first cell. If it's not empty and not a comment, \n we have a new test case."} {"_id": "q_3823", "text": "Parse command line arguments, and run rflint"} {"_id": "q_3824", "text": "Report a rule violation"} {"_id": "q_3825", "text": "Returns a list of rules of a given class\n \n Rules are treated as singletons - we only instantiate each\n rule once."} {"_id": "q_3826", "text": "Import the given rule file"} {"_id": "q_3827", "text": "Handle the parsing of command line arguments."} {"_id": "q_3828", "text": "Creates a customized Draft4ExtendedValidator.\n\n :param spec_resolver: resolver for the spec\n :type spec_resolver: :class:`jsonschema.RefResolver`"} {"_id": "q_3829", "text": "While yaml supports integer keys, these are not valid in\n json, and will break jsonschema. This method coerces all keys\n to strings."} {"_id": "q_3830", "text": "Open a file, read it and return its contents."} {"_id": "q_3831", "text": "Takes a list of reference sentences for a single segment\n and returns an object that encapsulates everything that BLEU\n needs to know about them."} {"_id": "q_3832", "text": "Takes a reference sentence for a single segment\n and returns an object that encapsulates everything that BLEU\n needs to know about it. Also provides a set because bleualign wants it"} {"_id": "q_3833", "text": "Creates the sentence alignment of two texts.\n\n Texts can consist of several blocks. Block boundaries cannot be crossed by sentence \n alignment links. 
\n\n Each block consists of a list that contains the lengths (in characters) of the sentences\n in this block.\n \n @param source_blocks: The list of blocks in the source text.\n @param target_blocks: The list of blocks in the target text.\n @param params: the sentence alignment parameters.\n\n @returns: A list of sentence alignment lists"} {"_id": "q_3834", "text": "Creates a path to model files\n model_path - string"} {"_id": "q_3835", "text": "Strips illegal characters from a string. Used to sanitize input essays.\n Removes all non-punctuation, digit, or letter characters.\n Returns sanitized string.\n string - string"} {"_id": "q_3836", "text": "Uses aspell to spell correct an input string.\n Requires aspell to be installed and added to the path.\n Returns the spell corrected string if aspell is found, original string if not.\n string - string"} {"_id": "q_3837", "text": "Makes a list unique"} {"_id": "q_3838", "text": "Generates a count of the number of times each unique item appears in a list"} {"_id": "q_3839", "text": "Given an input string, part of speech tags the string, then generates a list of\n ngrams that appear in the string.\n Used to define grammatically correct part of speech tag sequences.\n Returns a list of part of speech tag sequences."} {"_id": "q_3840", "text": "Generates predictions on a novel data array using a fit classifier\n clf is a classifier that has already been fit\n arr is a data array identical in dimension to the array clf was trained on\n Returns the array of predictions."} {"_id": "q_3841", "text": "Calculates the average value of a list of numbers\n Returns a float"} {"_id": "q_3842", "text": "Calculates kappa correlation between rater_a and rater_b.\n Kappa measures how well 2 quantities vary together.\n rater_a is a list of rater a scores\n rater_b is a list of rater b scores\n min_rating is an optional argument describing the minimum rating possible on the data set\n max_rating is an optional argument describing the 
maximum rating possible on the data set\n Returns a float corresponding to the kappa correlation"} {"_id": "q_3843", "text": "Initializes dictionaries from an essay set object\n Dictionaries must be initialized prior to using this to extract features\n e_set is an input essay set\n returns a confirmation of initialization"} {"_id": "q_3844", "text": "Gets a set of grammatically correct part of speech sequences from an input file called essaycorpus.txt\n Returns the set and caches the file"} {"_id": "q_3845", "text": "Generates length based features from an essay set\n Generally an internal function called by gen_feats\n Returns an array of length features\n e_set - EssaySet object"} {"_id": "q_3846", "text": "Generates bag of words features from an input essay set and trained FeatureExtractor\n Generally called by gen_feats\n Returns an array of features\n e_set - EssaySet object"} {"_id": "q_3847", "text": "Generates bag of words, length, and prompt features from an essay set object\n returns an array of features\n e_set - EssaySet object"} {"_id": "q_3848", "text": "Gets two classifiers for each type of algorithm, and returns them. 
First for predicting, second for cv error.\n type - one of util_functions.AlgorithmTypes"} {"_id": "q_3849", "text": "Extracts features and generates predictors based on a given predictor set\n predictor_set - a PredictorSet object that has been initialized with data\n type - one of util_functions.AlgorithmType"} {"_id": "q_3850", "text": "Function that creates essay set, extracts features, and writes out model\n See above functions for argument descriptions"} {"_id": "q_3851", "text": "Initialize dictionaries with the textual inputs in the PredictorSet object\n p_set - PredictorSet object that has had data fed in"} {"_id": "q_3852", "text": "r\"\"\"Get descriptors in module.\n\n Parameters:\n mdl(module): module to search\n submodule(bool): search recursively\n\n Returns:\n Iterator[Descriptor]"} {"_id": "q_3853", "text": "Register Descriptors from json descriptor objects.\n\n Parameters:\n obj(list or dict): descriptors to register"} {"_id": "q_3854", "text": "r\"\"\"Register descriptors.\n\n Descriptor-like:\n * Descriptor instance: self\n * Descriptor class: use Descriptor.preset() method\n * module: use Descriptor-likes in module\n * Iterable: use Descriptor-likes in Iterable\n\n Parameters:\n desc(Descriptor-like): descriptors to register\n version(str): version\n ignore_3D(bool): ignore 3D descriptors"} {"_id": "q_3855", "text": "Output message.\n\n Parameters:\n s(str): message to output\n file(file-like): output to\n end(str): end mark of message\n\n Return:\n None"} {"_id": "q_3856", "text": "r\"\"\"Check calculatable descriptor class or not.\n\n Returns:\n bool"} {"_id": "q_3857", "text": "r\"\"\"Calculate atomic surface area.\n\n :type i: int\n :param i: atom index\n\n :rtype: float"} {"_id": "q_3858", "text": "r\"\"\"Construct SurfaceArea from rdkit Mol type.\n\n :type mol: rdkit.Chem.Mol\n :param mol: input molecule\n\n :type conformer: int\n :param conformer: conformer id\n\n :type solvent_radius: float\n :param solvent_radius: solvent radius\n\n 
:type level: int\n :param level: mesh level\n\n :rtype: SurfaceArea"} {"_id": "q_3859", "text": "Create Descriptor instance from json dict.\n\n Parameters:\n obj(dict): descriptor dict\n\n Returns:\n Descriptor: descriptor"} {"_id": "q_3860", "text": "r\"\"\"Delete missing value.\n\n Returns:\n Result"} {"_id": "q_3861", "text": "r\"\"\"Get items.\n\n Returns:\n Iterable[(Descriptor, value)]"} {"_id": "q_3862", "text": "Reports an OrderError. Error message will state that\n first_tag came before second_tag."} {"_id": "q_3863", "text": "Write the fields of a single review to out."} {"_id": "q_3864", "text": "Write the fields of a single annotation to out."} {"_id": "q_3865", "text": "Write a file's fields to out."} {"_id": "q_3866", "text": "Write a package's fields to out."} {"_id": "q_3867", "text": "Write an SPDX tag value document.\n - document - spdx.document instance.\n - out - file like object that will be written to.\n Optionally `validate` the document before writing and raise\n InvalidDocumentError if document.validate returns False."} {"_id": "q_3868", "text": "Return an spdx.checksum.Algorithm instance representing the SHA1\n checksum or None if it does not match CHECKSUM_RE."} {"_id": "q_3869", "text": "Set the document version.\n Raise SPDXValueError if malformed value, CardinalityError\n if already defined"} {"_id": "q_3870", "text": "Sets the document name.\n Raises CardinalityError if already defined."} {"_id": "q_3871", "text": "Sets the document SPDX Identifier.\n Raises SPDXValueError if malformed value, CardinalityError\n if already defined."} {"_id": "q_3872", "text": "Sets document comment, Raises CardinalityError if\n comment already set.\n Raises SPDXValueError if comment is not free form text."} {"_id": "q_3873", "text": "Sets the document namespace.\n Raise SPDXValueError if malformed value, CardinalityError\n if already defined."} {"_id": "q_3874", "text": "Sets the `spdx_document_uri` attribute of the `ExternalDocumentRef`\n object."} {"_id": 
"q_3875", "text": "Builds a tool object out of a string representation.\n Returns built tool. Raises SPDXValueError if failed to extract\n tool name or name is malformed"} {"_id": "q_3876", "text": "Adds a creator to the document's creation info.\n Returns true if creator is valid.\n Creator must be built by an EntityBuilder.\n Raises SPDXValueError if not a creator type."} {"_id": "q_3877", "text": "Sets created date, Raises CardinalityError if\n created date already set.\n Raises SPDXValueError if created is not a date."} {"_id": "q_3878", "text": "Sets the license list version, Raises CardinalityError if\n already set, SPDXValueError if incorrect value."} {"_id": "q_3879", "text": "Resets builder state to allow building new creation info."} {"_id": "q_3880", "text": "Adds a reviewer to the SPDX Document.\n Reviewer is an entity created by an EntityBuilder.\n Raises SPDXValueError if not a valid reviewer type."} {"_id": "q_3881", "text": "Sets the review date. Raises CardinalityError if\n already set. OrderError if no reviewer defined before.\n Raises SPDXValueError if invalid reviewed value."} {"_id": "q_3882", "text": "Adds an annotator to the SPDX Document.\n Annotator is an entity created by an EntityBuilder.\n Raises SPDXValueError if not a valid annotator type."} {"_id": "q_3883", "text": "Sets the annotation date. Raises CardinalityError if\n already set. OrderError if no annotator defined before.\n Raises SPDXValueError if invalid value."} {"_id": "q_3884", "text": "Sets the annotation comment. Raises CardinalityError if\n already set. OrderError if no annotator defined before.\n Raises SPDXValueError if comment is not free form text."} {"_id": "q_3885", "text": "Sets the annotation type. Raises CardinalityError if\n already set. 
OrderError if no annotator defined before.\n Raises SPDXValueError if invalid value."} {"_id": "q_3886", "text": "Resets the builder's state in order to build new packages."} {"_id": "q_3887", "text": "Creates a package for the SPDX Document.\n name - any string.\n Raises CardinalityError if package already defined."} {"_id": "q_3888", "text": "Sets the package file name, if not already set.\n name - Any string.\n Raises CardinalityError if already has a file_name.\n Raises OrderError if no package previously defined."} {"_id": "q_3889", "text": "Sets the package supplier, if not already set.\n entity - Organization, Person or NoAssert.\n Raises CardinalityError if already has a supplier.\n Raises OrderError if no package previously defined."} {"_id": "q_3890", "text": "Sets the package originator, if not already set.\n entity - Organization, Person or NoAssert.\n Raises CardinalityError if already has an originator.\n Raises OrderError if no package previously defined."} {"_id": "q_3891", "text": "Sets the package download location, if not already set.\n location - A string\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined."} {"_id": "q_3892", "text": "Sets the package homepage location if not already set.\n location - A string or None or NoAssert.\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined.\n Raises SPDXValueError if location has incorrect value."} {"_id": "q_3893", "text": "Sets the package's source information, if not already set.\n text - Free form text.\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined.\n SPDXValueError if text is not free form text."} {"_id": "q_3894", "text": "Sets the package's concluded licenses.\n licenses - License info.\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined.\n Raises SPDXValueError if data malformed."} {"_id": "q_3895", 
"text": "Adds a license from a file to the package.\n Raises SPDXValueError if data malformed.\n Raises OrderError if no package previously defined."} {"_id": "q_3896", "text": "Sets the package's declared license.\n Raises SPDXValueError if data malformed.\n Raises OrderError if no package previously defined.\n Raises CardinalityError if already set."} {"_id": "q_3897", "text": "Sets the package's license comment.\n Raises OrderError if no package previously defined.\n Raises CardinalityError if already set.\n Raises SPDXValueError if text is not free form text."} {"_id": "q_3898", "text": "Raises OrderError if no package defined."} {"_id": "q_3899", "text": "Raises OrderError if no package or no file defined.\n Raises CardinalityError if more than one comment set.\n Raises SPDXValueError if text is not free form text."} {"_id": "q_3900", "text": "Raises OrderError if no package or file defined.\n Raises CardinalityError if more than one chksum set."} {"_id": "q_3901", "text": "Raises OrderError if no package or file defined.\n Raises SPDXValueError if malformed value."} {"_id": "q_3902", "text": "Raises OrderError if no package or file defined.\n Raises SPDXValueError if text is not free form text.\n Raises CardinalityError if more than one per file."} {"_id": "q_3903", "text": "Raises OrderError if no package or file defined.\n Raises SPDXValueError if not free form text or NONE or NO_ASSERT.\n Raises CardinalityError if more than one."} {"_id": "q_3904", "text": "Raises OrderError if no package or file defined.\n Raises SPDXValueError if not free form text.\n Raises CardinalityError if more than one."} {"_id": "q_3905", "text": "Sets a file name, uri or home artificat.\n Raises OrderError if no package or file defined."} {"_id": "q_3906", "text": "Resets the builder's state to enable building new files."} {"_id": "q_3907", "text": "Sets license extracted text.\n Raises SPDXValueError if text is not free form text.\n Raises OrderError if no license ID defined."} 
{"_id": "q_3908", "text": "Sets license name.\n Raises SPDXValueError if name is not str or utils.NoAssert\n Raises OrderError if no license id defined."} {"_id": "q_3909", "text": "Sets license comment.\n Raises SPDXValueError if comment is not free form text.\n Raises OrderError if no license ID defined."} {"_id": "q_3910", "text": "Adds a license cross reference.\n Raises OrderError if no License ID defined."} {"_id": "q_3911", "text": "Return an ISO-8601 representation of a datetime object."} {"_id": "q_3912", "text": "Must be called before parse."} {"_id": "q_3913", "text": "Parses a license list and returns a License or None if it failed."} {"_id": "q_3914", "text": "Write an SPDX RDF document.\n - document - spdx.document instance.\n - out - file like object that will be written to.\n Optionally `validate` the document before writing and raise\n InvalidDocumentError if document.validate returns False."} {"_id": "q_3915", "text": "Return a node representing spdx.checksum."} {"_id": "q_3916", "text": "Traverse conjunctions and disjunctions like trees and return a\n set of all licenses in it as nodes."} {"_id": "q_3917", "text": "Return a node representing a conjunction of licenses."} {"_id": "q_3918", "text": "Handle dependencies for a single file.\n - doc_file - instance of spdx.file.File."} {"_id": "q_3919", "text": "Return a review node."} {"_id": "q_3920", "text": "Return an annotation node."} {"_id": "q_3921", "text": "Return a node representing package verification code."} {"_id": "q_3922", "text": "Write package optional fields."} {"_id": "q_3923", "text": "Return a Node representing the package.\n Files must have been added to the graph before this method is called."} {"_id": "q_3924", "text": "Return node representing pkg_file\n pkg_file should be instance of spdx.file."} {"_id": "q_3925", "text": "Add hasFile triples to graph.\n Must be called after files have been added."} {"_id": "q_3926", "text": "Add and return the root document node to graph."} 
{"_id": "q_3927", "text": "Returns True if the fields are valid according to the SPDX standard.\n Appends user friendly messages to the messages parameter."} {"_id": "q_3928", "text": "Checks if value is a special SPDX value such as\n NONE, NOASSERTION or UNKNOWN if so returns proper model.\n else returns value"} {"_id": "q_3929", "text": "Return license comment or None."} {"_id": "q_3930", "text": "Return an ExtractedLicense object to represent a license object.\n But does not add it to the SPDXDocument model.\n Return None if failed."} {"_id": "q_3931", "text": "Build and return an ExtractedLicense or None.\n Note that this function adds the license to the document."} {"_id": "q_3932", "text": "Returns first found fileName property or None if not found."} {"_id": "q_3933", "text": "Sets file dependencies."} {"_id": "q_3934", "text": "Parse all file contributors and adds them to the model."} {"_id": "q_3935", "text": "Sets file notice text."} {"_id": "q_3936", "text": "Sets file comment text."} {"_id": "q_3937", "text": "Sets file license comment."} {"_id": "q_3938", "text": "Sets file license information."} {"_id": "q_3939", "text": "Sets file type."} {"_id": "q_3940", "text": "Sets file checksum. 
Assumes SHA1 algorithm without checking."} {"_id": "q_3941", "text": "Sets file licenses concluded."} {"_id": "q_3942", "text": "Returns review date or None if not found.\n Reports error on failure.\n Note does not check value format."} {"_id": "q_3943", "text": "Returns annotation comment or None if found none or more than one.\n Reports errors."} {"_id": "q_3944", "text": "Returns annotation date or None if not found.\n Reports error on failure.\n Note does not check value format."} {"_id": "q_3945", "text": "Parse creators, created and comment."} {"_id": "q_3946", "text": "Parses the External Document ID, SPDX Document URI and Checksum."} {"_id": "q_3947", "text": "Validate the package fields.\n Append user friendly error messages to the `messages` list."} {"_id": "q_3948", "text": "Helper for validate_mandatory_str_field and\n validate_optional_str_fields"} {"_id": "q_3949", "text": "Sets document comment, Raises CardinalityError if\n comment already set."} {"_id": "q_3950", "text": "Sets the external document reference's check sum, if not already set.\n chk_sum - The checksum value in the form of a string."} {"_id": "q_3951", "text": "Sets the package's source information, if not already set.\n text - Free form text.\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined."} {"_id": "q_3952", "text": "Sets the package's verification code excluded file.\n Raises OrderError if no package previously defined."} {"_id": "q_3953", "text": "Sets the package summary.\n Raises CardinalityError if summary already set.\n Raises OrderError if no package previously defined."} {"_id": "q_3954", "text": "Sets the file check sum, if not already set.\n chk_sum - A string\n Raises CardinalityError if already defined.\n Raises OrderError if no package previously defined."} {"_id": "q_3955", "text": "Raises OrderError if no package or file defined.\n Raises CardinalityError if more than one per file."} {"_id": "q_3956", "text": "Raises 
OrderError if no package or no file defined.\n Raises CardinalityError if more than one comment set."} {"_id": "q_3957", "text": "Sets the annotation comment. Raises CardinalityError if\n already set. OrderError if no annotator defined before."} {"_id": "q_3958", "text": "Sets the annotation type. Raises CardinalityError if\n already set. OrderError if no annotator defined before."} {"_id": "q_3959", "text": "Validate all fields of the document and update the\n messages list with user friendly error messages for display."} {"_id": "q_3960", "text": "Decorator to synchronize function."} {"_id": "q_3961", "text": "Program message output."} {"_id": "q_3962", "text": "Utility function to handle runtime failures gracefully.\n Show concise information if possible, then terminate program."} {"_id": "q_3963", "text": "Clean up temp files"} {"_id": "q_3964", "text": "Get the fixed part of the path without wildcard"} {"_id": "q_3965", "text": "Given an API name, list all legal parameters using boto3 service model."} {"_id": "q_3966", "text": "Combine existing parameters with extra options supplied from command line\n options. 
Carefully merge special type of parameter if needed."} {"_id": "q_3967", "text": "Terminate all threads by deleting the queue and forcing the child threads\n to quit."} {"_id": "q_3968", "text": "Utility function to add a single task into task queue"} {"_id": "q_3969", "text": "Utility function to wait all tasks to complete"} {"_id": "q_3970", "text": "Retrieve S3 access keys from the environment, or None if not present."} {"_id": "q_3971", "text": "Retrieve S3 access keys from the command line, or None if not present."} {"_id": "q_3972", "text": "Retrieve S3 access key settings from s3cmd's config file, if present; otherwise return None."} {"_id": "q_3973", "text": "Initialize s3 access keys from environment variable or s3cfg config file."} {"_id": "q_3974", "text": "Connect to S3 storage"} {"_id": "q_3975", "text": "List all buckets"} {"_id": "q_3976", "text": "Walk through an S3 directory. This function initiates a walk with a basedir.\n It also supports multiple wildcards."} {"_id": "q_3977", "text": "Walk through local directories from root basedir"} {"_id": "q_3978", "text": "Get privileges from metadata of the source in s3, and apply them to target"} {"_id": "q_3979", "text": "Download a single file or a directory by adding a task into queue"} {"_id": "q_3980", "text": "Download files.\n This function can handle multiple files if source S3 URL has wildcard\n characters. 
It also handles recursive mode by downloading all files and\n keeping the directory structure."} {"_id": "q_3981", "text": "Copy a single file or a directory by adding a task into queue"} {"_id": "q_3982", "text": "Sync directory to directory."} {"_id": "q_3983", "text": "Check MD5 for a local file and a remote file.\n Return True if they have the same md5 hash, otherwise False."} {"_id": "q_3984", "text": "Partially match a path and a filter_path with wildcards.\n This function will return True if this path partially matches a filter path.\n This is used for walking through directories with multiple level wildcard."} {"_id": "q_3985", "text": "Thread worker for s3walk.\n Recursively walk into all subdirectories if they still match the filter\n path partially."} {"_id": "q_3986", "text": "Thread worker for upload operation."} {"_id": "q_3987", "text": "Verify the file size of the downloaded file."} {"_id": "q_3988", "text": "Write local file chunk"} {"_id": "q_3989", "text": "Copy a single file from source to target using boto S3 library."} {"_id": "q_3990", "text": "Main entry to handle commands. Dispatch to individual command handler."} {"_id": "q_3991", "text": "Validate input parameters with given format.\n This function also checks for wildcards for recursive mode."} {"_id": "q_3992", "text": "Pretty print the result of s3walk. 
Here we calculate the maximum width\n of each column and align them."} {"_id": "q_3993", "text": "Handler for mb command"} {"_id": "q_3994", "text": "Handler for put command"} {"_id": "q_3995", "text": "Handler for get command"} {"_id": "q_3996", "text": "Handler for dsync command."} {"_id": "q_3997", "text": "Handler for cp command"} {"_id": "q_3998", "text": "Handler for mv command"} {"_id": "q_3999", "text": "Handler for size command"} {"_id": "q_4000", "text": "Handler for total_size command"} {"_id": "q_4001", "text": "Search for date information in the string"} {"_id": "q_4002", "text": "Search for time information in the string"} {"_id": "q_4003", "text": "includes the contents of a file on disk.\n takes a filename"} {"_id": "q_4004", "text": "pipes the output of a program"} {"_id": "q_4005", "text": "unescapes html entities. the opposite of escape."} {"_id": "q_4006", "text": "Set attributes on the current active tag context"} {"_id": "q_4007", "text": "Add or update the value of an attribute."} {"_id": "q_4008", "text": "Recursively searches children for tags of a certain\n type with matching attributes."} {"_id": "q_4009", "text": "Normalize attribute names for shorthand and work arounds for limitations\n in Python's syntax"} {"_id": "q_4010", "text": "This will call `clean_attribute` on the attribute and also allows for the\n creation of boolean attributes.\n\n Ex. 
input(selected=True) is equivalent to input(selected=\"selected\")"} {"_id": "q_4011", "text": "Discover gateways using multicast"} {"_id": "q_4012", "text": "Get data from gateway"} {"_id": "q_4013", "text": "Push data broadcasted from gateway to device"} {"_id": "q_4014", "text": "Get key using token from gateway"} {"_id": "q_4015", "text": "Creates a registration message to identify the worker to the interchange"} {"_id": "q_4016", "text": "Send heartbeat to the incoming task queue"} {"_id": "q_4017", "text": "Receives a result from the MPI worker pool and sends it out via 0mq\n\n Returns:\n --------\n result: task result from the workers"} {"_id": "q_4018", "text": "Pulls tasks from the incoming tasks 0mq pipe onto the internal\n pending task queue\n\n Parameters:\n -----------\n kill_event : threading.Event\n Event to let the thread know when it is time to die."} {"_id": "q_4019", "text": "Start the Manager process.\n\n The worker loops on this:\n\n 1. If the last message sent was older than heartbeat period we send a heartbeat\n 2.\n\n\n TODO: Move task receiving to a thread"} {"_id": "q_4020", "text": "Decorator function to launch a function as a separate process"} {"_id": "q_4021", "text": "Send UDP messages to usage tracker asynchronously\n\n This multiprocessing based messenger was written to overcome the limitations\n of signalling/terminating a thread that is blocked on a system call. This\n messenger is created as a separate process, and initialized with 2 queues,\n to_send to receive messages to be sent to the internet.\n\n Args:\n - domain_name (str) : Domain name string\n - UDP_IP (str) : IP address YYY.YYY.YYY.YYY\n - UDP_PORT (int) : UDP port to send out on\n - sock_timeout (int) : Socket timeout\n - to_send (multiprocessing.Queue) : Queue of outgoing messages to internet"} {"_id": "q_4022", "text": "By default tracking is enabled.\n\n If Test mode is set via env variable PARSL_TESTING, a test flag is set\n\n Tracking is disabled if :\n 1. 
config[\"globals\"][\"usageTracking\"] is set to False (Bool)\n 2. Environment variable PARSL_TRACKING is set to false (case insensitive)"} {"_id": "q_4023", "text": "Collect preliminary run info at the start of the DFK.\n\n Returns :\n - Message dict dumped as json string, ready for UDP"} {"_id": "q_4024", "text": "Collect the final run information at the time of DFK cleanup.\n\n Returns:\n - Message dict dumped as json string, ready for UDP"} {"_id": "q_4025", "text": "Send UDP message."} {"_id": "q_4026", "text": "Send message over UDP.\n\n If tracking is disables, the bytes_sent will always be set to -1\n\n Returns:\n (bytes_sent, time_taken)"} {"_id": "q_4027", "text": "This function is called as a callback when an AppFuture\n is in its final state.\n\n It will trigger post-app processing such as checkpointing\n and stageout.\n\n Args:\n task_id (string) : Task id\n future (Future) : The relevant app future (which should be\n consistent with the task structure 'app_fu' entry\n\n KWargs:\n memo_cbk(Bool) : Indicates that the call is coming from a memo update,\n that does not require additional memo updates."} {"_id": "q_4028", "text": "Handle the actual submission of the task to the executor layer.\n\n If the app task has the executors attributes not set (default=='all')\n the task is launched on a randomly selected executor from the\n list of executors. 
This behavior could later be updated to support\n binding to executors based on user specified criteria.\n\n If the app task specifies a particular set of executors, it will be\n targeted at those specific executors.\n\n Args:\n task_id (uuid string) : A uuid string that uniquely identifies the task\n executable (callable) : A callable object\n args (list of positional args)\n kwargs (arbitrary keyword arguments)\n\n\n Returns:\n Future that tracks the execution of the submitted executable"} {"_id": "q_4029", "text": "Count the number of unresolved futures on which a task depends.\n\n Args:\n - args (List[args]) : The list of args list to the fn\n - kwargs (Dict{kwargs}) : The dict of all kwargs passed to the fn\n\n Returns:\n - count, [list of dependencies]"} {"_id": "q_4030", "text": "Add task to the dataflow system.\n\n If the app task has the executors attributes not set (default=='all')\n the task will be launched on a randomly selected executor from the\n list of executors. If the app task specifies a particular set of\n executors, it will be targeted at the specified executors.\n\n >>> IF all deps are met:\n >>> send to the runnable queue and launch the task\n >>> ELSE:\n >>> post the task in the pending queue\n\n Args:\n - func : A function object\n - *args : Args to the function\n\n KWargs :\n - executors (list or string) : List of executors this call could go to.\n Default='all'\n - fn_hash (Str) : Hash of the function and inputs\n Default=None\n - cache (Bool) : To enable memoization or not\n - kwargs (dict) : Rest of the kwargs to the fn passed as dict.\n\n Returns:\n (AppFuture) [DataFutures,]"} {"_id": "q_4031", "text": "DataFlowKernel cleanup.\n\n This involves killing resources explicitly and sending die messages to IPP workers.\n\n If the executors are managed (created by the DFK), then we call scale_in on each of\n the executors and call executor.shutdown. 
Otherwise, we do nothing, and executor\n cleanup is left to the user."} {"_id": "q_4032", "text": "Load a checkpoint file into a lookup table.\n\n The data being loaded from the pickle file mostly contains input\n attributes of the task: func, args, kwargs, env...\n To simplify the check of whether the exact task has been completed\n in the checkpoint, we hash these input params and use it as the key\n for the memoized lookup table.\n\n Args:\n - checkpointDirs (list) : List of filepaths to checkpoints\n Eg. ['runinfo/001', 'runinfo/002']\n\n Returns:\n - memoized_lookup_table (dict)"} {"_id": "q_4033", "text": "Load checkpoints from the checkpoint files into a dictionary.\n\n The results are used to pre-populate the memoizer's lookup_table\n\n Kwargs:\n - checkpointDirs (list) : List of run folder to use as checkpoints\n Eg. ['runinfo/001', 'runinfo/002']\n\n Returns:\n - dict containing, hashed -> future mappings"} {"_id": "q_4034", "text": "Pull tasks from the incoming tasks 0mq pipe onto the internal\n pending task queue\n\n Parameters:\n -----------\n kill_event : threading.Event\n Event to let the thread know when it is time to die."} {"_id": "q_4035", "text": "Command server to run async command to the interchange"} {"_id": "q_4036", "text": "Return the DataManager of the currently loaded DataFlowKernel."} {"_id": "q_4037", "text": "Transport the file from the input source to the executor.\n\n This function returns a DataFuture.\n\n Args:\n - self\n - file (File) : file to stage in\n - executor (str) : an executor the file is going to be staged in to.\n If the executor argument is not specified for a file\n with 'globus' scheme, the file will be staged in to\n the first executor with the \"globus\" key in a config."} {"_id": "q_4038", "text": "Finds the checkpoints from all last runs.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. 
It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for the checkpointFiles parameter of DataFlowKernel\n constructor"} {"_id": "q_4039", "text": "Find the checkpoint from the last run, if one exists.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for checkpointFiles parameter of DataFlowKernel\n constructor, with 0 or 1 elements"} {"_id": "q_4040", "text": "Revert to using stdlib pickle.\n\n Reverts custom serialization enabled by use_dill|cloudpickle."} {"_id": "q_4041", "text": "Specify path to the ipcontroller-engine.json file.\n\n This file is stored in the ipython_dir/profile folders.\n\n Returns :\n - str, File path to engine file"} {"_id": "q_4042", "text": "Terminate the controller process and its child processes.\n\n Args:\n - None"} {"_id": "q_4043", "text": "Create a hash of the task and its inputs and check the lookup table for this hash.\n\n If present, the results are returned. 
The result is a tuple indicating whether a memo\n exists and the result, since a Null result is possible and could be confusing.\n This seems like a reasonable option without relying on a cache_miss exception.\n\n Args:\n - task(task) : task from the dfk.tasks table\n\n Returns:\n Tuple of the following:\n - present (Bool): Is this present in the memo_lookup_table\n - Result (Py Obj): Result of the function if present in table\n\n This call will also set task['hashsum'] to the unique hashsum for the func+inputs."} {"_id": "q_4044", "text": "Updates the memoization lookup table with the result from a task.\n\n Args:\n - task_id (int): Integer task id\n - task (dict) : A task dict from dfk.tasks\n - r (Result future): Result future\n\n A warning is issued when a hash collision occurs during the update.\n This is not likely."} {"_id": "q_4045", "text": "Extract buffers larger than a certain threshold."} {"_id": "q_4046", "text": "Restore extracted buffers."} {"_id": "q_4047", "text": "Serialize an object into a list of sendable buffers.\n\n Parameters\n ----------\n\n obj : object\n The object to be serialized\n buffer_threshold : int\n The threshold (in bytes) for pulling out data buffers\n to avoid pickling them.\n item_threshold : int\n The maximum number of items over which canning will iterate.\n Containers (lists, dicts) larger than this will be pickled without\n introspection.\n\n Returns\n -------\n [bufs] : list of buffers representing the serialized object."} {"_id": "q_4048", "text": "Reconstruct an object serialized by serialize_object from data buffers.\n\n Parameters\n ----------\n\n bufs : list of buffers/bytes\n\n g : globals to be used when uncanning\n\n Returns\n -------\n\n (newobj, bufs) : unpacked object, and the list of remaining unused buffers."} {"_id": "q_4049", "text": "Generate submit script and write it to a file.\n\n Args:\n - template (string) : The template string to be used for writing the submit script\n - script_filename (string) : 
Name of the submit script\n - job_name (string) : job name\n - configs (dict) : configs that get pushed into the template\n\n Returns:\n - True: on success\n\n Raises:\n SchedulerMissingArgs : If template is missing args\n ScriptPathError : Unable to write submit script out"} {"_id": "q_4050", "text": "Cancels the jobs specified by a list of job ids\n\n Args:\n job_ids : [ ...]\n\n Returns :\n [True/False...] : If the cancel operation fails the entire list will be False."} {"_id": "q_4051", "text": "Save information that must persist to a file.\n\n We do not want to create a new VPC and new identical security groups, so we save\n information about them in a file between runs."} {"_id": "q_4052", "text": "Create a session.\n\n First we look in self.key_file for a path to a json file with the\n credentials. The key file should have 'AWSAccessKeyId' and 'AWSSecretKey'.\n\n Next we look at self.profile for a profile name and try\n to use the Session call to automatically pick up the keys for the profile from\n the user default keys file ~/.aws/config.\n\n Finally, boto3 will look for the keys in environment variables:\n AWS_ACCESS_KEY_ID: The access key for your AWS account.\n AWS_SECRET_ACCESS_KEY: The secret key for your AWS account.\n AWS_SESSION_TOKEN: The session key for your AWS account.\n This is only needed when you are using temporary credentials.\n The AWS_SECURITY_TOKEN environment variable can also be used,\n but is only supported for backwards compatibility purposes.\n AWS_SESSION_TOKEN is supported by multiple AWS SDKs besides python."} {"_id": "q_4053", "text": "Start an instance in the VPC in the first available subnet.\n\n N instances will be started if nodes_per_block > 1.\n Not supported. 
We only do 1 node per block.\n\n Parameters\n ----------\n command : str\n Command string to execute on the node.\n job_name : str\n Name associated with the instances."} {"_id": "q_4054", "text": "Get states of all instances on EC2 which were started by this file."} {"_id": "q_4055", "text": "Submit the command onto a freshly instantiated AWS EC2 instance.\n\n Submit returns an ID that corresponds to the task that was just submitted.\n\n Parameters\n ----------\n command : str\n Command to be invoked on the remote side.\n blocksize : int\n Number of blocks requested.\n tasks_per_node : int (default=1)\n Number of command invocations to be launched per node\n job_name : str\n Prefix for the job name.\n\n Returns\n -------\n None or str\n If at capacity, None will be returned. Otherwise, the job identifier will be returned."} {"_id": "q_4056", "text": "Cancel the jobs specified by a list of job ids.\n\n Parameters\n ----------\n job_ids : list of str\n List of job identifiers\n\n Returns\n -------\n list of bool\n Each entry in the list will contain False if the operation fails. 
Otherwise, the entry will be True."} {"_id": "q_4057", "text": "Teardown the EC2 infrastructure.\n\n Terminate all EC2 instances, delete all subnets, delete security group, delete VPC,\n and reset all instance variables."} {"_id": "q_4058", "text": "Scale out the existing resources."} {"_id": "q_4059", "text": "Update the resource dictionary with job statuses."} {"_id": "q_4060", "text": "Scales out the number of active workers by 1.\n\n This method is notImplemented for threads and will raise the error if called.\n\n Parameters:\n blocks : int\n Number of blocks to be provisioned."} {"_id": "q_4061", "text": "Returns the status of the executor via probing the execution providers."} {"_id": "q_4062", "text": "Callback from executor future to update the parent.\n\n Args:\n - parent_fu (Future): Future returned by the executor along with callback\n\n Returns:\n - None\n\n Updates the super() with the result() or exception()"} {"_id": "q_4063", "text": "Cancels the resources identified by the job_ids provided by the user.\n\n Args:\n - job_ids (list): A list of job identifiers\n\n Returns:\n - A list of status from cancelling the job which can be True, False\n\n Raises:\n - ExecutionProviderException or its subclasses"} {"_id": "q_4064", "text": "This is a function that mocks the Swift-T side.\n\n It listens on the incoming_q for tasks and posts returns on the outgoing_q.\n\n Args:\n - incoming_q (Queue object) : The queue to listen on\n - outgoing_q (Queue object) : Queue to post results on\n\n The messages posted on the incoming_q will be of the form :\n\n .. code:: python\n\n {\n \"task_id\" : ,\n \"buffer\" : serialized buffer containing the fn, args and kwargs\n }\n\n If ``None`` is received, the runner will exit.\n\n Response messages should be of the form:\n\n .. 
code:: python\n\n {\n \"task_id\" : ,\n \"result\" : serialized buffer containing result\n \"exception\" : serialized exception object\n }\n\n On exiting the runner will post ``None`` to the outgoing_q"} {"_id": "q_4065", "text": "Shutdown method, to kill the threads and workers."} {"_id": "q_4066", "text": "Submits work to the outgoing_q.\n\n An external process listens on this\n queue for new work. This method is simply a pass-through and behaves like a\n submit call as described here `Python docs: `_\n\n Args:\n - func (callable) : Callable function\n - *args (list) : List of arbitrary positional arguments.\n\n Kwargs:\n - **kwargs (dict) : A dictionary of arbitrary keyword args for func.\n\n Returns:\n Future"} {"_id": "q_4067", "text": "Return the resolved filepath on the side where it is called from.\n\n The appropriate filepath will be returned when called from within\n an app running remotely as well as regular python on the client side.\n\n Args:\n - self\n Returns:\n - filepath (string)"} {"_id": "q_4068", "text": "The App decorator function.\n\n Args:\n - apptype (string) : Apptype can be bash|python\n\n Kwargs:\n - data_flow_kernel (DataFlowKernel): The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for\n managing this app. This can be omitted only\n after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`.\n - walltime (int) : Walltime for app in seconds,\n default=60\n - executors (str|list) : Labels of the executors that this app can execute over. Default is 'all'.\n - cache (Bool) : Enable caching of the app call\n default=False\n\n Returns:\n A PythonApp or BashApp object, which when called runs the apps through the executor."} {"_id": "q_4069", "text": "Decorator function for making python apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. 
This is needed in order to allow for omitted parenthesis,\n for example, `@python_app` if using all defaults or `@python_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. Default is False."} {"_id": "q_4070", "text": "Decorator function for making bash apps.\n\n Parameters\n ----------\n function : function\n Do not pass this keyword argument directly. This is needed in order to allow for omitted parenthesis,\n for example, `@bash_app` if using all defaults or `@bash_app(walltime=120)`. If the\n decorator is used alone, function will be the actual function being decorated, whereas if it\n is called with arguments, function will be None. Default is None.\n data_flow_kernel : DataFlowKernel\n The :class:`~parsl.dataflow.dflow.DataFlowKernel` responsible for managing this app. This can\n be omitted only after calling :meth:`parsl.dataflow.dflow.DataFlowKernelLoader.load`. Default is None.\n walltime : int\n Walltime for app in seconds. Default is 60.\n executors : string or list\n Labels of the executors that this app can execute over. Default is 'all'.\n cache : bool\n Enable caching of the app call. 
Default is False."} {"_id": "q_4071", "text": "Internal\n Wrap the Parsl app with a function that will call the monitor function and point it at the correct pid when the task begins."} {"_id": "q_4072", "text": "Transport file on the remote side to a local directory\n\n Args:\n - remote_source (string): remote_source\n - local_dir (string): Local directory to copy to\n\n\n Returns:\n - str: Local path to file\n\n Raises:\n - FileExists : Name collision at local directory.\n - FileCopyException : FileCopy failed."} {"_id": "q_4073", "text": "Return true if the path refers to an existing directory.\n\n Parameters\n ----------\n path : str\n Path of directory on the remote side to check."} {"_id": "q_4074", "text": "Create a directory on the remote side.\n\n If intermediate directories do not exist, they will be created.\n\n Parameters\n ----------\n path : str\n Path of directory on the remote side to create.\n mode : int\n Permissions (posix-style) for the newly-created directory.\n exist_ok : bool\n If False, raise an OSError if the target directory already exists."} {"_id": "q_4075", "text": "Let the FlowControl system know that there is an event."} {"_id": "q_4076", "text": "Create the kubernetes deployment"} {"_id": "q_4077", "text": "Compose the launch command and call the scale_out\n\n This should be implemented in the child classes to take care of\n executor specific oddities."} {"_id": "q_4078", "text": "Starts the interchange process locally\n\n Starts the interchange process locally and uses an internal command queue to\n get the worker task and result ports that the interchange has bound to."} {"_id": "q_4079", "text": "Puts a worker on hold, preventing scheduling of additional tasks to it.\n\n This is called \"hold\" mostly because this only stops scheduling of tasks,\n and does not actually kill the worker.\n\n Parameters\n ----------\n\n worker_id : str\n Worker id to be put on hold"} {"_id": "q_4080", "text": "Sends hold command to all managers which 
are in a specific block\n\n Parameters\n ----------\n block_id : str\n Block identifier of the block to be put on hold"} {"_id": "q_4081", "text": "Return status of all blocks."} {"_id": "q_4082", "text": "Called by RQ when there is a failure in a worker.\n\n NOTE: Make sure that in your RQ worker process, rollbar.init() has been called with\n handler='blocking'. The default handler, 'thread', does not work from inside an RQ worker."} {"_id": "q_4083", "text": "Pyramid entry point"} {"_id": "q_4084", "text": "Decorator for making error handling on AWS Lambda easier"} {"_id": "q_4085", "text": "Reports an arbitrary string message to Rollbar.\n\n message: the string body of the message\n level: level to report at. One of: 'critical', 'error', 'warning', 'info', 'debug'\n request: the request object for the context of the message\n extra_data: dictionary of params to include with the message. 'body' is reserved.\n payload_data: param names to pass in the 'data' level of the payload; overrides defaults."} {"_id": "q_4086", "text": "Searches a project for items that match the input criteria.\n\n title: all or part of the item's title to search for.\n return_fields: the fields that should be returned for each item.\n e.g. ['id', 'project_id', 'status'] will return a dict containing\n only those fields for each item.\n access_token: a project access token. 
If this is not provided,\n the one provided to init() will be used instead.\n search_fields: additional fields to include in the search.\n currently supported: status, level, environment"} {"_id": "q_4087", "text": "Creates .rollbar log file for use with rollbar-agent"} {"_id": "q_4088", "text": "Returns a dictionary describing the logged-in user using data from `request`.\n\n Try request.rollbar_person first, then 'user', then 'user_id'"} {"_id": "q_4089", "text": "Attempts to add information from the lambda context if it exists"} {"_id": "q_4090", "text": "Attempts to build request data; if successful, sets the 'request' key on `data`."} {"_id": "q_4091", "text": "Returns True if we should record local variables for the given frame."} {"_id": "q_4092", "text": "Returns a dictionary containing data from the request.\n Can handle webob or werkzeug-based request objects."} {"_id": "q_4093", "text": "Returns a dictionary containing information about the server environment."} {"_id": "q_4094", "text": "This runs the protocol on port 8000"} {"_id": "q_4095", "text": "Read into ``buf`` from the device. The number of bytes read will be the\n length of ``buf``.\n\n If ``start`` or ``end`` is provided, then the buffer will be sliced\n as if ``buf[start:end]``. This will not cause an allocation like\n ``buf[start:end]`` will so it saves memory.\n\n :param bytearray buffer: buffer to write into\n :param int start: Index to start writing at\n :param int end: Index to write up to but not include"} {"_id": "q_4096", "text": "Write the bytes from ``buffer`` to the device. Transmits a stop bit if\n ``stop`` is set.\n\n If ``start`` or ``end`` is provided, then the buffer will be sliced\n as if ``buffer[start:end]``. 
This will not cause an allocation like\n ``buffer[start:end]`` will so it saves memory.\n\n :param bytearray buffer: buffer containing the bytes to write\n :param int start: Index to start writing from\n :param int end: Index to read up to but not include\n :param bool stop: If true, output an I2C stop condition after the buffer is written"} {"_id": "q_4097", "text": "Write the bytes from ``out_buffer`` to the device, then immediately\n reads into ``in_buffer`` from the device. The number of bytes read\n will be the length of ``in_buffer``.\n Transmits a stop bit after the write, if ``stop`` is set.\n\n If ``out_start`` or ``out_end`` is provided, then the output buffer\n will be sliced as if ``out_buffer[out_start:out_end]``. This will\n not cause an allocation like ``buffer[out_start:out_end]`` will so\n it saves memory.\n\n If ``in_start`` or ``in_end`` is provided, then the input buffer\n will be sliced as if ``in_buffer[in_start:in_end]``. This will not\n cause an allocation like ``in_buffer[in_start:in_end]`` will so\n it saves memory.\n\n :param bytearray out_buffer: buffer containing the bytes to write\n :param bytearray in_buffer: buffer containing the bytes to read into\n :param int out_start: Index to start writing from\n :param int out_end: Index to read up to but not include\n :param int in_start: Index to start writing at\n :param int in_end: Index to write up to but not include\n :param bool stop: If true, output an I2C stop condition after the buffer is written"} {"_id": "q_4098", "text": "This function returns a Hangul letter by composing the specified chosung, joongsung, and jongsung.\n @param chosung\n @param joongsung\n @param jongsung the terminal Hangul letter. 
This is optional if you do not need a jongsung."} {"_id": "q_4099", "text": "Check whether this letter contains Jongsung"} {"_id": "q_4100", "text": "Returns true if node is inside the name of an except handler."} {"_id": "q_4101", "text": "Return true if the given node is inside a lambda"} {"_id": "q_4102", "text": "Recursively returns all atoms in nested lists and tuples."} {"_id": "q_4103", "text": "return True if the node is referencing the \"super\" builtin function"} {"_id": "q_4104", "text": "return true if the function does nothing but raise an exception"} {"_id": "q_4105", "text": "return true if the given Name node is used in function or lambda\n default argument's value"} {"_id": "q_4106", "text": "return true if the name is used in a function decorator"} {"_id": "q_4107", "text": "return True if `frame` is an astroid.Class node with `node` in the\n subtree of its bases attribute"} {"_id": "q_4108", "text": "return the higher parent which is not an AssignName, Tuple or List node"} {"_id": "q_4109", "text": "decorator to store messages that are handled by a checker method"} {"_id": "q_4110", "text": "Returns the specified argument from a function call.\n\n :param astroid.Call call_node: Node representing a function call to check.\n :param int position: position of the argument.\n :param str keyword: the keyword of the argument.\n\n :returns: The node representing the argument, None if the argument is not found.\n :rtype: astroid.Name\n :raises ValueError: if both position and keyword are None.\n :raises NoSuchArgumentError: if no argument at the provided position or with\n the provided keyword."} {"_id": "q_4111", "text": "Check if the given exception handler catches\n the given error_type.\n\n The *handler* parameter is a node, representing an ExceptHandler node.\n The *error_type* can be an exception, such as AttributeError,\n the name of an exception, or it can be a tuple of errors.\n The function will return True if the handler catches any of the\n given 
errors."} {"_id": "q_4112", "text": "Detect if the given function node is decorated with a property."} {"_id": "q_4113", "text": "Determine if the `func` node has a decorator with the qualified name `qname`."} {"_id": "q_4114", "text": "Return the ExceptHandler or the TryExcept node in which the node is."} {"_id": "q_4115", "text": "Return the inferred value for the given node.\n\n Return None if inference failed or if there is some ambiguity (more than\n one node has been inferred)."} {"_id": "q_4116", "text": "Return the inferred type for `node`\n\n If there is more than one possible type, or if inferred type is Uninferable or None,\n return None"} {"_id": "q_4117", "text": "Check if the given function node is a singledispatch function."} {"_id": "q_4118", "text": "Split the names of the given module into subparts\n\n For example,\n _qualified_names('pylint.checkers.ImportsChecker')\n returns\n ['pylint', 'pylint.checkers', 'pylint.checkers.ImportsChecker']"} {"_id": "q_4119", "text": "Get a prepared module name from the given import node\n\n In the case of relative imports, this will return the\n absolute qualified module name, which might be useful\n for debugging. 
Otherwise, the initial module name\n is returned unchanged."} {"_id": "q_4120", "text": "return a string which represents imports as a tree"} {"_id": "q_4121", "text": "triggered when an import statement is seen"} {"_id": "q_4122", "text": "triggered when a from statement is seen"} {"_id": "q_4123", "text": "Check that the `node` import or importfrom node position is correct\n\n Send a message if `node` comes before another instruction"} {"_id": "q_4124", "text": "Checks that imports of module `node` are grouped by category\n\n Imports must follow this order: standard, 3rd party, local"} {"_id": "q_4125", "text": "notify an imported module, used to analyze dependencies"} {"_id": "q_4126", "text": "check if the module is deprecated"} {"_id": "q_4127", "text": "check if the module has a preferred replacement"} {"_id": "q_4128", "text": "return a verbatim layout for displaying dependencies"} {"_id": "q_4129", "text": "build the internal or the external dependency graph"} {"_id": "q_4130", "text": "Read config file and return list of options"} {"_id": "q_4131", "text": "return true if the node should be treated"} {"_id": "q_4132", "text": "get callbacks from handler for the visited node"} {"_id": "q_4133", "text": "Check the consistency of msgid.\n\n msg ids for a checker should be a string of len 4, where the first two\n characters are the checker id and the last two the msg id in this\n checker.\n\n :raises InvalidMessageError: If the checker id in the messages is not\n always the same."} {"_id": "q_4134", "text": "Check that a datetime was inferred.\n If so, emit boolean-datetime warning."} {"_id": "q_4135", "text": "Manage messages of different types in the context of path."} {"_id": "q_4136", "text": "Launch layouts display"} {"_id": "q_4137", "text": "get title for objects"} {"_id": "q_4138", "text": "set different default options with _default dictionary"} {"_id": "q_4139", "text": "visit one class and add it to diagram"} {"_id": "q_4140", "text": "return ancestor nodes of a 
class node"} {"_id": "q_4141", "text": "extract recursively classes related to klass_node"} {"_id": "q_4142", "text": "leave the pyreverse.utils.Project node\n\n return the generated diagram definition"} {"_id": "q_4143", "text": "visit astroid.ImportFrom and catch modules for package diagram"} {"_id": "q_4144", "text": "return a class diagram definition for the given klass and its\n related klasses"} {"_id": "q_4145", "text": "Get the diagrams configuration data\n\n :param project:The pyreverse project\n :type project: pyreverse.utils.Project\n :param linker: The linker\n :type linker: pyreverse.inspector.Linker(IdGeneratorMixIn, LocalsVisitor)\n\n :returns: The list of diagram definitions\n :rtype: list(:class:`pylint.pyreverse.diagrams.ClassDiagram`)"} {"_id": "q_4146", "text": "Check if the given owner should be ignored\n\n This will verify if the owner's module is in *ignored_modules*\n or the owner's module fully qualified name is in *ignored_modules*\n or if the *ignored_modules* contains a pattern which catches\n the fully qualified name of the module.\n\n Also, similar checks are done for the owner itself, if its name\n matches any name from the *ignored_classes* or if its qualified\n name can be found in *ignored_classes*."} {"_id": "q_4147", "text": "Check if the given node has a parent of the given type."} {"_id": "q_4148", "text": "Check if the given name is used as a variadic argument."} {"_id": "q_4149", "text": "Check that the given uninferable Call node does not\n call an actual function."} {"_id": "q_4150", "text": "Detect TypeErrors for unary operands."} {"_id": "q_4151", "text": "visit an astroid.AssignName node\n\n handle locals_type"} {"_id": "q_4152", "text": "handle an astroid.assignattr node\n\n handle instance_attrs_type"} {"_id": "q_4153", "text": "return true if the module should be added to dependencies"} {"_id": "q_4154", "text": "colorize message by wrapping it with ansi escape codes\n\n :type msg: str or unicode\n :param msg: the 
message string to colorize\n\n :type color: str or None\n :param color:\n the color identifier (see `ANSI_COLORS` for available values)\n\n :type style: str or None\n :param style:\n style string (see `ANSI_COLORS` for available values). To get\n several style effects at the same time, use a comma as separator.\n\n :raise KeyError: if a nonexistent color or style identifier is given\n\n :rtype: str or unicode\n :return: the ansi escaped string"} {"_id": "q_4155", "text": "Register the reporter classes with the linter."} {"_id": "q_4156", "text": "manage messages of different types, and colorize output\n using ansi escape codes"} {"_id": "q_4157", "text": "open a vcg graph"} {"_id": "q_4158", "text": "draw a node"} {"_id": "q_4159", "text": "draw an edge from a node to another."} {"_id": "q_4160", "text": "Check the new string formatting."} {"_id": "q_4161", "text": "check for bad escapes in a non-raw string.\n\n prefix: lowercase string of eg 'ur' string prefix markers.\n string_body: the un-parsed body of the string, not including the quote\n marks.\n start_row: integer line number in the source."} {"_id": "q_4162", "text": "display a section as text"} {"_id": "q_4163", "text": "Display an evaluation section as a text."} {"_id": "q_4164", "text": "Register a MessageDefinition with consistency in mind.\n\n :param MessageDefinition message: The message definition being added."} {"_id": "q_4165", "text": "Check that a symbol is not already used."} {"_id": "q_4166", "text": "Raise an error when a symbol is duplicated.\n\n :param str msgid: The msgid corresponding to the symbols\n :param str symbol: Offending symbol\n :param str other_symbol: Other offending symbol\n :raises InvalidMessageError: when a symbol is duplicated."} {"_id": "q_4167", "text": "Raise an error when a msgid is duplicated.\n\n :param str symbol: The symbol corresponding to the msgids\n :param str msgid: Offending msgid\n :param str other_msgid: Other offending msgid\n :raises InvalidMessageError: 
when a msgid is duplicated."} {"_id": "q_4168", "text": "Generates a user-consumable representation of a message.\n\n Can be just the message ID or the ID and the symbol."} {"_id": "q_4169", "text": "Output full messages list documentation in ReST format."} {"_id": "q_4170", "text": "Output full documentation in ReST format for all extension modules"} {"_id": "q_4171", "text": "Use sched_affinity if available for virtualized or containerized environments."} {"_id": "q_4172", "text": "take a list of module names which are pylint plugins and load\n and register them"} {"_id": "q_4173", "text": "overridden from config.OptionsProviderMixin to handle some\n special options"} {"_id": "q_4174", "text": "disable all reporters"} {"_id": "q_4175", "text": "Disable all other checkers and enable Python 3 warnings."} {"_id": "q_4176", "text": "return all available checkers as a list"} {"_id": "q_4177", "text": "Get all the checker names that this linter knows about."} {"_id": "q_4178", "text": "return checkers needed for activated messages and reports"} {"_id": "q_4179", "text": "get modules and errors from a list of modules and handle errors"} {"_id": "q_4180", "text": "set the name of the currently analyzed module and\n init statistics for it"} {"_id": "q_4181", "text": "Check a module from its astroid representation."} {"_id": "q_4182", "text": "optik callback for printing some help about a particular message"} {"_id": "q_4183", "text": "optik callback for printing full documentation"} {"_id": "q_4184", "text": "Wrap the text on the given line length."} {"_id": "q_4185", "text": "return decoded line from encoding or decode with default encoding"} {"_id": "q_4186", "text": "Determines if the basename is matched in a regex blacklist\n\n :param str base_name: The basename of the file\n :param list black_list_re: A collection of regex patterns to match against.\n Successful matches are blacklisted.\n\n :returns: `True` if the basename is blacklisted, `False` otherwise.\n :rtype: 
bool"} {"_id": "q_4187", "text": "load all modules and packages in the given directory, looking for a\n 'register' function in each one, used to register pylint checkers"} {"_id": "q_4188", "text": "return string as a comment"} {"_id": "q_4189", "text": "format an options section using the INI format"} {"_id": "q_4190", "text": "format options using the INI format"} {"_id": "q_4191", "text": "overridden to detect problems easily"} {"_id": "q_4192", "text": "return the ancestor nodes"} {"_id": "q_4193", "text": "trick to get table content without actually writing it\n\n return an aligned list of lists containing table cell values as strings"} {"_id": "q_4194", "text": "Walk the AST to collect block level options line numbers."} {"_id": "q_4195", "text": "Report an ignored message.\n\n state_scope is either MSG_STATE_SCOPE_MODULE or MSG_STATE_SCOPE_CONFIG,\n depending on whether the message was disabled locally in the module,\n or globally. The other arguments are the same as for add_message."} {"_id": "q_4196", "text": "register a report\n\n reportid is the unique identifier for the report\n r_title the report's title\n r_cb the method to call to make the report\n checker is the checker defining the report"} {"_id": "q_4197", "text": "render registered reports"} {"_id": "q_4198", "text": "Get the name of the property that the given node is a setter for.\n\n :param node: The node to get the property name for.\n :type node: str\n\n :rtype: str or None\n :returns: The name of the property that the node is a setter for,\n or None if one could not be found."} {"_id": "q_4199", "text": "Get the property node for the given setter node.\n\n :param node: The node to get the property for.\n :type node: astroid.FunctionDef\n\n :rtype: astroid.FunctionDef or None\n :returns: The node relating to the property of the given setter node,\n or None if one could not be found."} {"_id": "q_4200", "text": "Check if a return node returns a value other than None.\n\n :param return_node: 
The return node to check.\n :type return_node: astroid.Return\n\n :rtype: bool\n :return: True if the return node returns a value other than None,\n False otherwise."} {"_id": "q_4201", "text": "Gets all of the possible raised exception types for the given raise node.\n\n .. note::\n\n Caught exception types are ignored.\n\n\n :param node: The raise node to find exception types for.\n :type node: astroid.node_classes.NodeNG\n\n :returns: A list of exception types possibly raised by :param:`node`.\n :rtype: set(str)"} {"_id": "q_4202", "text": "inspect the source file to find messages activated or deactivated by id."} {"_id": "q_4203", "text": "inspect the source file to find encoding problems"} {"_id": "q_4204", "text": "inspect the source to find fixme problems"} {"_id": "q_4205", "text": "Check if the name is a future import from another module."} {"_id": "q_4206", "text": "get overridden method if any"} {"_id": "q_4207", "text": "return extra information to add to the message for unpacking-non-sequence\n and unbalanced-tuple-unpacking errors"} {"_id": "q_4208", "text": "Detect whether the given frames share a global\n scope.\n\n Two frames share a global scope when neither of them,\n nor any of their parent scopes up to the root scope,\n is hidden under a function scope.\n In this case, depending on something defined later on\n will not work, because it is still undefined.\n\n Example:\n class A:\n # B has the same global scope as `C`, leading to a NameError.\n class B(C): ...\n class C: ..."} {"_id": "q_4209", "text": "Return True if the node is in a local class scope, as an assignment.\n\n :param node: Node considered\n :type node: astroid.Node\n :return: True if the node is in a local class scope, as an assignment. 
False otherwise.\n :rtype: bool"} {"_id": "q_4210", "text": "Return True if there is a node with the same name in the to_consume dict of an upper scope\n and if that scope is a function\n\n :param node: node to check for\n :type node: astroid.Node\n :param index: index of the current consumer inside self._to_consume\n :type index: int\n :return: True if there is a node with the same name in the to_consume dict of an upper scope\n and if that scope is a function\n :rtype: bool"} {"_id": "q_4211", "text": "Check for unbalanced tuple unpacking\n and unpacking non-sequences."} {"_id": "q_4212", "text": "Update consumption analysis for metaclasses."} {"_id": "q_4213", "text": "return a list of subpackages for the given directory"} {"_id": "q_4214", "text": "make a layout with some stats about duplication"} {"_id": "q_4215", "text": "append a file to search for similarities"} {"_id": "q_4216", "text": "display computed similarities on stdout"} {"_id": "q_4217", "text": "find similarities in the two given linesets"} {"_id": "q_4218", "text": "create the index for this set"} {"_id": "q_4219", "text": "Check if a definition signature is equivalent to a call."} {"_id": "q_4220", "text": "Determine if the two methods have different parameters\n\n They are considered to have different parameters if:\n\n * they have different positional parameters, including different names\n\n * one of the methods has variadics, while the other does not\n\n * they have different keyword only parameters."} {"_id": "q_4221", "text": "Safely infer the return value of a function.\n\n Returns None if inference failed or if there is some ambiguity (more than\n one node has been inferred). 
Otherwise returns the inferred value."} {"_id": "q_4222", "text": "Set the given node as accessed."} {"_id": "q_4223", "text": "init visit variable _accessed"} {"_id": "q_4224", "text": "Detect that a class has a consistent mro or duplicate bases."} {"_id": "q_4225", "text": "Detect that a class inherits something which is not\n a class or a type."} {"_id": "q_4226", "text": "Check if the given function node is a useless method override\n\n We consider it *useless* if it uses the super() builtin, but adds\n nothing beyond what not implementing the method at all would do.\n If the method uses super() to delegate an operation to the rest of the MRO,\n and if the method called is the same as the current one, and the arguments\n passed to super() are the same as the parameters that were passed to\n this method, then the method could be removed altogether, letting\n the other implementations take precedence."} {"_id": "q_4227", "text": "on method node, check if this method couldn't be a function\n\n ignore class, static and abstract methods, initializer,\n methods overridden from a parent class."} {"_id": "q_4228", "text": "Check that the given AssignAttr node\n is defined in the class slots."} {"_id": "q_4229", "text": "check if the name handles an access to a class member\n if so, register it"} {"_id": "q_4230", "text": "check that the given class node implements abstract methods from\n base classes"} {"_id": "q_4231", "text": "Verify that the exception context is properly set.\n\n An exception context can be only `None` or an exception."} {"_id": "q_4232", "text": "display results encapsulated in the layout tree"} {"_id": "q_4233", "text": "Check if a class node is a typing.NamedTuple class"} {"_id": "q_4234", "text": "Check if a class definition defines an Enum class.\n\n :param node: The class node to check.\n :type node: astroid.ClassDef\n\n :returns: True if the given node represents an Enum class. 
False otherwise.\n :rtype: bool"} {"_id": "q_4235", "text": "Check if a class definition defines a Python 3.7+ dataclass\n\n :param node: The class node to check.\n :type node: astroid.ClassDef\n\n :returns: True if the given node represents a dataclass class. False otherwise.\n :rtype: bool"} {"_id": "q_4236", "text": "initialize visit variables"} {"_id": "q_4237", "text": "check number of public methods"} {"_id": "q_4238", "text": "check the node has any spelling errors"} {"_id": "q_4239", "text": "Format the message according to the given template.\n\n The template format is the one of the format method :\n cf. http://docs.python.org/2/library/string.html#formatstrings"} {"_id": "q_4240", "text": "Check if the given node is an actual elif\n\n This is a problem we're having with the builtin ast module,\n which splits `elif` branches into a separate if statement.\n Unfortunately we need to know the exact type in certain\n cases."} {"_id": "q_4241", "text": "Check if the given if node can be simplified.\n\n The if statement can be reduced to a boolean expression\n in some cases. 
For instance, if there are two branches\n and both of them return a boolean value that depends on\n the result of the statement's test, then this can be reduced\n to `bool(test)` without losing any functionality."} {"_id": "q_4242", "text": "Check if an exception of type StopIteration is raised inside a generator"} {"_id": "q_4243", "text": "Check if a StopIteration exception is raised by the call to next function\n\n If the next value has a default value, then do not add message.\n\n :param node: Check to see if this Call node is a next function\n :type node: :class:`astroid.node_classes.Call`"} {"_id": "q_4244", "text": "Get the duplicated types from the underlying isinstance calls.\n\n :param astroid.BoolOp node: Node which should contain a bunch of isinstance calls.\n :returns: Dictionary of the comparison objects from the isinstance calls,\n to duplicate values from consecutive calls.\n :rtype: dict"} {"_id": "q_4245", "text": "Check isinstance calls which can be merged together."} {"_id": "q_4246", "text": "Returns true if node is 'condition and true_value or false_value' form.\n\n All of: condition, true_value and false_value should not be a complex boolean expression"} {"_id": "q_4247", "text": "Check that all return statements inside a function are consistent.\n\n Return statements are consistent if:\n - all returns are explicit and if there is no implicit return;\n - all returns are empty and if there is, possibly, an implicit return.\n\n Args:\n node (astroid.FunctionDef): the function holding the return statements."} {"_id": "q_4248", "text": "Check if the node ends with an explicit return statement.\n\n Args:\n node (astroid.NodeNG): node to be checked.\n\n Returns:\n bool: True if the node ends with an explicit statement, False otherwise."} {"_id": "q_4249", "text": "Emit a convention whenever range and len are used for indexing."} {"_id": "q_4250", "text": "check if we need graphviz for different output format"} {"_id": "q_4251", "text": "checking 
arguments and run project"} {"_id": "q_4252", "text": "write a class diagram"} {"_id": "q_4253", "text": "initialize DotWriter and add options for layout."} {"_id": "q_4254", "text": "return True if message may be emitted using the current interpreter"} {"_id": "q_4255", "text": "return the help string for the given message id"} {"_id": "q_4256", "text": "Extracts the environment PYTHONPATH and appends the current sys.path to\n those."} {"_id": "q_4257", "text": "Pylint the given file.\n\n When run from emacs we will be in the directory of a file, and passed its\n filename. If this file is part of a package and is trying to import other\n modules from within its own package or another package rooted in a directory\n below it, pylint will classify it as a failed import.\n\n To get around this, we traverse down the directory tree to find the root of\n the package this module is in. We then invoke pylint from this directory.\n\n Finally, we must correct the filenames in the output generated by pylint so\n Emacs doesn't become confused (it will expect just the original filename,\n while pylint may extend it with extra directories if we've traversed down\n the tree)"} {"_id": "q_4258", "text": "Run pylint from python\n\n ``command_options`` is a string containing ``pylint`` command line options;\n ``return_std`` (boolean) indicates return of created standard output\n and error (see below);\n ``stdout`` and ``stderr`` are 'file-like' objects in which standard output\n could be written.\n\n Calling agent is responsible for stdout/err management (creation, close).\n Default standard output and error are those from sys,\n or standalone ones (``subprocess.PIPE``) are used\n if they are not set and ``return_std``.\n\n If ``return_std`` is set to ``True``, this function returns a 2-tuple\n containing standard output and error related to the created process,\n as follows: ``(stdout, stderr)``.\n\n To silently run Pylint on a module, and get its standard output and error:\n >>> 
(pylint_stdout, pylint_stderr) = py_run( 'module_name.py', True)"} {"_id": "q_4259", "text": "recursive function doing the real work for get_cycles"} {"_id": "q_4260", "text": "returns self._source"} {"_id": "q_4261", "text": "Generates a graph file.\n\n :param str outputfile: filename and path [defaults to graphname.png]\n :param str dotfile: filename and path [defaults to graphname.dot]\n :param str mapfile: filename and path\n\n :rtype: str\n :return: a path to the generated file"} {"_id": "q_4262", "text": "If the msgid is a numeric one, then register it to inform the user\n it could furnish instead a symbolic msgid."} {"_id": "q_4263", "text": "reenable message of the given id"} {"_id": "q_4264", "text": "Get the message symbol of the given message id\n\n Return the original message id if the message does not\n exist."} {"_id": "q_4265", "text": "Adds a message given by ID or name.\n\n If provided, the message string is expanded using args.\n\n AST checkers must provide the node argument (but may optionally\n provide line if the line number is different), raw and token checkers\n must provide the line argument."} {"_id": "q_4266", "text": "output a full documentation in ReST format"} {"_id": "q_4267", "text": "Return a line with |s for each of the positions in the given lists."} {"_id": "q_4268", "text": "Get an indentation string for hanging indentation, consisting of the line-indent plus\n a number of spaces to fill up to the column of this token.\n\n e.g. 
the token indent for foo\n in \"print(foo)\"\n is \" \""} {"_id": "q_4269", "text": "Returns the valid offsets for the token at the given position."} {"_id": "q_4270", "text": "Extracts indentation information for a hanging indent\n\n Case of hanging indent after a bracket (including parenthesis)\n\n :param str bracket: bracket in question\n :param int position: Position of bracket in self._tokens\n\n :returns: the state and valid positions for hanging indentation\n :rtype: _ContinuedIndent"} {"_id": "q_4271", "text": "Extracts indentation information for a continued indent."} {"_id": "q_4272", "text": "Pushes a new token for continued indentation on the stack.\n\n Tokens that can modify continued indentation offsets are:\n * opening brackets\n * 'lambda'\n * : inside dictionaries\n\n push_token relies on the caller to filter out those\n interesting tokens.\n\n :param int token: The concrete token\n :param int position: The position of the token in the stream."} {"_id": "q_4273", "text": "a new line has been encountered, process it if necessary"} {"_id": "q_4274", "text": "Check that there are not unnecessary parens after a keyword.\n\n Parens are unnecessary if there is exactly one balanced outer pair on a\n line, and it is followed by a colon, and contains no commas (i.e. 
is not a\n tuple).\n\n Args:\n tokens: list of Tokens; the entire list of Tokens.\n start: int; the position of the keyword in the token list."} {"_id": "q_4275", "text": "Extended check of PEP-484 type hint presence"} {"_id": "q_4276", "text": "check the node line number and check it if not yet done"} {"_id": "q_4277", "text": "Check for lines containing multiple statements."} {"_id": "q_4278", "text": "check that lines have less than a maximum number of characters"} {"_id": "q_4279", "text": "return the indent level of the string"} {"_id": "q_4280", "text": "Checks if an import node is in the context of a conditional."} {"_id": "q_4281", "text": "Look for indexing exceptions."} {"_id": "q_4282", "text": "Visit an except handler block and check for exception unpacking."} {"_id": "q_4283", "text": "Visit a raise statement and check for raising\n strings or old-raise-syntax."} {"_id": "q_4284", "text": "search the pylint rc file and return its path if it finds it, else None"} {"_id": "q_4285", "text": "return a validated value for an option according to its type\n\n optional argument name is only used for error message formatting"} {"_id": "q_4286", "text": "optik callback for option setting"} {"_id": "q_4287", "text": "write a configuration file according to the current configuration\n into the given stream or stdout"} {"_id": "q_4288", "text": "return the usage string for available options"} {"_id": "q_4289", "text": "initialize the provider using default values"} {"_id": "q_4290", "text": "return the dictionary defining an option given its name"} {"_id": "q_4291", "text": "return an iterator on options grouped by section\n\n (section, [list of (optname, optdict, optvalue)])"} {"_id": "q_4292", "text": "Determines if a BoundMethod node represents a method call.\n\n Args:\n func (astroid.BoundMethod): The BoundMethod AST node to check.\n types (Optional[String]): Optional sequence of caller type names to restrict check.\n methods (Optional[String]): Optional sequence of 
method names to restrict check.\n\n Returns:\n bool: True if the node represents a method call for the given type and\n method names, False otherwise."} {"_id": "q_4293", "text": "Checks if node represents a string with complex formatting specs.\n\n Args:\n node (astroid.node_classes.NodeNG): AST node to check\n Returns:\n bool: True if inferred string uses complex formatting, False otherwise"} {"_id": "q_4294", "text": "Checks to see if a module uses a non-Python logging module."} {"_id": "q_4295", "text": "Checks calls to logging methods."} {"_id": "q_4296", "text": "return True if the node is inside a kind of for loop"} {"_id": "q_4297", "text": "Returns the loop node that holds the break node in arguments.\n\n Args:\n break_node (astroid.Break): the break node of interest.\n\n Returns:\n astroid.For or astroid.While: the loop node holding the break node."} {"_id": "q_4298", "text": "Returns True if a loop may end up in a break statement.\n\n Args:\n loop (astroid.For, astroid.While): the loop node inspected.\n\n Returns:\n bool: True if the loop may end up in a break statement, False otherwise."} {"_id": "q_4299", "text": "Returns a tuple of property classes and names.\n\n Property classes are fully qualified, such as 'abc.abstractproperty' and\n property names are the actual names, such as 'abstract_property'."} {"_id": "q_4300", "text": "return True if the object is a method redefined via decorator.\n\n For example:\n @property\n def x(self): return self._x\n @x.setter\n def x(self, value): self._x = value"} {"_id": "q_4301", "text": "Is this a call with exactly 1 argument,\n where that argument is positional?"} {"_id": "q_4302", "text": "Check instantiating abstract class with\n abc.ABCMeta as metaclass."} {"_id": "q_4303", "text": "Check that any loop with an else clause has a break statement."} {"_id": "q_4304", "text": "initialize visit variables and statistics"} {"_id": "q_4305", "text": "check for various kinds of statements without effect"} {"_id": 
"q_4306", "text": "check whether or not the lambda is suspicious"} {"_id": "q_4307", "text": "check the use of an assert statement on a tuple."} {"_id": "q_4308", "text": "check duplicate key in dictionary"} {"_id": "q_4309", "text": "check that a node is not inside a finally clause of a\n try...finally statement.\n If we find, before a try...finally block, a parent whose type is\n in breaker_classes, we skip the whole check."} {"_id": "q_4310", "text": "check module level assigned names"} {"_id": "q_4311", "text": "Check if we compare to a literal, which is usually what we do not want to do."} {"_id": "q_4312", "text": "create the subgraphs representing any `if` and `for` statements"} {"_id": "q_4313", "text": "parse the body and any `else` block of `if` and `for` statements"} {"_id": "q_4314", "text": "visit an astroid.Module node to check its too-complex rating and\n add a message if it is greater than the max_complexity stored in options"} {"_id": "q_4315", "text": "walk to the checker's dir and collect visit and leave methods"} {"_id": "q_4316", "text": "call visit events of astroid checkers for the given node, recurse on\n its children, then leave events."} {"_id": "q_4317", "text": "create a relationship"} {"_id": "q_4318", "text": "return a relationship or None"} {"_id": "q_4319", "text": "return visible attributes, possibly with class name"} {"_id": "q_4320", "text": "return visible methods"} {"_id": "q_4321", "text": "create a diagram object"} {"_id": "q_4322", "text": "return class names if needed in diagram"} {"_id": "q_4323", "text": "return all class nodes in the diagram"} {"_id": "q_4324", "text": "return a module by its name, looking also for relative imports;\n raise KeyError if not found"} {"_id": "q_4325", "text": "add dependencies created by from-imports"} {"_id": "q_4326", "text": "Deletes old deployed versions of the function in AWS Lambda.\n\n Won't delete $Latest and any aliased version\n\n :param str src:\n The path to your Lambda ready project 
(folder must contain a valid\n config.yaml and handler module (e.g.: service.py).\n :param int keep_last_versions:\n The number of recent versions to keep and not delete"} {"_id": "q_4327", "text": "Deploys a new function to AWS Lambda.\n\n :param str src:\n The path to your Lambda ready project (folder must contain a valid\n config.yaml and handler module (e.g.: service.py).\n :param str local_package:\n The path to a local package which should be included in the deploy as\n well (and/or is not available on PyPI)"} {"_id": "q_4328", "text": "Deploys a new function via AWS S3.\n\n :param str src:\n The path to your Lambda ready project (folder must contain a valid\n config.yaml and handler module (e.g.: service.py).\n :param str local_package:\n The path to a local package which should be included in the deploy as\n well (and/or is not available on PyPI)"} {"_id": "q_4329", "text": "Uploads a new function to AWS S3.\n\n :param str src:\n The path to your Lambda ready project (folder must contain a valid\n config.yaml and handler module (e.g.: service.py).\n :param str local_package:\n The path to a local package which should be included in the deploy as\n well (and/or is not available on PyPI)"} {"_id": "q_4330", "text": "Copies template files to a given directory.\n\n :param str src:\n The path to output the template lambda project files.\n :param bool minimal:\n Minimal possible template files (excludes event.json)."} {"_id": "q_4331", "text": "Translate a string of the form \"module.function\" into a callable\n function.\n\n :param str src:\n The path to your Lambda project containing a valid handler file.\n :param str handler:\n A dot delimited string representing the `.`."} {"_id": "q_4332", "text": "Shortcut to insert the `account_id` and `role` into the iam string."} {"_id": "q_4333", "text": "Upload a function to AWS S3."} {"_id": "q_4334", "text": "Download the data at a URL, and cache it under the given name.\n\n The file is stored under `pyav/test` with the 
given name in the directory\n :envvar:`PYAV_TESTDATA_DIR`, or the first that is writeable of:\n\n - the current virtualenv\n - ``/usr/local/share``\n - ``/usr/local/lib``\n - ``/usr/share``\n - ``/usr/lib``\n - the user's home"} {"_id": "q_4335", "text": "Download and return a path to a sample from the FFmpeg test suite.\n\n Data is handled by :func:`cached_download`.\n\n See the `FFmpeg Automated Test Environment `_"} {"_id": "q_4336", "text": "Get distutils-compatible extension extras for the given library.\n\n This requires ``pkg-config``."} {"_id": "q_4337", "text": "Update the `dst` with the `src`, extending values where lists.\n\n Primarily useful for integrating results from `get_library_config`."} {"_id": "q_4338", "text": "Spawn a process, and eat the stdio."} {"_id": "q_4339", "text": "filter the quoted text out of a message"} {"_id": "q_4340", "text": "parse one document to prep for TextRank"} {"_id": "q_4341", "text": "construct the TextRank graph from parsed paragraphs"} {"_id": "q_4342", "text": "output the graph in Dot file format"} {"_id": "q_4343", "text": "render the TextRank graph for visual formats"} {"_id": "q_4344", "text": "run the TextRank algorithm"} {"_id": "q_4345", "text": "leverage noun phrase chunking"} {"_id": "q_4346", "text": "iterate through the noun phrases"} {"_id": "q_4347", "text": "iterator for collecting the named-entities"} {"_id": "q_4348", "text": "create a MinHash digest"} {"_id": "q_4349", "text": "determine distance for each sentence"} {"_id": "q_4350", "text": "iterator for the most significant sentences, up to a specified limit"} {"_id": "q_4351", "text": "pretty print a JSON object"} {"_id": "q_4352", "text": "Fetch data about tag"} {"_id": "q_4353", "text": "Create the tag."} {"_id": "q_4354", "text": "Private method to extract the resources from a value.\n It will check the type of object in the array provided and build\n the right structure for the API."} {"_id": "q_4355", "text": "Add the Tag to a Droplet.\n\n 
Attributes accepted at creation time:\n droplet: array of string or array of int, or array of Droplets."} {"_id": "q_4356", "text": "Remove the Tag from the Droplet.\n\n Attributes accepted at creation time:\n droplet: array of string or array of int, or array of Droplets."} {"_id": "q_4357", "text": "Class method that will return a Action object by ID."} {"_id": "q_4358", "text": "Wait until the action is marked as completed or with an error.\n It will return True in case of success, otherwise False.\n\n Optional Args:\n update_every_seconds - int : number of seconds to wait before\n checking if the action is completed."} {"_id": "q_4359", "text": "Class method that will return a Droplet object by ID.\n\n Args:\n api_token (str): token\n droplet_id (int): droplet id"} {"_id": "q_4360", "text": "Take a snapshot!\n\n Args:\n snapshot_name (str): name of snapshot\n\n Optional Args:\n return_dict (bool): Return a dict when True (default),\n otherwise return an Action.\n power_off (bool): Before taking the snapshot the droplet will be\n turned off with another API call. It will wait until the\n droplet will be powered off.\n\n Returns dict or Action"} {"_id": "q_4361", "text": "Change the kernel to a new one\n\n Args:\n kernel : instance of digitalocean.Kernel.Kernel\n\n Optional Args:\n return_dict (bool): Return a dict when True (default),\n otherwise return an Action.\n\n Returns dict or Action"} {"_id": "q_4362", "text": "Check and return a list of SSH key IDs or fingerprints according\n to DigitalOcean's API. 
This method is used to check and create a\n droplet with the correct SSH keys."} {"_id": "q_4363", "text": "Returns a list of Action objects.\n These actions can be used to check the droplet's status"} {"_id": "q_4364", "text": "Returns a specific Action by its ID.\n\n Args:\n action_id (int): id of action"} {"_id": "q_4365", "text": "Returns a list of Record objects"} {"_id": "q_4366", "text": "Load the FloatingIP object from DigitalOcean.\n\n Requires self.ip to be set."} {"_id": "q_4367", "text": "Creates a FloatingIP and assigns it to a Droplet.\n\n Note: Every argument and parameter given to this method will be\n assigned to the object.\n\n Args:\n droplet_id: int - droplet id"} {"_id": "q_4368", "text": "Assign a FloatingIP to a Droplet.\n\n Args:\n droplet_id: int - droplet id"} {"_id": "q_4369", "text": "Add tags to this Firewall."} {"_id": "q_4370", "text": "Remove tags from this Firewall."} {"_id": "q_4371", "text": "Class method that will return a SSHKey object by ID."} {"_id": "q_4372", "text": "Load the SSHKey object from DigitalOcean.\n\n Requires either self.id or self.fingerprint to be set."} {"_id": "q_4373", "text": "This method will load a SSHKey object from DigitalOcean\n from a public_key. 
This method will avoid problems like\n uploading the same public_key twice."} {"_id": "q_4374", "text": "This function returns a list of Region objects."} {"_id": "q_4375", "text": "This function returns a list of Droplet objects."} {"_id": "q_4376", "text": "This function returns a list of SSHKey objects."} {"_id": "q_4377", "text": "Return a SSHKey object by its ID."} {"_id": "q_4378", "text": "This method returns a list of all tags."} {"_id": "q_4379", "text": "This function returns a list of FloatingIP objects."} {"_id": "q_4380", "text": "Returns a list of Load Balancer objects."} {"_id": "q_4381", "text": "Returns a Load Balancer object by its ID.\n\n Args:\n id (str): Load Balancer ID"} {"_id": "q_4382", "text": "This function returns a list of Certificate objects."} {"_id": "q_4383", "text": "This method returns a list of all Snapshots."} {"_id": "q_4384", "text": "This method returns a list of all Snapshots based on Droplets."} {"_id": "q_4385", "text": "This method returns a list of all Snapshots based on volumes."} {"_id": "q_4386", "text": "This function returns a list of Volume objects."} {"_id": "q_4387", "text": "Returns a Volume object by its ID."} {"_id": "q_4388", "text": "Return a Firewall by its ID."} {"_id": "q_4389", "text": "Class method that will return a LoadBalancer object by its ID.\n\n Args:\n api_token (str): DigitalOcean API token\n id (str): Load Balancer ID"} {"_id": "q_4390", "text": "Loads updated attributes for a LoadBalancer object.\n\n Requires self.id to be set."} {"_id": "q_4391", "text": "Creates a new LoadBalancer.\n\n Note: Every argument and parameter given to this method will be\n assigned to the object.\n\n Args:\n name (str): The Load Balancer's name\n region (str): The slug identifier for a DigitalOcean region\n algorithm (str, optional): The load balancing algorithm to be\n used. 
Currently, it must be either \"round_robin\" or\n \"least_connections\"\n forwarding_rules (obj:`list`): A list of `ForwardingRules` objects\n health_check (obj, optional): A `HealthCheck` object\n sticky_sessions (obj, optional): A `StickySessions` object\n redirect_http_to_https (bool, optional): A boolean indicating\n whether HTTP requests to the Load Balancer should be\n redirected to HTTPS\n droplet_ids (obj:`list` of `int`): A list of IDs representing\n Droplets to be added to the Load Balancer (mutually\n exclusive with 'tag')\n tag (str): A string representing a DigitalOcean Droplet tag\n (mutually exclusive with 'droplet_ids')"} {"_id": "q_4392", "text": "Save the LoadBalancer"} {"_id": "q_4393", "text": "Assign a LoadBalancer to a Droplet.\n\n Args:\n droplet_ids (obj:`list` of `int`): A list of Droplet IDs"} {"_id": "q_4394", "text": "Unassign a LoadBalancer.\n\n Args:\n droplet_ids (obj:`list` of `int`): A list of Droplet IDs"} {"_id": "q_4395", "text": "Removes existing forwarding rules from a LoadBalancer.\n\n Args:\n forwarding_rules (obj:`list`): A list of `ForwardingRules` objects"} {"_id": "q_4396", "text": "Creates a new record for a domain.\n\n Args:\n type (str): The type of the DNS record (e.g. A, CNAME, TXT).\n name (str): The host name, alias, or service being defined by the\n record.\n data (int): Variable data depending on record type.\n priority (int): The priority for SRV and MX records.\n port (int): The port for SRV records.\n ttl (int): The time to live for the record, in seconds.\n weight (int): The weight for SRV records.\n flags (int): An unsigned integer between 0-255 used for CAA records.\n tags (string): The parameter tag for CAA records. 
Valid values are\n \"issue\", \"wildissue\", or \"iodef\""} {"_id": "q_4397", "text": "Save existing record"} {"_id": "q_4398", "text": "Checks if any timeout for the requests to DigitalOcean is required.\n To set a timeout, use the REQUEST_TIMEOUT_ENV_VAR environment\n variable."} {"_id": "q_4399", "text": "Class method that will return a Volume object by ID."} {"_id": "q_4400", "text": "Creates a Block Storage volume\n\n Note: Every argument and parameter given to this method will be\n assigned to the object.\n\n Args:\n name: string - a name for the volume\n snapshot_id: string - unique identifier for the volume snapshot\n size_gigabytes: int - size of the Block Storage volume in GiB\n filesystem_type: string, optional - name of the filesystem type the\n volume will be formatted with ('ext4' or 'xfs')\n filesystem_label: string, optional - the label to be applied to the\n filesystem, only used in conjunction with filesystem_type\n\n Optional Args:\n description: string - text field to describe a volume"} {"_id": "q_4401", "text": "Attach a Volume to a Droplet.\n\n Args:\n droplet_id: int - droplet id\n region: string - slug identifier for the region"} {"_id": "q_4402", "text": "Detach a Volume from a Droplet.\n\n Args:\n size_gigabytes: int - size of the Block Storage volume in GiB\n region: string - slug identifier for the region"} {"_id": "q_4403", "text": "Retrieve the list of snapshots that have been created from a volume.\n\n Args:"} {"_id": "q_4404", "text": "Class method that will return a Certificate object by its ID."} {"_id": "q_4405", "text": "Class method that will return an Image object by ID or slug.\n\n This method is used to validate the type of the image. 
If it is a\n number, it will be considered an Image ID; if it is a\n string, it will be considered a slug."} {"_id": "q_4406", "text": "Creates a new custom DigitalOcean Image from the Linux virtual machine\n image located at the provided `url`."} {"_id": "q_4407", "text": "Load slug.\n\n Loads by id, or by slug if id is not present or use slug is True."} {"_id": "q_4408", "text": "Rename an image"} {"_id": "q_4409", "text": "Convert reduce_sum layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4410", "text": "Convert slice operation.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4411", "text": "Convert clip operation.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4412", "text": "Convert elementwise addition.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4413", "text": "Convert elementwise subtraction.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for 
keras layers"} {"_id": "q_4414", "text": "Convert Linear.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4415", "text": "Convert matmul layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4416", "text": "Convert constant layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4417", "text": "Convert transpose layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4418", "text": "Convert reshape layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4419", "text": "Convert squeeze operation.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4420", "text": "Convert unsqueeze operation.\n\n Args:\n params: dictionary 
with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4421", "text": "Convert shape operation.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4422", "text": "Convert Average pooling.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4423", "text": "Convert 3d Max pooling.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4424", "text": "Convert instance normalization layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4425", "text": "Convert dropout.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4426", "text": "Convert relu layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n 
inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4427", "text": "Convert leaky relu layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4428", "text": "Convert softmax layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4429", "text": "Convert hardtanh layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4430", "text": "Convert selu layer.\n\n Args:\n params: dictionary with layer parameters\n w_name: name prefix in state_dict\n scope_name: pytorch scope name\n inputs: pytorch node inputs\n layers: dictionary with keras tensors\n weights: pytorch state_dict\n names: use short names for keras layers"} {"_id": "q_4431", "text": "Removes itself from the cache\n\n Note: This is required by the oauthlib"} {"_id": "q_4432", "text": "Returns the User object\n\n Returns None if the user isn't found or the passwords don't match\n\n :param username: username of the user\n :param password: password of the user"} {"_id": "q_4433", "text": "Creates a Token object and removes all expired tokens that belong\n to the user\n\n :param token: token object\n :param request: OAuthlib request object"} {"_id": "q_4434", "text": "Creates Grant object with the given params\n\n :param client_id: ID of the 
client\n :param code:\n :param request: OAuthlib request object"} {"_id": "q_4435", "text": "Init app with Flask instance.\n\n You can also pass the instance of Flask later::\n\n oauth = OAuth()\n oauth.init_app(app)"} {"_id": "q_4436", "text": "Registers a new remote application.\n\n :param name: the name of the remote application\n :param register: whether the remote app will be registered\n\n Find more parameters from :class:`OAuthRemoteApp`."} {"_id": "q_4437", "text": "Sends a request to the remote server with OAuth tokens attached.\n\n :param data: the data to be sent to the server.\n :param headers: an optional dictionary of headers.\n :param format: the format for the `data`. Can be `urlencoded` for\n URL encoded data or `json` for JSON.\n :param method: the HTTP request method to use.\n :param content_type: an optional content type. If a content type\n is provided, the data is passed as is, and\n the `format` is ignored.\n :param token: an optional token to pass; if it is None, the token will\n be generated by tokengetter."} {"_id": "q_4438", "text": "Handles an oauth1 authorization response."} {"_id": "q_4439", "text": "Handles an oauth2 authorization response."} {"_id": "q_4440", "text": "Handles authorization response smartly."} {"_id": "q_4441", "text": "Uses cached client or creates a new one with specific token."} {"_id": "q_4442", "text": "Creates a client with specific access token pair.\n\n :param token: a tuple of access token pair ``(token, token_secret)``\n or a dictionary of access token response.\n :returns: a :class:`requests_oauthlib.oauth1_session.OAuth1Session`\n object."} {"_id": "q_4443", "text": "When the consumer confirms the authorization."} {"_id": "q_4444", "text": "Request token handler decorator.\n\n The decorated function should return a dictionary or None as\n the extra credentials for creating the token response.\n\n If you don't need to add any extra credentials, it could be as\n simple as::\n\n @app.route('/oauth/request_token')\n 
@oauth.request_token_handler\n def request_token():\n return {}"} {"_id": "q_4445", "text": "Get client secret.\n\n The client object must have a ``client_secret`` attribute."} {"_id": "q_4446", "text": "Get request token secret.\n\n The request token object should have a ``secret`` attribute."} {"_id": "q_4447", "text": "Get access token secret.\n\n The access token object should have a ``secret`` attribute."} {"_id": "q_4448", "text": "Default realms of the client."} {"_id": "q_4449", "text": "Realms for this request token."} {"_id": "q_4450", "text": "Retrieves a previously stored client provided RSA key."} {"_id": "q_4451", "text": "Validates the supplied client key."} {"_id": "q_4452", "text": "Validates request token is available for client."} {"_id": "q_4453", "text": "Validates access token is available for client."} {"_id": "q_4454", "text": "Validate whether the timestamp and nonce have been used."} {"_id": "q_4455", "text": "Check if the token has permission on those realms."} {"_id": "q_4456", "text": "Validate verifier exists."} {"_id": "q_4457", "text": "Verify that the request token exists."} {"_id": "q_4458", "text": "Save request token to database.\n\n A grantsetter is required, which accepts a token and request\n parameters::\n\n def grantsetter(token, request):\n grant = Grant(\n token=token['oauth_token'],\n secret=token['oauth_token_secret'],\n client=request.client,\n redirect_uri=oauth.redirect_uri,\n realms=request.realms,\n )\n return grant.save()"} {"_id": "q_4459", "text": "The error page URI.\n\n When an error occurs, the user is redirected to this error page.\n You can configure the error page URI with Flask config::\n\n OAUTH2_PROVIDER_ERROR_URI = '/error'\n\n You can also define the error page by a named endpoint::\n\n OAUTH2_PROVIDER_ERROR_ENDPOINT = 'oauth.error'"} {"_id": "q_4460", "text": "When the consumer confirms the authorization."} {"_id": "q_4461", "text": "Determine if client authentication is required for current request.\n\n According to the 
rfc6749, client authentication is required in the\n following cases:\n\n Resource Owner Password Credentials Grant: see `Section 4.3.2`_.\n Authorization Code Grant: see `Section 4.1.3`_.\n Refresh Token Grant: see `Section 6`_.\n\n .. _`Section 4.3.2`: http://tools.ietf.org/html/rfc6749#section-4.3.2\n .. _`Section 4.1.3`: http://tools.ietf.org/html/rfc6749#section-4.1.3\n .. _`Section 6`: http://tools.ietf.org/html/rfc6749#section-6"} {"_id": "q_4462", "text": "Authenticate itself by other means.\n\n Other means are described in `Section 3.2.1`_.\n\n .. _`Section 3.2.1`: http://tools.ietf.org/html/rfc6749#section-3.2.1"} {"_id": "q_4463", "text": "Authenticate a non-confidential client.\n\n :param client_id: Client ID of the non-confidential client\n :param request: The Request object passed by oauthlib"} {"_id": "q_4464", "text": "Get the list of scopes associated with the refresh token.\n\n This method is used in the refresh token grant flow. We return\n the scope of the token to be refreshed so it can be applied to the\n new access token."} {"_id": "q_4465", "text": "Ensures the requested scope matches the scope originally granted\n by the resource owner. 
If the scope is omitted it is treated as equal\n to the scope originally granted by the resource owner.\n\n DEPRECATION NOTE: This method will cease to be used in oauthlib>0.4.2,\n future versions of ``oauthlib`` use the validator method\n ``get_original_scopes`` to determine the scope of the refreshed token."} {"_id": "q_4466", "text": "Default redirect_uri for the given client."} {"_id": "q_4467", "text": "Default scopes for the given client."} {"_id": "q_4468", "text": "Persist the Bearer token."} {"_id": "q_4469", "text": "Ensure client_id belongs to a valid and active client."} {"_id": "q_4470", "text": "Ensure the grant code is valid."} {"_id": "q_4471", "text": "Ensure the client is authorized to use the grant type requested.\n\n It will allow any of the four grant types (`authorization_code`,\n `password`, `client_credentials`, `refresh_token`) by default.\n Implement `allowed_grant_types` for the client object to authorize\n the request.\n\n It is suggested that `allowed_grant_types` should contain at least\n `authorization_code` and `refresh_token`."} {"_id": "q_4472", "text": "Ensure the token is valid and belongs to the client\n\n This method is used by the authorization code grant indirectly by\n issuing refresh tokens, resource owner password credentials grant\n (also indirectly) and the refresh token grant."} {"_id": "q_4473", "text": "Ensure the client is authorized access to requested scopes."} {"_id": "q_4474", "text": "Ensure the username and password are valid.\n\n Attach the user object to the request for later use."} {"_id": "q_4475", "text": "Revoke an access or refresh token."} {"_id": "q_4476", "text": "Since weibo is a rubbish server that does not follow the standard,\n we need to change the authorization header for it."} {"_id": "q_4477", "text": "Extract request params."} {"_id": "q_4478", "text": "Make sure text is bytes type."} {"_id": "q_4479", "text": "Create response class for Flask."} {"_id": "q_4480", "text": "Gets the cached clients 
dictionary in current context."} {"_id": "q_4481", "text": "Adds remote application and applies custom attributes on it.\n\n If the application instance's name is different from the argument\n provided name, or the keyword arguments are not empty, then the\n application instance will not be modified but be copied as a\n prototype.\n\n :param remote_app: the remote application instance.\n :type remote_app: the subclasses of :class:`BaseApplication`\n :param kwargs: the overriding attributes for the application instance."} {"_id": "q_4482", "text": "Creates and adds new remote application.\n\n :param name: the remote application's name.\n :param version: '1' or '2', the version code of OAuth protocol.\n :param kwargs: the attributes of remote application."} {"_id": "q_4483", "text": "Attempt to return a PWM instance for the platform which the code is being\n executed on. Currently supports only the Raspberry Pi using the RPi.GPIO\n library and Beaglebone Black using the Adafruit_BBIO library. Will throw an\n exception if a PWM instance can't be created for the current platform. The\n returned PWM object has the same interface as the RPi_PWM_Adapter and\n BBIO_PWM_Adapter classes."} {"_id": "q_4484", "text": "Stop PWM output on specified pin."} {"_id": "q_4485", "text": "Write the specified byte value to the IODIR register. If no value is\n specified the current buffered value will be written."} {"_id": "q_4486", "text": "Write the specified byte value to the GPPU register. If no value is\n specified the current buffered value will be written."} {"_id": "q_4487", "text": "Disable the FTDI drivers for the current platform. This is necessary\n because they will conflict with libftdi and accessing the FT232H. Note you\n can enable the FTDI drivers again by calling enable_FTDI_driver."} {"_id": "q_4488", "text": "Close the FTDI device. 
Will be automatically called when the program ends."} {"_id": "q_4489", "text": "Helper function to call write_data on the provided FTDI device and\n verify it succeeds."} {"_id": "q_4490", "text": "Helper function to call the provided command on the FTDI device and\n verify the response matches the expected value."} {"_id": "q_4491", "text": "Helper function to continuously poll reads on the FTDI device until an\n expected number of bytes are returned. Will throw a timeout error if no\n data is received within the specified number of timeout seconds. Returns\n the read data as a string if successful, otherwise raises an exception."} {"_id": "q_4492", "text": "Enable MPSSE mode on the FTDI device."} {"_id": "q_4493", "text": "Synchronize buffers with MPSSE by sending a bad opcode and reading the expected\n error response. Should be called once after enabling MPSSE."} {"_id": "q_4494", "text": "Set the clock speed of the MPSSE engine. Can be any value from 450 Hz\n to 30 MHz and will pick that speed or the closest speed below it."} {"_id": "q_4495", "text": "Read both GPIO bus states and return a 16-bit value with their state.\n D0-D7 are the lower 8 bits and C0-C7 are the upper 8 bits."} {"_id": "q_4496", "text": "Return command to update the MPSSE GPIO state to the current direction\n and level."} {"_id": "q_4497", "text": "Set the input or output mode for a specified pin. Mode should be\n either OUT or IN."} {"_id": "q_4498", "text": "Half-duplex SPI read.
The specified length of bytes will be clocked\n in the MISO line and returned as a bytearray object."} {"_id": "q_4499", "text": "Write the specified number of bytes to the chip."} {"_id": "q_4500", "text": "Read a signed byte from the specified register."} {"_id": "q_4501", "text": "Return an I2C device for the specified address and on the specified bus.\n If busnum isn't specified, the default I2C bus for the platform will attempt\n to be detected."} {"_id": "q_4502", "text": "Write an 8-bit value to the specified register."} {"_id": "q_4503", "text": "Read an unsigned byte from the specified register."} {"_id": "q_4504", "text": "Attempt to return a GPIO instance for the platform which the code is being\n executed on. Currently supports only the Raspberry Pi using the RPi.GPIO\n library and Beaglebone Black using the Adafruit_BBIO library. Will throw an\n exception if a GPIO instance can't be created for the current platform. The\n returned GPIO object is an instance of BaseGPIO."} {"_id": "q_4505", "text": "Set the input or output mode for a specified pin. Mode should be\n either OUTPUT or INPUT."} {"_id": "q_4506", "text": "Set the input or output mode for a specified pin. Mode should be\n either DIR_IN or DIR_OUT."} {"_id": "q_4507", "text": "Remove edge detection for a particular GPIO channel. Pin should be\n type IN."} {"_id": "q_4508", "text": "Call the method repeatedly such that it will return a PKey object."} {"_id": "q_4509", "text": "Call the function with an encrypted PEM and a passphrase callback which\n returns the wrong passphrase."} {"_id": "q_4510", "text": "Call the function with an encrypted PEM and a passphrase callback which\n returns a non-string."} {"_id": "q_4511", "text": "Create a CRL object with 100 Revoked objects, then call the\n get_revoked method repeatedly."} {"_id": "q_4512", "text": "Copy an empty Revoked object repeatedly. 
The copy is not garbage\n collected, therefore it needs to be manually freed."} {"_id": "q_4513", "text": "Generate a certificate given a certificate request.\n\n Arguments: req - Certificate request to use\n issuerCert - The certificate of the issuer\n issuerKey - The private key of the issuer\n serial - Serial number for the certificate\n notBefore - Timestamp (relative to now) when the certificate\n starts being valid\n notAfter - Timestamp (relative to now) when the certificate\n stops being valid\n digest - Digest method to use for signing, default is sha256\n Returns: The signed certificate in an X509 object"} {"_id": "q_4514", "text": "Builds a decorator that ensures that functions that rely on OpenSSL\n functions that are not present in this build raise NotImplementedError,\n rather than AttributeError coming out of cryptography.\n\n :param flag: A cryptography flag that guards the functions, e.g.\n ``Cryptography_HAS_NEXTPROTONEG``.\n :param error: The string to be used in the exception if the flag is false."} {"_id": "q_4515", "text": "Set the passphrase callback. This function will be called\n when a private key with a passphrase is loaded.\n\n :param callback: The Python callback to use. This must accept three\n positional arguments. First, an integer giving the maximum length\n of the passphrase it may return. If the returned passphrase is\n longer than this, it will be truncated. Second, a boolean value\n which will be true if the user should be prompted for the\n passphrase twice and the callback should verify that the two values\n supplied are equal. Third, the value given as the *userdata*\n parameter to :meth:`set_passwd_cb`. The *callback* must return\n a byte string. If an error occurs, *callback* should return a false\n value (e.g. 
an empty string).\n :param userdata: (optional) A Python object which will be given as\n argument to the callback\n :return: None"} {"_id": "q_4516", "text": "Load a certificate chain from a file.\n\n :param certfile: The name of the certificate chain file (``bytes`` or\n ``unicode``). Must be PEM encoded.\n\n :return: None"} {"_id": "q_4517", "text": "Load a certificate from a file\n\n :param certfile: The name of the certificate file (``bytes`` or\n ``unicode``).\n :param filetype: (optional) The encoding of the file, which is either\n :const:`FILETYPE_PEM` or :const:`FILETYPE_ASN1`. The default is\n :const:`FILETYPE_PEM`.\n\n :return: None"} {"_id": "q_4518", "text": "Load a certificate from a X509 object\n\n :param cert: The X509 object\n :return: None"} {"_id": "q_4519", "text": "Add certificate to chain\n\n :param certobj: The X509 certificate object to add to the chain\n :return: None"} {"_id": "q_4520", "text": "Load the trusted certificates that will be sent to the client. Does\n not actually imply any of the certificates are trusted; that must be\n configured separately.\n\n :param bytes cafile: The path to a certificates file in PEM format.\n :return: None"} {"_id": "q_4521", "text": "Load parameters for Ephemeral Diffie-Hellman\n\n :param dhfile: The file to load EDH parameters from (``bytes`` or\n ``unicode``).\n\n :return: None"} {"_id": "q_4522", "text": "Set the list of ciphers to be used in this context.\n\n See the OpenSSL manual for more information (e.g.\n :manpage:`ciphers(1)`).\n\n :param bytes cipher_list: An OpenSSL cipher string.\n :return: None"} {"_id": "q_4523", "text": "Set the list of preferred client certificate signers for this server\n context.\n\n This list of certificate authorities will be sent to the client when\n the server requests a client certificate.\n\n :param certificate_authorities: a sequence of X509Names.\n :return: None\n\n .. 
versionadded:: 0.10"} {"_id": "q_4524", "text": "Add the CA certificate to the list of preferred signers for this\n context.\n\n The list of certificate authorities will be sent to the client when the\n server requests a client certificate.\n\n :param certificate_authority: certificate authority's X509 certificate.\n :return: None\n\n .. versionadded:: 0.10"} {"_id": "q_4525", "text": "Specify the protocols that the client is prepared to speak after the\n TLS connection has been negotiated using Application Layer Protocol\n Negotiation.\n\n :param protos: A list of the protocols to be offered to the server.\n This list should be a Python list of bytestrings representing the\n protocols to offer, e.g. ``[b'http/1.1', b'spdy/2']``."} {"_id": "q_4526", "text": "Set a callback to provide OCSP data to be stapled to the TLS handshake\n on the server side.\n\n :param callback: The callback function. It will be invoked with two\n arguments: the Connection, and the optional arbitrary data you have\n provided. The callback must return a bytestring that contains the\n OCSP data to staple to the handshake. If no OCSP data is available\n for this connection, return the empty bytestring.\n :param data: Some opaque data that will be passed into the callback\n function when called. This can be used to avoid needing to do\n complex data lookups or to keep track of what context is being\n used. This parameter is optional."} {"_id": "q_4527", "text": "Set a callback to validate OCSP data stapled to the TLS handshake on\n the client side.\n\n :param callback: The callback function. It will be invoked with three\n arguments: the Connection, a bytestring containing the stapled OCSP\n assertion, and the optional arbitrary data you have provided. 
The\n callback must return a boolean that indicates the result of\n validating the OCSP data: ``True`` if the OCSP data is valid and\n the certificate can be trusted, or ``False`` if either the OCSP\n data is invalid or the certificate has been revoked.\n :param data: Some opaque data that will be passed into the callback\n function when called. This can be used to avoid needing to do\n complex data lookups or to keep track of what context is being\n used. This parameter is optional."} {"_id": "q_4528", "text": "Switch this connection to a new session context.\n\n :param context: A :class:`Context` instance giving the new session\n context to use."} {"_id": "q_4529", "text": "Receive data on the connection and copy it directly into the provided\n buffer, rather than creating a new string.\n\n :param buffer: The buffer to copy into.\n :param nbytes: (optional) The maximum number of bytes to read into the\n buffer. If not present, defaults to the size of the buffer. If\n larger than the size of the buffer, is reduced to the size of the\n buffer.\n :param flags: (optional) The only supported flag is ``MSG_PEEK``,\n all other flags are ignored.\n :return: The number of bytes read into the buffer."} {"_id": "q_4530", "text": "If the Connection was created with a memory BIO, this method can be\n used to read bytes from the write end of that memory BIO. Many\n Connection methods will add bytes which must be read in this manner or\n the buffer will eventually fill up and the Connection will be able to\n take no further actions.\n\n :param bufsiz: The maximum number of bytes to read\n :return: The string read."} {"_id": "q_4531", "text": "Renegotiate the session.\n\n :return: True if the renegotiation can be started, False otherwise\n :rtype: bool"} {"_id": "q_4532", "text": "Send the shutdown message to the Connection.\n\n :return: True if the shutdown completed successfully (i.e. 
both sides\n have sent closure alerts), False otherwise (in which case you\n call :meth:`recv` or :meth:`send` when the connection becomes\n readable/writeable)."} {"_id": "q_4533", "text": "Retrieve the list of ciphers used by the Connection object.\n\n :return: A list of native cipher strings."} {"_id": "q_4534", "text": "Retrieve the random value used with the server hello message.\n\n :return: A string representing the state"} {"_id": "q_4535", "text": "Retrieve the value of the master key for this session.\n\n :return: A string representing the state"} {"_id": "q_4536", "text": "Obtain the name of the currently used cipher.\n\n :returns: The name of the currently used cipher or :obj:`None`\n if no connection has been established.\n :rtype: :class:`unicode` or :class:`NoneType`\n\n .. versionadded:: 0.15"} {"_id": "q_4537", "text": "Obtain the number of secret bits of the currently used cipher.\n\n :returns: The number of secret bits of the currently used cipher\n or :obj:`None` if no connection has been established.\n :rtype: :class:`int` or :class:`NoneType`\n\n .. versionadded:: 0.15"} {"_id": "q_4538", "text": "Obtain the protocol version of the currently used cipher.\n\n :returns: The protocol name of the currently used cipher\n or :obj:`None` if no connection has been established.\n :rtype: :class:`unicode` or :class:`NoneType`\n\n .. versionadded:: 0.15"} {"_id": "q_4539", "text": "Retrieve the protocol version of the current connection.\n\n :returns: The TLS version of the current connection, for example\n the value for TLS 1.2 would be ``TLSv1.2`` or ``Unknown``\n for connections that were not successfully established.\n :rtype: :class:`unicode`"} {"_id": "q_4540", "text": "Get the protocol that was negotiated by NPN.\n\n :returns: A bytestring of the protocol name. If no protocol has been\n negotiated yet, returns an empty string.\n\n ..
versionadded:: 0.15"} {"_id": "q_4541", "text": "Get the protocol that was negotiated by ALPN.\n\n :returns: A bytestring of the protocol name. If no protocol has been\n negotiated yet, returns an empty string."} {"_id": "q_4542", "text": "Allocate a new OpenSSL memory BIO.\n\n Arrange for the garbage collector to clean it up automatically.\n\n :param buffer: None or some bytes to use to put into the BIO so that they\n can be read out."} {"_id": "q_4543", "text": "Copy the contents of an OpenSSL BIO object into a Python byte string."} {"_id": "q_4544", "text": "Retrieve the time value of an ASN1 time object.\n\n @param timestamp: An ASN1_GENERALIZEDTIME* (or an object safely castable to\n that type) from which the time value will be retrieved.\n\n @return: The time value from C{timestamp} as a L{bytes} string in a certain\n format. Or C{None} if the object contains no time value."} {"_id": "q_4545", "text": "Return a single curve object selected by name.\n\n See :py:func:`get_elliptic_curves` for information about curve objects.\n\n :param name: The OpenSSL short name identifying the curve object to\n retrieve.\n :type name: :py:class:`unicode`\n\n If the named curve is not supported then :py:class:`ValueError` is raised."} {"_id": "q_4546", "text": "Dump a public key to a buffer.\n\n :param type: The file type (one of :data:`FILETYPE_PEM` or\n :data:`FILETYPE_ASN1`).\n :param PKey pkey: The public key to dump\n :return: The buffer with the dumped key in it.\n :rtype: bytes"} {"_id": "q_4547", "text": "Verify the signature for a data string.\n\n :param cert: signing certificate (X509 object) corresponding to the\n private key which generated the signature.\n :param signature: signature returned by sign function\n :param data: data to be verified\n :param digest: message digest to use\n :return: ``None`` if the signature is correct, raise exception otherwise.\n\n .. 
versionadded:: 0.11"} {"_id": "q_4548", "text": "Export as a ``cryptography`` key.\n\n :rtype: One of ``cryptography``'s `key interfaces`_.\n\n .. _key interfaces: https://cryptography.io/en/latest/hazmat/\\\n primitives/asymmetric/rsa/#key-interfaces\n\n .. versionadded:: 16.1.0"} {"_id": "q_4549", "text": "Generate a key pair of the given type, with the given number of bits.\n\n This generates a key \"into\" this object.\n\n :param type: The key type.\n :type type: :py:data:`TYPE_RSA` or :py:data:`TYPE_DSA`\n :param bits: The number of bits.\n :type bits: :py:data:`int` ``>= 0``\n :raises TypeError: If :py:data:`type` or :py:data:`bits` isn't\n of the appropriate type.\n :raises ValueError: If the number of bits isn't an integer of\n the appropriate size.\n :return: ``None``"} {"_id": "q_4550", "text": "Check the consistency of an RSA private key.\n\n This is the Python equivalent of OpenSSL's ``RSA_check_key``.\n\n :return: ``True`` if key is consistent.\n\n :raise OpenSSL.crypto.Error: if the key is inconsistent.\n\n :raise TypeError: if the key is of a type which cannot be checked.\n Only RSA keys can currently be checked."} {"_id": "q_4551", "text": "Get the curves supported by OpenSSL.\n\n :param lib: The OpenSSL library binding object.\n\n :return: A :py:type:`set` of ``cls`` instances giving the names of the\n elliptic curves the underlying library supports."} {"_id": "q_4552", "text": "Create a new OpenSSL EC_KEY structure initialized to use this curve.\n\n The structure is automatically garbage collected when the Python object\n is garbage collected."} {"_id": "q_4553", "text": "Return the DER encoding of this name.\n\n :return: The DER encoded form of this name.\n :rtype: :py:class:`bytes`"} {"_id": "q_4554", "text": "Returns the short type name of this X.509 extension.\n\n The result is a byte string such as :py:const:`b\"basicConstraints\"`.\n\n :return: The short type name.\n :rtype: :py:data:`bytes`\n\n ..
versionadded:: 0.12"} {"_id": "q_4555", "text": "Returns the data of the X509 extension, encoded as ASN.1.\n\n :return: The ASN.1 encoded data of this X509 extension.\n :rtype: :py:data:`bytes`\n\n .. versionadded:: 0.12"} {"_id": "q_4556", "text": "Set the public key of the certificate signing request.\n\n :param pkey: The public key to use.\n :type pkey: :py:class:`PKey`\n\n :return: ``None``"} {"_id": "q_4557", "text": "Get the public key of the certificate signing request.\n\n :return: The public key.\n :rtype: :py:class:`PKey`"} {"_id": "q_4558", "text": "Return the subject of this certificate signing request.\n\n This creates a new :class:`X509Name` that wraps the underlying subject\n name field on the certificate signing request. Modifying it will modify\n the underlying signing request, and will have the effect of modifying\n any other :class:`X509Name` that refers to this subject.\n\n :return: The subject of this certificate signing request.\n :rtype: :class:`X509Name`"} {"_id": "q_4559", "text": "Add extensions to the certificate signing request.\n\n :param extensions: The X.509 extensions to add.\n :type extensions: iterable of :py:class:`X509Extension`\n :return: ``None``"} {"_id": "q_4560", "text": "Get X.509 extensions in the certificate signing request.\n\n :return: The X.509 extensions in this request.\n :rtype: :py:class:`list` of :py:class:`X509Extension` objects.\n\n .. versionadded:: 0.15"} {"_id": "q_4561", "text": "Export as a ``cryptography`` certificate.\n\n :rtype: ``cryptography.x509.Certificate``\n\n .. versionadded:: 17.1.0"} {"_id": "q_4562", "text": "Set the version number of the certificate. Note that the\n version value is zero-based, eg. 
a value of 0 is V1.\n\n :param version: The version number of the certificate.\n :type version: :py:class:`int`\n\n :return: ``None``"} {"_id": "q_4563", "text": "Get the public key of the certificate.\n\n :return: The public key.\n :rtype: :py:class:`PKey`"} {"_id": "q_4564", "text": "Set the public key of the certificate.\n\n :param pkey: The public key.\n :type pkey: :py:class:`PKey`\n\n :return: :py:data:`None`"} {"_id": "q_4565", "text": "Return the digest of the X509 object.\n\n :param digest_name: The name of the digest algorithm to use.\n :type digest_name: :py:class:`bytes`\n\n :return: The digest of the object, formatted as\n :py:const:`b\":\"`-delimited hex pairs.\n :rtype: :py:class:`bytes`"} {"_id": "q_4566", "text": "Set the serial number of the certificate.\n\n :param serial: The new serial number.\n :type serial: :py:class:`int`\n\n :return: :py:data:`None`"} {"_id": "q_4567", "text": "Return the serial number of this certificate.\n\n :return: The serial number.\n :rtype: int"} {"_id": "q_4568", "text": "Adjust the time stamp on which the certificate stops being valid.\n\n :param int amount: The number of seconds by which to adjust the\n timestamp.\n :return: ``None``"} {"_id": "q_4569", "text": "Return the issuer of this certificate.\n\n This creates a new :class:`X509Name` that wraps the underlying issuer\n name field on the certificate. Modifying it will modify the underlying\n certificate, and will have the effect of modifying any other\n :class:`X509Name` that refers to this issuer.\n\n :return: The issuer of this certificate.\n :rtype: :class:`X509Name`"} {"_id": "q_4570", "text": "Return the subject of this certificate.\n\n This creates a new :class:`X509Name` that wraps the underlying subject\n name field on the certificate.
Modifying it will modify the underlying\n certificate, and will have the effect of modifying any other\n :class:`X509Name` that refers to this subject.\n\n :return: The subject of this certificate.\n :rtype: :class:`X509Name`"} {"_id": "q_4571", "text": "Get a specific extension of the certificate by index.\n\n Extensions on a certificate are kept in order. The index\n parameter selects which extension will be returned.\n\n :param int index: The index of the extension to retrieve.\n :return: The extension at the specified index.\n :rtype: :py:class:`X509Extension`\n :raises IndexError: If the extension index was out of bounds.\n\n .. versionadded:: 0.12"} {"_id": "q_4572", "text": "Add a certificate revocation list to this store.\n\n The certificate revocation lists added to a store will only be used if\n the associated flags are configured to check certificate revocation\n lists.\n\n .. versionadded:: 16.1.0\n\n :param CRL crl: The certificate revocation list to add to this store.\n :return: ``None`` if the certificate revocation list was added\n successfully."} {"_id": "q_4573", "text": "Set up the store context for a subsequent verification operation.\n\n Calling this method more than once without first calling\n :meth:`_cleanup` will leak memory."} {"_id": "q_4574", "text": "Set the serial number.\n\n The serial number is formatted as a hexadecimal number encoded in\n ASCII.\n\n :param bytes hex_str: The new serial number.\n\n :return: ``None``"} {"_id": "q_4575", "text": "Get the serial number.\n\n The serial number is formatted as a hexadecimal number encoded in\n ASCII.\n\n :return: The serial number.\n :rtype: bytes"} {"_id": "q_4576", "text": "Set the reason of this revocation.\n\n If :data:`reason` is ``None``, delete the reason instead.\n\n :param reason: The reason string.\n :type reason: :class:`bytes` or :class:`NoneType`\n\n :return: ``None``\n\n .. 
seealso::\n\n :meth:`all_reasons`, which gives you a list of all supported\n reasons which you might pass to this method."} {"_id": "q_4577", "text": "Export as a ``cryptography`` CRL.\n\n :rtype: ``cryptography.x509.CertificateRevocationList``\n\n .. versionadded:: 17.1.0"} {"_id": "q_4578", "text": "Get the CRL's issuer.\n\n .. versionadded:: 16.1.0\n\n :rtype: X509Name"} {"_id": "q_4579", "text": "Sign the CRL.\n\n Signing a CRL enables clients to associate the CRL itself with an\n issuer. Before a CRL is meaningful to other OpenSSL functions, it must\n be signed by an issuer.\n\n This method implicitly sets the issuer's name based on the issuer\n certificate and private key used to sign the CRL.\n\n .. versionadded:: 16.1.0\n\n :param X509 issuer_cert: The issuer's certificate.\n :param PKey issuer_key: The issuer's private key.\n :param bytes digest: The digest method to sign the CRL with."} {"_id": "q_4580", "text": "Returns the type name of the PKCS7 structure\n\n :return: A string with the typename"} {"_id": "q_4581", "text": "Replace or set the CA certificates within the PKCS12 object.\n\n :param cacerts: The new CA certificates, or :py:const:`None` to unset\n them.\n :type cacerts: An iterable of :py:class:`X509` or :py:const:`None`\n\n :return: ``None``"} {"_id": "q_4582", "text": "Sign the certificate request with this key and digest type.\n\n :param pkey: The private key to sign with.\n :type pkey: :py:class:`PKey`\n\n :param digest: The message digest to use.\n :type digest: :py:class:`bytes`\n\n :return: ``None``"} {"_id": "q_4583", "text": "Generate a base64 encoded representation of this SPKI object.\n\n :return: The base64 encoded string.\n :rtype: :py:class:`bytes`"} {"_id": "q_4584", "text": "Get the public key of this certificate.\n\n :return: The public key.\n :rtype: :py:class:`PKey`"} {"_id": "q_4585", "text": "Set the public key of the certificate\n\n :param pkey: The public key\n :return: ``None``"} {"_id": "q_4586", "text": "If ``obj`` is 
text, emit a warning that it should be bytes instead and try\n to convert it to bytes automatically.\n\n :param str label: The name of the parameter from which ``obj`` was taken\n (so a developer can easily find the source of the problem and correct\n it).\n\n :return: If ``obj`` is the text string type, a ``bytes`` object giving the\n UTF-8 encoding of that text is returned. Otherwise, ``obj`` itself is\n returned."} {"_id": "q_4587", "text": "Returns a generator of \"Path\"s"} {"_id": "q_4588", "text": "Internal helper to provide color names."} {"_id": "q_4589", "text": "Serializes str, Attrib, or PathAttrib objects.\n\n Example::\n\n foobar"} {"_id": "q_4590", "text": "Serializes a list, where the values are objects of type\n str, Attrib, or PathAttrib.\n\n Example::\n\n text\n foobar\n foobar"} {"_id": "q_4591", "text": "Parse the event definition node, and return an instance of Event"} {"_id": "q_4592", "text": "Parse the messageEventDefinition node and return an instance of\n MessageEventDefinition"} {"_id": "q_4593", "text": "Parse the timerEventDefinition node and return an instance of\n TimerEventDefinition\n\n This currently only supports the timeDate node for specifying an expiry\n time for the timer."} {"_id": "q_4594", "text": "Called by the weak reference when its target dies.\n In other words, we can assert that self.weak_subscribers is not\n None at this time."} {"_id": "q_4595", "text": "Connects a taskspec that is executed if the condition DOES match.\n\n condition -- a condition (Condition)\n taskspec -- the conditional task spec"} {"_id": "q_4596", "text": "Runs the task. 
Should not be called directly.\n Returns True if completed, False otherwise."} {"_id": "q_4597", "text": "Returns True if the entire Workflow is completed, False otherwise.\n\n :rtype: bool\n :return: Whether the workflow is completed."} {"_id": "q_4598", "text": "Cancels all open tasks in the workflow.\n\n :type success: bool\n :param success: Whether the Workflow should be marked as successfully\n completed."} {"_id": "q_4599", "text": "Returns the task with the given id.\n\n :type id:integer\n :param id: The id of a task.\n :rtype: Task\n :returns: The task with the given id."} {"_id": "q_4600", "text": "Returns all tasks whose spec has the given name.\n\n :type name: str\n :param name: The name of a task spec.\n :rtype: Task\n :return: The task that relates to the spec with the given name."} {"_id": "q_4601", "text": "Runs the next task.\n Returns True if completed, False otherwise.\n\n :type pick_up: bool\n :param pick_up: When True, this method attempts to choose the next\n task not by searching beginning at the root, but by\n searching from the position at which the last call\n of complete_next() left off.\n :type halt_on_manual: bool\n :param halt_on_manual: When True, this method will not attempt to\n complete any tasks that have manual=True.\n See :meth:`SpiffWorkflow.specs.TaskSpec.__init__`\n :rtype: bool\n :returns: True if all tasks were completed, False otherwise."} {"_id": "q_4602", "text": "Create a new workflow instance from the given spec and arguments.\n\n :param workflow_spec: the workflow spec to use\n\n :param read_only: this should be in read only mode\n\n :param kwargs: Any extra kwargs passed to the deserialize_workflow\n method will be passed through here"} {"_id": "q_4603", "text": "Adds a new child and assigns the given TaskSpec to it.\n\n :type task_spec: TaskSpec\n :param task_spec: The task spec that is assigned to the new child.\n :type state: integer\n :param state: The bitmask of states for the new child.\n :rtype: Task\n 
:returns: The new child task."} {"_id": "q_4604", "text": "Assigns a new thread id to the task.\n\n :type recursive: bool\n :param recursive: Whether to assign the id to children recursively.\n :rtype: bool\n :returns: The new thread id."} {"_id": "q_4605", "text": "Returns the ancestor that has a task with the given task spec\n as a parent.\n If no such ancestor was found, the root task is returned.\n\n :type parent_task_spec: TaskSpec\n :param parent_task_spec: The wanted ancestor.\n :rtype: Task\n :returns: The child of the given ancestor."} {"_id": "q_4606", "text": "Returns the ancestor that has the given task spec assigned.\n If no such ancestor was found, the root task is returned.\n\n :type task_spec: TaskSpec\n :param task_spec: The wanted task spec.\n :rtype: Task\n :returns: The ancestor."} {"_id": "q_4607", "text": "Returns the ancestor that has a task with the given name assigned.\n Returns None if no such ancestor was found.\n\n :type name: str\n :param name: The name of the wanted task.\n :rtype: Task\n :returns: The ancestor."} {"_id": "q_4608", "text": "Returns a textual representation of this Task's state."} {"_id": "q_4609", "text": "Returns the subtree as a string for debugging.\n\n :rtype: str\n :returns: The debug information."} {"_id": "q_4610", "text": "Parses args and evaluates any Attrib entries"} {"_id": "q_4611", "text": "Parses kwargs and evaluates any Attrib entries"} {"_id": "q_4612", "text": "Sends a Celery asynchronous call and stores async call information for\n retrieval later"} {"_id": "q_4613", "text": "Abort celery task and retry it"} {"_id": "q_4614", "text": "Clear celery task data"} {"_id": "q_4615", "text": "Updates the branch such that all possible future routes are added.\n\n Should NOT be overwritten!
Instead, overwrite _predict_hook().\n\n :type my_task: Task\n :param my_task: The associated task in the task tree.\n :type seen: list[taskspec]\n :param seen: A list of already visited tasks.\n :type looked_ahead: integer\n :param looked_ahead: The depth of the predicted path so far."} {"_id": "q_4616", "text": "Return True on success, False otherwise.\n\n :type my_task: Task\n :param my_task: The associated task in the task tree."} {"_id": "q_4617", "text": "Creates the package, writing the data out to the provided file-like\n object."} {"_id": "q_4618", "text": "Writes a local file into the zip file and adds it to the manifest\n dictionary\n\n :param filename: The zip file name\n\n :param src_filename: the local file name"} {"_id": "q_4619", "text": "Adds the SVG files to the archive for this BPMN file."} {"_id": "q_4620", "text": "Utility method to merge an option and config, with the option taking\n precedence"} {"_id": "q_4621", "text": "Parses the specified child task node, and returns the task spec. This\n can be called by a TaskParser instance that is owned by this\n ProcessParser."} {"_id": "q_4622", "text": "Reads the \"pre-assign\" or \"post-assign\" tag from the given node.\n\n start_node -- the xml node (xml.dom.minidom.Node)"} {"_id": "q_4623", "text": "Reads the conditional statement from the given node.\n\n workflow -- the workflow with which the concurrence is associated\n start_node -- the xml structure (xml.dom.minidom.Node)"} {"_id": "q_4624", "text": "Reads the workflow from the given XML structure and returns a\n WorkflowSpec instance."} {"_id": "q_4625", "text": "Called by a task spec when it was added into the workflow."} {"_id": "q_4626", "text": "Checks integrity of workflow and reports any problems with it.\n\n Detects:\n - loops (tasks that wait on each other in a loop)\n :returns: empty list if valid, a list of errors if not"} {"_id": "q_4627", "text": "Indicate to the workflow that a message has been received.
The message\n will be processed by any waiting Intermediate or Boundary Message\n Events that are waiting for the message."} {"_id": "q_4628", "text": "Deserializes the trigger using the provided serializer."} {"_id": "q_4629", "text": "Evaluate the given expression, within the context of the given task and\n return the result."} {"_id": "q_4630", "text": "Checks whether the preconditions for going to READY state are met.\n Returns True if the threshold was reached, False otherwise.\n Also returns the list of tasks that still need to be completed."} {"_id": "q_4631", "text": "Connects the task spec that is executed if no other condition\n matches.\n\n :type task_spec: TaskSpec\n :param task_spec: The following task spec."} {"_id": "q_4632", "text": "Return extra config options to be passed to the TrelloIssue class"} {"_id": "q_4633", "text": "A wrapper around get_comments that builds the taskwarrior\n annotations."} {"_id": "q_4634", "text": "Get the list of boards to pull cards from. If the user gave a value to\n trello.include_boards use that, otherwise ask the Trello API for the\n user's boards."} {"_id": "q_4635", "text": "Build the full url to the API endpoint"} {"_id": "q_4636", "text": "Grab all issues matching a github query"} {"_id": "q_4637", "text": "Grab all the pull requests"} {"_id": "q_4638", "text": "Return a main config value, or default if it does not exist."} {"_id": "q_4639", "text": "Validate generic options for a particular target"} {"_id": "q_4640", "text": "Return true if the issue in question should be included"} {"_id": "q_4641", "text": "Make an RST-compatible table\n\n From http://stackoverflow.com/a/12539081"} {"_id": "q_4642", "text": "Retrieve password from the given command"} {"_id": "q_4643", "text": "Accepts both integers and empty values."} {"_id": "q_4644", "text": "Pull down tasks from forges and add them to your taskwarrior tasks.\n\n Relies on configuration in bugwarriorrc"} {"_id": "q_4645", "text": "Pages through an object
collection from the bitbucket API.\n Returns an iterator that lazily goes through all the 'values'\n of all the pages in the collection."} {"_id": "q_4646", "text": "Returns a list of UDAs defined by given targets\n\n For all targets in `targets`, build a dictionary of configuration overrides\n representing the UDAs defined by the passed-in services (`targets`).\n\n Given a hypothetical situation in which you have two services, the first\n of which defining a UDA named 'serviceAid' (\"Service A ID\", string) and\n a second service defining two UDAs named 'serviceBproject'\n (\"Service B Project\", string) and 'serviceBnumber'\n (\"Service B Number\", numeric), this would return the following structure::\n\n {\n 'uda': {\n 'serviceAid': {\n 'label': 'Service A ID',\n 'type': 'string',\n },\n 'serviceBproject': {\n 'label': 'Service B Project',\n 'type': 'string',\n },\n 'serviceBnumber': {\n 'label': 'Service B Number',\n 'type': 'numeric',\n }\n }\n }"} {"_id": "q_4647", "text": "Parse the big ugly sprint string stored by JIRA.\n\n They look like:\n com.atlassian.greenhopper.service.sprint.Sprint@4c9c41a5[id=2322,rapid\n ViewId=1173,state=ACTIVE,name=Sprint 1,startDate=2016-09-06T16:08:07.4\n 55Z,endDate=2016-09-23T16:08:00.000Z,completeDate=,sequence=2322]"} {"_id": "q_4648", "text": "Initialize a new state file with the given contents.\n This function fails in case the state file already exists."} {"_id": "q_4649", "text": "Update the current state file with the specified contents"} {"_id": "q_4650", "text": "Try to load a blockade state file in the current directory"} {"_id": "q_4651", "text": "Generate a new blockade ID based on the CWD"} {"_id": "q_4652", "text": "Make sure the state directory exists"} {"_id": "q_4653", "text": "Try to delete the state.yml file and the folder .blockade"} {"_id": "q_4654", "text": "Convert blockade ID and container information into\n a state dictionary object."} {"_id": "q_4655", "text": "Write the given state information into 
a file"} {"_id": "q_4656", "text": "Validate the partitions of containers. If there are any containers\n not in any partition, place them in a new partition."} {"_id": "q_4657", "text": "Get a map of blockade chain IDs -> list of IPs targeted at them\n\n For figuring out which container is in which partition"} {"_id": "q_4658", "text": "Start the timer waiting for pain"} {"_id": "q_4659", "text": "Start the blockade event"} {"_id": "q_4660", "text": "Stop chaos when there is no current blockade operation"} {"_id": "q_4661", "text": "Stop chaos while there is a blockade event in progress"} {"_id": "q_4662", "text": "Delete all state associated with the chaos session"} {"_id": "q_4663", "text": "Sort a dictionary or list of containers into dependency order\n\n Returns a sequence"} {"_id": "q_4664", "text": "Convert a dictionary of configuration values\n into a sequence of BlockadeContainerConfig instances"} {"_id": "q_4665", "text": "Start the containers and link them together"} {"_id": "q_4666", "text": "Destroy all containers and restore networks"} {"_id": "q_4667", "text": "Kill some or all containers"} {"_id": "q_4668", "text": "Fetch the logs of a container"} {"_id": "q_4669", "text": "Start the Blockade REST API"} {"_id": "q_4670", "text": "Add one or more existing Docker containers to a Blockade group"} {"_id": "q_4671", "text": "Get the event log for a given blockade"} {"_id": "q_4672", "text": "Efficient way to compute highly repetitive scoring\n i.e. sequences are involved multiple times\n\n Args:\n sequences(list[str]): list of sequences (either hyp or ref)\n scores_ids(list[tuple(int)]): list of pairs (hyp_id, ref_id)\n ie. 
scores[i] = rouge_n(scores_ids[i][0],\n scores_ids[i][1])\n\n Returns:\n scores: list of length `len(scores_ids)` containing rouge `n`\n scores as a dict with 'f', 'r', 'p'\n Raises:\n KeyError: if there's a value of i in scores_ids that is not in\n [0, len(sequences)["} {"_id": "q_4673", "text": "Performs the actual evaluation of Flask-CORS options and actually\n modifies the response object.\n\n This function is used both in the decorator and the after_request\n callback"} {"_id": "q_4674", "text": "Safely attempts to match a pattern or string to a request origin."} {"_id": "q_4675", "text": "Compute CORS options for an application by combining the DEFAULT_OPTIONS,\n the app's configuration-specified options and any dictionaries passed. The\n last specified option wins."} {"_id": "q_4676", "text": "A helper method to serialize and process the options dictionary."} {"_id": "q_4677", "text": "This function is the decorator used to wrap a Flask route.\n In the simplest case, simply use the default parameters to allow all\n origins in what is the most permissive configuration. 
If this method\n modifies state or performs authentication which may be brute-forced, you\n should add some degree of protection, such as Cross-Site Request\n Forgery protection.\n\n :param origins:\n The origin, or list of origins to allow requests from.\n The origin(s) may be regular expressions, case-sensitive strings,\n or else an asterisk\n\n Default : '*'\n :type origins: list, string or regex\n\n :param methods:\n The method or list of methods which the allowed origins are allowed to\n access for non-simple requests.\n\n Default : [GET, HEAD, POST, OPTIONS, PUT, PATCH, DELETE]\n :type methods: list or string\n\n :param expose_headers:\n The header or list which are safe to expose to the API of a CORS API\n specification.\n\n Default : None\n :type expose_headers: list or string\n\n :param allow_headers:\n The header or list of header field names which can be used when this\n resource is accessed by allowed origins. The header(s) may be regular\n expressions, case-sensitive strings, or else an asterisk.\n\n Default : '*', allow all headers\n :type allow_headers: list, string or regex\n\n :param supports_credentials:\n Allows users to make authenticated requests. If true, injects the\n `Access-Control-Allow-Credentials` header in responses. This allows\n cookies and credentials to be submitted across domains.\n\n :note: This option cannot be used in conjunction with a '*' origin\n\n Default : False\n :type supports_credentials: bool\n\n :param max_age:\n The maximum time for which this CORS request may be cached. 
This value\n is set as the `Access-Control-Max-Age` header.\n\n Default : None\n :type max_age: timedelta, integer, string or None\n\n :param send_wildcard: If True, and the origins parameter is `*`, a wildcard\n `Access-Control-Allow-Origin` header is sent, rather than the\n request's `Origin` header.\n\n Default : False\n :type send_wildcard: bool\n\n :param vary_header:\n If True, the header Vary: Origin will be returned as per the W3\n implementation guidelines.\n\n Setting this header when the `Access-Control-Allow-Origin` is\n dynamically generated (e.g. when there is more than one allowed\n origin, and an Origin other than '*' is returned) informs CDNs and other\n caches that the CORS headers are dynamic, and cannot be cached.\n\n If False, the Vary header will never be injected or altered.\n\n Default : True\n :type vary_header: bool\n\n :param automatic_options:\n Only applies to the `cross_origin` decorator. If True, Flask-CORS will\n override Flask's default OPTIONS handling to return CORS headers for\n OPTIONS requests.\n\n Default : True\n :type automatic_options: bool"} {"_id": "q_4678", "text": "This call returns an array of mutual fund symbols that IEX Cloud supports for API calls.\n\n https://iexcloud.io/docs/api/#mutual-fund-symbols\n 8am, 9am, 12pm, 1pm UTC daily\n\n Args:\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"} {"_id": "q_4679", "text": "This call returns an array of OTC symbols that IEX Cloud supports for API calls.\n\n https://iexcloud.io/docs/api/#otc-symbols\n 8am, 9am, 12pm, 1pm UTC daily\n\n Args:\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"} {"_id": "q_4680", "text": "for backwards compat, accepting token and version but ignoring"} {"_id": "q_4681", "text": "for iex cloud"} {"_id": "q_4682", "text": "News about market\n\n https://iexcloud.io/docs/api/#news\n Continuous\n\n Args:\n count (int): limit number of results\n token (string); 
Access token\n version (string); API version\n\n Returns:\n DataFrame: result"} {"_id": "q_4683", "text": "Returns the official open and close for whole market.\n\n https://iexcloud.io/docs/api/#news\n 9:30am-5pm ET Mon-Fri\n\n Args:\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"} {"_id": "q_4684", "text": "This returns previous day adjusted price data for whole market\n\n https://iexcloud.io/docs/api/#previous-day-prices\n Available after 4am ET Tue-Sat\n\n Args:\n symbol (string); Ticker to request\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"} {"_id": "q_4685", "text": "Stock split history\n\n https://iexcloud.io/docs/api/#splits\n Updated at 9am UTC every day\n\n Args:\n symbol (string); Ticker to request\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"} {"_id": "q_4686", "text": "This will return an array of quotes for all Cryptocurrencies supported by the IEX API. Each element is a standard quote object with four additional keys.\n\n https://iexcloud.io/docs/api/#crypto\n\n Args:\n token (string); Access token\n version (string); API version\n\n Returns:\n DataFrame: result"} {"_id": "q_4687", "text": "Benjamini-Hochberg FDR correction. Inspired by statsmodels."} {"_id": "q_4688", "text": "Standardize the mean and variance of the data axis.\n\n :param data2d: DataFrame to normalize.\n :param axis: int, Which axis to normalize across. If 0, normalize across rows,\n if 1, normalize across columns. If None, don't change data\n \n :Returns: Normalized DataFrame. Normalized data with a mean of 0 and variance of 1\n across the specified axis."} {"_id": "q_4689", "text": "Prepare argparser object. 
New options will be added in this function first."} {"_id": "q_4690", "text": "Add function 'prerank' argument parsers."} {"_id": "q_4691", "text": "Add function 'plot' argument parsers."} {"_id": "q_4692", "text": "This is the most important function of GSEApy. It implements the same algorithm as GSEA and ssGSEA.\n\n :param gene_list: The ordered gene list gene_name_list, rank_metric.index.values\n :param gene_set: gene_sets in gmt file, please use gsea_gmt_parser to get gene_set.\n :param weighted_score_type: It is the same as GSEA's weighted_score method. Weighting by the correlation\n is a very reasonable choice that allows significant gene sets with less than perfect coherence.\n options: 0(classic),1,1.5,2. default:1. if one is interested in penalizing sets for lack of\n coherence or to discover sets with any type of nonrandom distribution of tags, a value p < 1\n might be appropriate. On the other hand, if one uses sets with a large number of genes and only\n a small subset of those is expected to be coherent, then one could consider using p > 1.\n Our recommendation is to use p = 1 and use other settings only if you are very experienced\n with the method and its behavior.\n\n :param correl_vector: A vector with the correlations (e.g. signal to noise scores) corresponding to the genes in\n the gene list. Or rankings, rank_metric.values\n :param nperm: Only use this parameter when computing esnull for statistical testing. Set the esnull value\n equal to the permutation number.\n :param rs: Random state for initializing gene list shuffling. 
Default: np.random.RandomState(seed=None)\n\n :return:\n\n ES: Enrichment score (real number between -1 and +1)\n\n ESNULL: Enrichment score calculated from random permutations.\n\n Hits_Indices: Index of a gene in gene_list, if gene is included in gene_set.\n\n RES: Numerical vector containing the running enrichment score for all locations in the gene list."} {"_id": "q_4693", "text": "Build shuffled ranking matrix when permutation_type equals phenotype.\n\n :param exprs: gene_expression DataFrame, gene_name indexed.\n :param str method: calculate correlation or ranking. methods including:\n 1. 'signal_to_noise'.\n 2. 't_test'.\n 3. 'ratio_of_classes' (also referred to as fold change).\n 4. 'diff_of_classes'.\n 5. 'log2_ratio_of_classes'.\n :param int permutation_num: how many times the class labels are shuffled\n :param str pos: one of labels of phenotype's names.\n :param str neg: one of labels of phenotype's names.\n :param list classes: a list of phenotype labels, to specify which column of\n dataframe belongs to what class of phenotype.\n :param bool ascending: bool. Sort ascending vs. descending.\n\n :return:\n returns two 2d ndarray with shape (nperm, gene_num).\n\n | cor_mat_indices: the indices of the sorted and permuted (excluding last row) ranking matrix.\n | cor_mat: sorted and permuted (excluding last row) ranking matrix."} {"_id": "q_4694", "text": "Compute nominal pvals, normalized ES, and FDR q value.\n\n For a given NES(S) = NES* >= 0. The FDR is the ratio of the percentage of all (S,pi) with\n NES(S,pi) >= 0, whose NES(S,pi) >= NES*, divided by the percentage of\n observed S with NES(S) >= 0, whose NES(S) >= NES*, and similarly if NES(S) = NES* <= 0."} {"_id": "q_4695", "text": "Get available marts and their names."} {"_id": "q_4696", "text": "mapping ids using BioMart. 
\n\n :param dataset: str, default: 'hsapiens_gene_ensembl'\n :param attributes: str, list, tuple\n :param filters: dict, {'filter name': list(filter value)}\n :param host: www.ensembl.org, asia.ensembl.org, useast.ensembl.org\n :return: a dataframe contains all attributes you selected.\n\n **Note**: it will take a couple of minutes to get the results.\n A xml template for querying biomart. (see https://gist.github.com/keithshep/7776579)\n \n exampleTaxonomy = \"mmusculus_gene_ensembl\"\n exampleGene = \"ENSMUSG00000086981,ENSMUSG00000086982,ENSMUSG00000086983\"\n urlTemplate = \\\n '''http://ensembl.org/biomart/martservice?query=''' \\\n '''''' \\\n '''''' \\\n '''''' \\\n '''''' \\\n '''''' \\\n '''''' \\\n '''''' \\\n '''''' \\\n '''''' \n \n exampleURL = urlTemplate % (exampleTaxonomy, exampleGene)\n req = requests.get(exampleURL, stream=True)"} {"_id": "q_4697", "text": "Run Gene Set Enrichment Analysis with single sample GSEA tool\n\n :param data: Expression table, pd.Series, pd.DataFrame, GCT file, or .rnk file format.\n :param gene_sets: Enrichr Library name or .gmt gene sets file or dict of gene sets. Same input with GSEA.\n :param outdir: Results output directory.\n :param str sample_norm_method: \"Sample normalization method. Choose from {'rank', 'log', 'log_rank'}. Default: rank.\n\n 1. 'rank': Rank your expression data, and transform by 10000*rank_dat/gene_numbers\n 2. 'log' : Do not rank, but transform data by log(data + exp(1)), while data = data[data<1] =1.\n 3. 'log_rank': Rank your expression data, and transform by log(10000*rank_dat/gene_numbers+ exp(1))\n 4. 'custom': Do nothing, and use your own rank value to calculate enrichment score.\n \n see here: https://github.com/GSEA-MSigDB/ssGSEAProjection-gpmodule/blob/master/src/ssGSEAProjection.Library.R, line 86\n\n :param int min_size: Minimum allowed number of genes from gene set also the data set. Default: 15.\n :param int max_size: Maximum allowed number of genes from gene set also the data set. 
Default: 2000.\n :param int permutation_num: Number of permutations for significance computation. Default: 0.\n :param str weighted_score_type: Refer to :func:`algorithm.enrichment_score`. Default:0.25.\n :param bool scale: If True, normalize the scores by number of genes in the gene sets.\n :param bool ascending: Sorting order of rankings. Default: False.\n :param int processes: Number of Processes you are going to use. Default: 1.\n :param list figsize: Matplotlib figsize, accept a tuple or list, e.g. [width,height]. Default: [7,6].\n :param str format: Matplotlib figure format. Default: 'pdf'.\n :param int graph_num: Plot graphs for top sets of each phenotype.\n :param bool no_plot: If equals to True, no figure will be drawn. Default: False.\n :param seed: Random seed. expect an integer. Default:None.\n :param bool verbose: Bool, increase output verbosity, print out progress of your job, Default: False.\n\n :return: Return a ssGSEA obj. \n All results store to a dictionary, access enrichment score by obj.resultsOnSamples,\n and normalized enrichment score by obj.res2d.\n if permutation_num > 0, additional results contain::\n\n | {es: enrichment score,\n | nes: normalized enrichment score,\n | p: P-value,\n | fdr: FDR,\n | size: gene set size,\n | matched_size: genes matched to the data,\n | genes: gene names from the data set\n | ledge_genes: leading edge genes, if permutation_num >0}"} {"_id": "q_4698", "text": "Run Gene Set Enrichment Analysis with pre-ranked correlation defined by user.\n\n :param rnk: pre-ranked correlation table or pandas DataFrame. Same input with ``GSEA`` .rnk file.\n :param gene_sets: Enrichr Library name or .gmt gene sets file or dict of gene sets. Same input with GSEA.\n :param outdir: results output directory.\n :param int permutation_num: Number of permutations for significance computation. Default: 1000.\n :param int min_size: Minimum allowed number of genes from gene set also the data set. 
Default: 15.\n :param int max_size: Maximum allowed number of genes from gene set also the data set. Defaults: 500.\n :param str weighted_score_type: Refer to :func:`algorithm.enrichment_score`. Default:1.\n :param bool ascending: Sorting order of rankings. Default: False.\n :param int processes: Number of Processes you are going to use. Default: 1.\n :param list figsize: Matplotlib figsize, accept a tuple or list, e.g. [width,height]. Default: [6.5,6].\n :param str format: Matplotlib figure format. Default: 'pdf'.\n :param int graph_num: Plot graphs for top sets of each phenotype.\n :param bool no_plot: If equals to True, no figure will be drawn. Default: False.\n :param seed: Random seed. expect an integer. Default:None.\n :param bool verbose: Bool, increase output verbosity, print out progress of your job, Default: False.\n\n :return: Return a Prerank obj. All results store to a dictionary, obj.results,\n where contains::\n\n | {es: enrichment score,\n | nes: normalized enrichment score,\n | p: P-value,\n | fdr: FDR,\n | size: gene set size,\n | matched_size: genes matched to the data,\n | genes: gene names from the data set\n | ledge_genes: leading edge genes}"} {"_id": "q_4699", "text": "The main function to reproduce GSEA desktop outputs.\n\n :param indir: GSEA desktop results directory. In the sub folder, you must contain edb file folder.\n :param outdir: Output directory.\n :param float weighted_score_type: weighted score type. choose from {0,1,1.5,2}. Default: 1.\n :param list figsize: Matplotlib output figure figsize. Default: [6.5,6].\n :param str format: Matplotlib output figure format. Default: 'pdf'.\n :param int min_size: Min size of input genes presented in Gene Sets. Default: 3.\n :param int max_size: Max size of input genes presented in Gene Sets. 
Default: 5000.\n You are not encouraged to use min_size, or max_size argument in :func:`replot` function.\n Because gmt file has already been filtered.\n :param verbose: Bool, increase output verbosity, print out progress of your job, Default: False.\n\n :return: Generate new figures with selected figure format. Default: 'pdf'."} {"_id": "q_4700", "text": "load gene set dict"} {"_id": "q_4701", "text": "download enrichr libraries."} {"_id": "q_4702", "text": "GSEA main procedure"} {"_id": "q_4703", "text": "GSEA prerank workflow"} {"_id": "q_4704", "text": "Single Sample GSEA workflow.\n multiprocessing utility on samples."} {"_id": "q_4705", "text": "main replot function"} {"_id": "q_4706", "text": "Enrichr API.\n\n :param gene_list: Flat file with list of genes, one gene id per row, or a python list object\n :param gene_sets: Enrichr Library to query. Required enrichr library name(s). Separate each name by comma.\n :param organism: Enrichr supported organism. Select from (human, mouse, yeast, fly, fish, worm).\n see here for details: https://amp.pharm.mssm.edu/modEnrichr\n :param description: name of analysis. optional.\n :param outdir: Output file directory\n :param float cutoff: Adjusted P-value (benjamini-hochberg correction) cutoff. Default: 0.05\n :param int background: BioMart dataset name for retrieving background gene information.\n This argument only works when gene_sets input is a gmt file or python dict.\n You could also specify a number by yourself, e.g. 
total expressed genes number.\n In this case, you will skip retrieving background infos from biomart.\n \n Use the code below to see valid background dataset names from BioMart.\n Here are example code:\n >>> from gseapy.parser import Biomart \n >>> bm = Biomart(verbose=False, host=\"asia.ensembl.org\")\n >>> ## view validated marts\n >>> marts = bm.get_marts()\n >>> ## view validated dataset\n >>> datasets = bm.get_datasets(mart='ENSEMBL_MART_ENSEMBL')\n\n :param str format: Output figure format supported by matplotlib,('pdf','png','eps'...). Default: 'pdf'.\n :param list figsize: Matplotlib figsize, accept a tuple or list, e.g. (width,height). Default: (6.5,6).\n :param bool no_plot: If equals to True, no figure will be drawn. Default: False.\n :param bool verbose: Increase output verbosity, print out progress of your job, Default: False.\n\n :return: An Enrichr object, which obj.res2d stores your last query, obj.results stores your all queries."} {"_id": "q_4707", "text": "parse gene list"} {"_id": "q_4708", "text": "send gene list to enrichr server"} {"_id": "q_4709", "text": "Compare the genes sent and received to get successfully recognized genes"} {"_id": "q_4710", "text": "get background gene"} {"_id": "q_4711", "text": "Perform the App's actions as configured."} {"_id": "q_4712", "text": "Initializes client id and client secret based on the settings.\n\n Args:\n settings_instance: An instance of ``django.conf.settings``.\n\n Returns:\n A 2-tuple, the first item is the client id and the second\n item is the client secret."} {"_id": "q_4713", "text": "Gets a Credentials storage object provided by the Django OAuth2 Helper\n object.\n\n Args:\n request: Reference to the current request object.\n\n Returns:\n An :class:`oauth2.client.Storage` object."} {"_id": "q_4714", "text": "Helper method to create a redirect response with URL params.\n\n This builds a redirect string that converts kwargs into a\n query string.\n\n Args:\n url_name: The name of the url to 
redirect to.\n kwargs: the query string params and their values to build.\n\n Returns:\n A properly formatted redirect string."} {"_id": "q_4715", "text": "Gets the authorized credentials for this flow, if they exist."} {"_id": "q_4716", "text": "Returns the scopes associated with this object, kept up to\n date for incremental auth."} {"_id": "q_4717", "text": "Retrieve stored credential.\n\n Returns:\n A :class:`oauth2client.Credentials` instance or `None`."} {"_id": "q_4718", "text": "Write credentials to the SQLAlchemy datastore.\n\n Args:\n credentials: :class:`oauth2client.Credentials`"} {"_id": "q_4719", "text": "Delete credentials from the SQLAlchemy datastore."} {"_id": "q_4720", "text": "Utility function that creates JSON repr. of a credentials object.\n\n Over-ride is needed since PKCS#12 keys will not in general be JSON\n serializable.\n\n Args:\n strip: array, An array of names of members to exclude from the\n JSON.\n to_serialize: dict, (Optional) The properties for this object\n that will be serialized. 
This allows callers to\n modify before serializing.\n\n Returns:\n string, a JSON representation of this instance, suitable to pass to\n from_json()."} {"_id": "q_4721", "text": "Helper for factory constructors from JSON keyfile.\n\n Args:\n keyfile_dict: dict-like object, The parsed dictionary-like object\n containing the contents of the JSON keyfile.\n scopes: List or string, Scopes to use when acquiring an\n access token.\n token_uri: string, URI for OAuth 2.0 provider token endpoint.\n If unset and not present in keyfile_dict, defaults\n to Google's endpoints.\n revoke_uri: string, URI for OAuth 2.0 provider revoke endpoint.\n If unset and not present in keyfile_dict, defaults\n to Google's endpoints.\n\n Returns:\n ServiceAccountCredentials, a credentials object created from\n the keyfile contents.\n\n Raises:\n ValueError, if the credential type is not :data:`SERVICE_ACCOUNT`.\n KeyError, if one of the expected keys is not present in\n the keyfile."} {"_id": "q_4722", "text": "Factory constructor from JSON keyfile by name.\n\n Args:\n filename: string, The location of the keyfile.\n scopes: List or string, (Optional) Scopes to use when acquiring an\n access token.\n token_uri: string, URI for OAuth 2.0 provider token endpoint.\n If unset and not present in the key file, defaults\n to Google's endpoints.\n revoke_uri: string, URI for OAuth 2.0 provider revoke endpoint.\n If unset and not present in the key file, defaults\n to Google's endpoints.\n\n Returns:\n ServiceAccountCredentials, a credentials object created from\n the keyfile.\n\n Raises:\n ValueError, if the credential type is not :data:`SERVICE_ACCOUNT`.\n KeyError, if one of the expected keys is not present in\n the keyfile."} {"_id": "q_4723", "text": "Factory constructor from parsed JSON keyfile.\n\n Args:\n keyfile_dict: dict-like object, The parsed dictionary-like object\n containing the contents of the JSON keyfile.\n scopes: List or string, (Optional) Scopes to use when acquiring an\n access 
token.\n token_uri: string, URI for OAuth 2.0 provider token endpoint.\n If unset and not present in keyfile_dict, defaults\n to Google's endpoints.\n revoke_uri: string, URI for OAuth 2.0 provider revoke endpoint.\n If unset and not present in keyfile_dict, defaults\n to Google's endpoints.\n\n Returns:\n ServiceAccountCredentials, a credentials object created from\n the keyfile.\n\n Raises:\n ValueError, if the credential type is not :data:`SERVICE_ACCOUNT`.\n KeyError, if one of the expected keys is not present in\n the keyfile."} {"_id": "q_4724", "text": "Generate the assertion that will be used in the request."} {"_id": "q_4725", "text": "Deserialize a JSON-serialized instance.\n\n Inverse to :meth:`to_json`.\n\n Args:\n json_data: dict or string, Serialized JSON (as a string or an\n already parsed dictionary) representing a credential.\n\n Returns:\n ServiceAccountCredentials from the serialized data."} {"_id": "q_4726", "text": "Create credentials that specify additional claims.\n\n Args:\n claims: dict, key-value pairs for claims.\n\n Returns:\n ServiceAccountCredentials, a copy of the current service account\n credentials with updated claims to use when obtaining access\n tokens."} {"_id": "q_4727", "text": "Create a signed jwt.\n\n Args:\n http: unused\n additional_claims: dict, additional claims to add to\n the payload of the JWT.\n Returns:\n An AccessTokenInfo with the signed jwt"} {"_id": "q_4728", "text": "Determine if the current environment is Compute Engine.\n\n Returns:\n Boolean indicating whether or not the current environment is Google\n Compute Engine."} {"_id": "q_4729", "text": "Detects if the code is running in the App Engine environment.\n\n Returns:\n True if running in the GAE environment, False otherwise."} {"_id": "q_4730", "text": "Detect if the code is running in the Compute Engine environment.\n\n Returns:\n True if running in the GCE environment, False otherwise."} {"_id": "q_4731", "text": "Saves a file with read-write 
permissions on for the owner.\n\n Args:\n filename: String. Absolute path to file.\n json_contents: JSON serializable object to be saved."} {"_id": "q_4732", "text": "Save the provided GoogleCredentials to the well known file.\n\n Args:\n credentials: the credentials to be saved to the well known file;\n it should be an instance of GoogleCredentials\n well_known_file: the name of the file where the credentials are to be\n saved; this parameter is supposed to be used for\n testing only"} {"_id": "q_4733", "text": "Get the well known file produced by command 'gcloud auth login'."} {"_id": "q_4734", "text": "Build the Application Default Credentials from file."} {"_id": "q_4735", "text": "Verifies a signed JWT id_token.\n\n This function requires PyOpenSSL and because of that it does not work on\n App Engine.\n\n Args:\n id_token: string, A Signed JWT.\n audience: string, The audience 'aud' that the token should be for.\n http: httplib2.Http, instance to use to make the HTTP request. Callers\n should supply an instance that has caching enabled.\n cert_uri: string, URI of the certificates in JSON format to\n verify the JWT against.\n\n Returns:\n The deserialized JSON in the JWT.\n\n Raises:\n oauth2client.crypt.AppIdentityError: if the JWT fails to verify.\n CryptoUnavailableError: if no crypto library is available."} {"_id": "q_4736", "text": "Exchanges an authorization code for an OAuth2Credentials object.\n\n Args:\n client_id: string, client identifier.\n client_secret: string, client secret.\n scope: string or iterable of strings, scope(s) to request.\n code: string, An authorization code, most likely passed down from\n the client\n redirect_uri: string, this is generally set to 'postmessage' to match\n the redirect_uri that the client specified\n http: httplib2.Http, optional http instance to use to do the fetch\n token_uri: string, URI for token endpoint. 
For convenience defaults\n to Google's endpoints but any OAuth 2.0 provider can be\n used.\n auth_uri: string, URI for authorization endpoint. For convenience\n defaults to Google's endpoints but any OAuth 2.0 provider\n can be used.\n revoke_uri: string, URI for revoke endpoint. For convenience\n defaults to Google's endpoints but any OAuth 2.0 provider\n can be used.\n device_uri: string, URI for device authorization endpoint. For\n convenience defaults to Google's endpoints but any OAuth\n 2.0 provider can be used.\n pkce: boolean, default: False, Generate and include a \"Proof Key\n for Code Exchange\" (PKCE) with your authorization and token\n requests. This adds security for installed applications that\n cannot protect a client_secret. See RFC 7636 for details.\n code_verifier: bytestring or None, default: None, parameter passed\n as part of the code exchange when pkce=True. If\n None, a code_verifier will automatically be\n generated as part of step1_get_authorize_url(). See\n RFC 7636 for details.\n\n Returns:\n An OAuth2Credentials object.\n\n Raises:\n FlowExchangeError if the authorization code cannot be exchanged for an\n access token"} {"_id": "q_4737", "text": "Returns OAuth2Credentials from a clientsecrets file and an auth code.\n\n Will create the right kind of Flow based on the contents of the\n clientsecrets file or will raise InvalidClientSecretsError for unknown\n types of Flows.\n\n Args:\n filename: string, File name of clientsecrets.\n scope: string or iterable of strings, scope(s) to request.\n code: string, An authorization code, most likely passed down from\n the client\n message: string, A friendly string to display to the user if the\n clientsecrets file is missing or invalid. 
If message is\n provided then sys.exit will be called in the case of an error.\n If message in not provided then\n clientsecrets.InvalidClientSecretsError will be raised.\n redirect_uri: string, this is generally set to 'postmessage' to match\n the redirect_uri that the client specified\n http: httplib2.Http, optional http instance to use to do the fetch\n cache: An optional cache service client that implements get() and set()\n methods. See clientsecrets.loadfile() for details.\n device_uri: string, OAuth 2.0 device authorization endpoint\n pkce: boolean, default: False, Generate and include a \"Proof Key\n for Code Exchange\" (PKCE) with your authorization and token\n requests. This adds security for installed applications that\n cannot protect a client_secret. See RFC 7636 for details.\n code_verifier: bytestring or None, default: None, parameter passed\n as part of the code exchange when pkce=True. If\n None, a code_verifier will automatically be\n generated as part of step1_get_authorize_url(). 
See\n RFC 7636 for details.\n\n Returns:\n An OAuth2Credentials object.\n\n Raises:\n FlowExchangeError: if the authorization code cannot be exchanged for an\n access token\n UnknownClientSecretsFlowError: if the file describes an unknown kind\n of Flow.\n clientsecrets.InvalidClientSecretsError: if the clientsecrets file is\n invalid."} {"_id": "q_4738", "text": "Utility class method to instantiate a Credentials subclass from JSON.\n\n Expects the JSON string to have been produced by to_json().\n\n Args:\n json_data: string or bytes, JSON from to_json().\n\n Returns:\n An instance of the subclass of Credentials that was serialized with\n to_json()."} {"_id": "q_4739", "text": "Write a credential.\n\n The Storage lock must be held when this is called.\n\n Args:\n credentials: Credentials, the credentials to store."} {"_id": "q_4740", "text": "Verify that the credentials are authorized for the given scopes.\n\n Returns True if the credentials authorized scopes contain all of the\n scopes given.\n\n Args:\n scopes: list or string, the scopes to check.\n\n Notes:\n There are cases where the credentials are unaware of which scopes\n are authorized. Notably, credentials obtained and stored before\n this code was added will not have scopes, AccessTokenCredentials do\n not have scopes. In both cases, you can use refresh_scopes() to\n obtain the canonical set of scopes."} {"_id": "q_4741", "text": "Return the access token and its expiration information.\n\n If the token does not exist, get one.\n If the token expired, refresh it."} {"_id": "q_4742", "text": "Return the number of seconds until this token expires.\n\n If token_expiry is in the past, this method will return 0, meaning the\n token has already expired.\n\n If token_expiry is None, this method will return None. 
Note that\n returning 0 in such a case would not be fair: the token may still be\n valid; we just don't know anything about it."} {"_id": "q_4743", "text": "Refreshes the access_token.\n\n This method first checks by reading the Storage object if available.\n If a refresh is still needed, it holds the Storage lock until the\n refresh is completed.\n\n Args:\n http: an object to be used to make HTTP requests.\n\n Raises:\n HttpAccessTokenRefreshError: When the refresh fails."} {"_id": "q_4744", "text": "Refresh the access_token using the refresh_token.\n\n Args:\n http: an object to be used to make HTTP requests.\n\n Raises:\n HttpAccessTokenRefreshError: When the refresh fails."} {"_id": "q_4745", "text": "Retrieves the list of authorized scopes from the OAuth2 provider.\n\n Args:\n http: an object to be used to make HTTP requests.\n token: A string used as the token to identify the credentials to\n the provider.\n\n Raises:\n Error: When refresh fails, indicating that the access token is\n invalid."} {"_id": "q_4746", "text": "Attempts to get implicit credentials from local credential files.\n\n First checks if the environment variable GOOGLE_APPLICATION_CREDENTIALS\n is set with a filename and then falls back to a configuration file (the\n \"well known\" file) associated with the 'gcloud' command line tool.\n\n Returns:\n Credentials object associated with the\n GOOGLE_APPLICATION_CREDENTIALS file or the \"well known\" file if\n either exists. 
If neither file is defined, returns None, indicating\n no credentials from a file can be detected from the current\n environment."} {"_id": "q_4747", "text": "Gets credentials implicitly from the environment.\n\n Checks environment in order of precedence:\n - Environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to\n a file with stored credentials information.\n - Stored \"well known\" file associated with `gcloud` command line tool.\n - Google App Engine (production and testing)\n - Google Compute Engine production environment.\n\n Raises:\n ApplicationDefaultCredentialsError: raised when the credentials\n fail to be retrieved."} {"_id": "q_4748", "text": "Create a Credentials object by reading information from a file.\n\n It returns an object of type GoogleCredentials.\n\n Args:\n credential_filename: the path to the file from where the\n credentials are to be read\n\n Raises:\n ApplicationDefaultCredentialsError: raised when the credentials\n fail to be retrieved."} {"_id": "q_4749", "text": "Create a DeviceFlowInfo from a server response.\n\n The response should be a dict containing entries as described here:\n\n http://tools.ietf.org/html/draft-ietf-oauth-v2-05#section-3.7.1"} {"_id": "q_4750", "text": "Returns a URI to redirect to the provider.\n\n Args:\n redirect_uri: string, Either the string 'urn:ietf:wg:oauth:2.0:oob'\n for a non-web-based application, or a URI that\n handles the callback from the authorization server.\n This parameter is deprecated, please move to passing\n the redirect_uri in via the constructor.\n state: string, Opaque state string which is passed through the\n OAuth2 flow and returned to the client as a query parameter\n in the callback.\n\n Returns:\n A URI as a string to redirect the user to begin the authorization\n flow."} {"_id": "q_4751", "text": "Returns a user code and the verification URL where to enter it\n\n Returns:\n A user code as a string for the user to authorize the application\n A URL as a string where the user 
has to enter the code"} {"_id": "q_4752", "text": "Construct an RsaVerifier instance from a string.\n\n Args:\n key_pem: string, public key in PEM format.\n is_x509_cert: bool, True if key_pem is an X509 cert, otherwise it\n is expected to be an RSA key in PEM format.\n\n Returns:\n RsaVerifier instance.\n\n Raises:\n ValueError: if the key_pem can't be parsed. In either case, the error\n will begin with 'No PEM start marker'. If\n ``is_x509_cert`` is True, it will fail to find\n \"-----BEGIN CERTIFICATE-----\", otherwise it fails\n to find \"-----BEGIN RSA PUBLIC KEY-----\"."} {"_id": "q_4753", "text": "Construct an RsaSigner instance from a string.\n\n Args:\n key: string, private key in PEM format.\n password: string, password for private key file. Unused for PEM\n files.\n\n Returns:\n RsaSigner instance.\n\n Raises:\n ValueError if the key cannot be parsed as PKCS#1 or PKCS#8 in\n PEM format."} {"_id": "q_4754", "text": "Load credentials from the given file handle.\n\n The file is expected to be in this format:\n\n {\n \"file_version\": 2,\n \"credentials\": {\n \"key\": \"base64 encoded json representation of credentials.\"\n }\n }\n\n This function will warn and return empty credentials instead of raising\n exceptions.\n\n Args:\n credentials_file: An open file handle.\n\n Returns:\n A dictionary mapping user-defined keys to an instance of\n :class:`oauth2client.client.Credentials`."} {"_id": "q_4755", "text": "Retrieves the current credentials from the store.\n\n Returns:\n An instance of :class:`oauth2client.client.Credentials` or `None`."} {"_id": "q_4756", "text": "A decorator to declare that only the first N arguments may be positional.\n\n This decorator makes it easy to support Python 3 style keyword-only\n parameters. 
For example, in Python 3 it is possible to write::\n\n def fn(pos1, *, kwonly1=None, kwonly2=None):\n ...\n\n All named parameters after ``*`` must be a keyword::\n\n fn(10, 'kw1', 'kw2') # Raises exception.\n fn(10, kwonly1='kw1') # Ok.\n\n Example\n ^^^^^^^\n\n To define a function like above, do::\n\n @positional(1)\n def fn(pos1, kwonly1=None, kwonly2=None):\n ...\n\n If no default value is provided to a keyword argument, it becomes a\n required keyword argument::\n\n @positional(0)\n def fn(required_kw):\n ...\n\n This must be called with the keyword parameter::\n\n fn() # Raises exception.\n fn(10) # Raises exception.\n fn(required_kw=10) # Ok.\n\n When defining instance or class methods always remember to account for\n ``self`` and ``cls``::\n\n class MyClass(object):\n\n @positional(2)\n def my_method(self, pos1, kwonly1=None):\n ...\n\n @classmethod\n @positional(2)\n def my_method(cls, pos1, kwonly1=None):\n ...\n\n The positional decorator behavior is controlled by\n ``_helpers.positional_parameters_enforcement``, which may be set to\n ``POSITIONAL_EXCEPTION``, ``POSITIONAL_WARNING`` or\n ``POSITIONAL_IGNORE`` to raise an exception, log a warning, or do\n nothing, respectively, if a declaration is violated.\n\n Args:\n max_positional_arguments: Maximum number of positional arguments. All\n parameters after this index must be\n keyword only.\n\n Returns:\n A decorator that prevents using arguments after max_positional_args\n from being used as positional parameters.\n\n Raises:\n TypeError: if a keyword-only argument is provided as a positional\n parameter, but only if\n _helpers.positional_parameters_enforcement is set to\n POSITIONAL_EXCEPTION."} {"_id": "q_4757", "text": "Converts stringified scope value to a list.\n\n If scopes is a list then it is simply passed through. 
If scopes is a\n string then a list of each individual scope is returned.\n\n Args:\n scopes: a string or iterable of strings, the scopes.\n\n Returns:\n The scopes in a list."} {"_id": "q_4758", "text": "Parses unique key-value parameters from urlencoded content.\n\n Args:\n content: string, URL-encoded key-value pairs.\n\n Returns:\n dict, The key-value pairs from ``content``.\n\n Raises:\n ValueError: if one of the keys is repeated."} {"_id": "q_4759", "text": "Updates a URI with new query parameters.\n\n If a given key from ``params`` is repeated in the ``uri``, then\n the URI will be considered invalid and an error will occur.\n\n If the URI is valid, then each value from ``params`` will\n replace the corresponding value in the query parameters (if\n it exists).\n\n Args:\n uri: string, A valid URI, with potential existing query parameters.\n params: dict, A dictionary of query parameters.\n\n Returns:\n The same URI but with the new query parameters added."} {"_id": "q_4760", "text": "Adds a query parameter to a url.\n\n Replaces the current value if it already exists in the URL.\n\n Args:\n url: string, url to add the query parameter to.\n name: string, query parameter name.\n value: string, query parameter value.\n\n Returns:\n Updated query parameter. 
Does not update the url if value is None."} {"_id": "q_4761", "text": "Adds a user-agent to the headers.\n\n Args:\n headers: dict, request headers to add / modify user\n agent within.\n user_agent: str, the user agent to add.\n\n Returns:\n dict, the original headers passed in, but modified if the\n user agent is not None."} {"_id": "q_4762", "text": "Forces header keys and values to be strings, i.e. not unicode.\n\n The httplib module just concats the header keys and values in a way that\n may make the message header a unicode string, which, if it then tries to\n concatenate to a binary request body may result in a unicode decode error.\n\n Args:\n headers: dict, A dictionary of headers.\n\n Returns:\n The same dictionary but with all the keys converted to strings."} {"_id": "q_4763", "text": "Prepares an HTTP object's request method for auth.\n\n Wraps HTTP requests with logic to catch auth failures (typically\n identified via a 401 status code). In the event of failure, tries\n to refresh the token used and then retry the original request.\n\n Args:\n credentials: Credentials, the credentials used to identify\n the authenticated user.\n http: httplib2.Http, an http object to be used to make\n auth requests."} {"_id": "q_4764", "text": "Prepares an HTTP object's request method for JWT access.\n\n Wraps HTTP requests with logic to catch auth failures (typically\n identified via a 401 status code). In the event of failure, tries\n to refresh the token used and then retry the original request.\n\n Args:\n credentials: _JWTAccessCredentials, the credentials used to identify\n a service account that uses JWT access tokens.\n http: httplib2.Http, an http object to be used to make\n auth requests."} {"_id": "q_4765", "text": "Retrieves the flow instance associated with a given CSRF token from\n the Flask session."} {"_id": "q_4766", "text": "Loads oauth2 configuration in order of priority.\n\n Priority:\n 1. Config passed to the constructor or init_app.\n 2. 
Config passed via the GOOGLE_OAUTH2_CLIENT_SECRETS_FILE app\n config.\n 3. Config passed via the GOOGLE_OAUTH2_CLIENT_ID and\n GOOGLE_OAUTH2_CLIENT_SECRET app config.\n\n Raises:\n ValueError if no config could be found."} {"_id": "q_4767", "text": "Flask view that starts the authorization flow.\n\n Starts flow by redirecting the user to the OAuth2 provider."} {"_id": "q_4768", "text": "The credentials for the current user or None if unavailable."} {"_id": "q_4769", "text": "Returns True if there are valid credentials for the current user."} {"_id": "q_4770", "text": "Returns the user's email address or None if there are no credentials.\n\n The email address is provided by the current credentials' id_token.\n This should not be used as a unique identifier as the user can change\n their email. If you need a unique identifier, use user_id."} {"_id": "q_4771", "text": "Fetch an oauth token for the\n\n Args:\n http: an object to be used to make HTTP requests.\n service_account: An email specifying the service account this token\n should represent. 
Default will be a token for the \"default\" service\n account of the current compute engine instance.\n\n Returns:\n A tuple of (access token, token expiration), where access token is the\n access token as a string and token expiration is a datetime object\n that indicates when the access token will expire."} {"_id": "q_4772", "text": "Composes the value for the 'state' parameter.\n\n Packs the current request URI and an XSRF token into an opaque string that\n can be passed to the authentication server via the 'state' parameter.\n\n Args:\n request_handler: webapp.RequestHandler, The request.\n user: google.appengine.api.users.User, The current user.\n\n Returns:\n The state value as a string."} {"_id": "q_4773", "text": "Creates an OAuth2Decorator populated from a clientsecrets file.\n\n Args:\n filename: string, File name of client secrets.\n scope: string or list of strings, scope(s) of the credentials being\n requested.\n message: string, A friendly string to display to the user if the\n clientsecrets file is missing or invalid. The message may\n contain HTML and will be presented on the web interface for\n any method that uses the decorator.\n cache: An optional cache service client that implements get() and set()\n methods. 
See clientsecrets.loadfile() for details.\n\n Returns: An OAuth2Decorator"} {"_id": "q_4774", "text": "Get the email for the current service account.\n\n Returns:\n string, The email associated with the Google App Engine\n service account."} {"_id": "q_4775", "text": "Determine whether the model of the instance is an NDB model.\n\n Returns:\n Boolean indicating whether or not the model is an NDB or DB model."} {"_id": "q_4776", "text": "Retrieve entity from datastore.\n\n Uses a different model method for db or ndb models.\n\n Returns:\n Instance of the model corresponding to the current storage object\n and stored using the key name of the storage object."} {"_id": "q_4777", "text": "Delete entity from datastore.\n\n Attempts to delete using the key_name stored on the object, whether or\n not the given key is in the datastore."} {"_id": "q_4778", "text": "Write a Credentials to the datastore.\n\n Args:\n credentials: Credentials, the credentials to store."} {"_id": "q_4779", "text": "Delete Credential from datastore."} {"_id": "q_4780", "text": "Decorator that starts the OAuth 2.0 dance.\n\n Starts the OAuth dance for the logged in user if they haven't already\n granted access for this application.\n\n Args:\n method: callable, to be decorated method of a webapp.RequestHandler\n instance."} {"_id": "q_4781", "text": "Decorator that sets up for OAuth 2.0 dance, but doesn't do it.\n\n Does all the setup for the OAuth dance, but doesn't initiate it.\n This decorator is useful if you want to create a page that knows\n whether or not the user has granted access to this application.\n From within a method decorated with @oauth_aware the has_credentials()\n and authorize_url() methods can be called.\n\n Args:\n method: callable, to be decorated method of a webapp.RequestHandler\n instance."} {"_id": "q_4782", "text": "Validate parsed client secrets from a file.\n\n Args:\n clientsecrets_dict: dict, a dictionary holding the client secrets.\n\n Returns:\n tuple, a string 
of the client type and the information parsed\n from the file."} {"_id": "q_4783", "text": "Communicate with the Developer Shell server socket."} {"_id": "q_4784", "text": "Core code for a command-line application.\n\n The ``run()`` function is called from your application and runs\n through all the steps to obtain credentials. It takes a ``Flow``\n argument and attempts to open an authorization server page in the\n user's default web browser. The server asks the user to grant your\n application access to the user's data. If the user grants access,\n the ``run()`` function returns new credentials. The new credentials\n are also stored in the ``storage`` argument, which updates the file\n associated with the ``Storage`` object.\n\n It presumes it is run from a command-line application and supports the\n following flags:\n\n ``--auth_host_name`` (string, default: ``localhost``)\n Host name to use when running a local web server to handle\n redirects during OAuth authorization.\n\n ``--auth_host_port`` (integer, default: ``[8080, 8090]``)\n Port to use when running a local web server to handle redirects\n during OAuth authorization. Repeat this option to specify a list\n of values.\n\n ``--[no]auth_local_webserver`` (boolean, default: ``True``)\n Run a local web server to handle redirects during OAuth\n authorization.\n\n The tools module defines an ``ArgumentParser`` that already contains the\n flag definitions that ``run()`` requires. You can pass that\n ``ArgumentParser`` to your ``ArgumentParser`` constructor::\n\n parser = argparse.ArgumentParser(\n description=__doc__,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n parents=[tools.argparser])\n flags = parser.parse_args(argv)\n\n Args:\n flow: Flow, an OAuth 2.0 Flow to step through.\n storage: Storage, a ``Storage`` to store the credential in.\n flags: ``argparse.Namespace``, (Optional) The command-line flags. 
This\n is the object returned from calling ``parse_args()`` on\n ``argparse.ArgumentParser`` as described above. Defaults\n to ``argparser.parse_args()``.\n http: An instance of ``httplib2.Http.request`` or something that\n acts like it.\n\n Returns:\n Credentials, the obtained credential."} {"_id": "q_4785", "text": "Handle a GET request.\n\n Parses the query parameters and prints a message\n if the flow has completed. Note that we can't detect\n if an error occurred."} {"_id": "q_4786", "text": "Creates a 'code_challenge' as described in section 4.2 of RFC 7636\n by taking the sha256 hash of the verifier and then urlsafe\n base64-encoding it.\n\n Args:\n verifier: bytestring, representing a code_verifier as generated by\n code_verifier().\n\n Returns:\n Bytestring, representing a urlsafe base64-encoded sha256 hash digest,\n without '=' padding."} {"_id": "q_4787", "text": "Retrieve stored credential from the Django ORM.\n\n Returns:\n oauth2client.Credentials retrieved from the Django ORM, associated\n with the ``model``, ``key_value``->``key_name`` pair used to query\n for the model, and ``property_name`` identifying the\n ``CredentialsProperty`` field, all of which are defined in the\n constructor for this Storage object."} {"_id": "q_4788", "text": "Delete Credentials from the datastore."} {"_id": "q_4789", "text": "Retrieve the credentials from the dictionary, if they exist.\n\n Returns: A :class:`oauth2client.client.OAuth2Credentials` instance."} {"_id": "q_4790", "text": "Save the credentials to the dictionary.\n\n Args:\n credentials: A :class:`oauth2client.client.OAuth2Credentials`\n instance."} {"_id": "q_4791", "text": "Validates a value as a proper Flow object.\n\n Args:\n value: A value to be set on the property.\n\n Raises:\n TypeError if the value is not an instance of Flow."} {"_id": "q_4792", "text": "Converts our stored JSON string back to the desired type.\n\n Args:\n value: A value from the datastore to be converted to the\n desired type.\n\n 
Returns:\n A deserialized Credentials (or subclass) object, else None if\n the value can't be parsed."} {"_id": "q_4793", "text": "Looks up the flow in session to recover information about requested\n scopes.\n\n Args:\n csrf_token: The token passed in the callback request that should\n match the one previously generated and stored in the request on the\n initial authorization view.\n\n Returns:\n The OAuth2 Flow object associated with this flow based on the\n CSRF token."} {"_id": "q_4794", "text": "View that handles the user's return from OAuth2 provider.\n\n This view verifies the CSRF state and OAuth authorization code, and on\n success stores the credentials obtained in the storage provider,\n and redirects to the return_url specified in the authorize view and\n stored in the session.\n\n Args:\n request: Django request.\n\n Returns:\n A redirect response back to the return_url."} {"_id": "q_4795", "text": "View to start the OAuth2 Authorization flow.\n\n This view starts the OAuth2 authorization flow. If scopes is passed in\n as a GET URL parameter, it will authorize those scopes, otherwise the\n default scopes specified in settings. The return_url can also be\n specified as a GET parameter, otherwise the referer header will be\n checked, and if that isn't found it will return to the root path.\n\n Args:\n request: The Django request object.\n\n Returns:\n A redirect to Google OAuth2 Authorization."} {"_id": "q_4796", "text": "Create an empty file if necessary.\n\n This method will not initialize the file. Instead it implements a\n simple version of \"touch\" to ensure the file has been created."} {"_id": "q_4797", "text": "Overrides ``models.Field`` method. 
This is used to convert\n the value from an instance of this class to bytes that can be\n inserted into the database."} {"_id": "q_4798", "text": "Convert the field value from the provided model to a string.\n\n Used during model serialization.\n\n Args:\n obj: db.Model, model object\n\n Returns:\n string, the serialized field value"} {"_id": "q_4799", "text": "Make a signed JWT.\n\n See http://self-issued.info/docs/draft-jones-json-web-token.html.\n\n Args:\n signer: crypt.Signer, Cryptographic signer.\n payload: dict, Dictionary of data to convert to JSON and then sign.\n key_id: string, (Optional) Key ID header.\n\n Returns:\n string, The JWT for the payload."} {"_id": "q_4800", "text": "Verifies signed content using a list of certificates.\n\n Args:\n message: string or bytes, The message to verify.\n signature: string or bytes, The signature on the message.\n certs: iterable, certificates in PEM format.\n\n Raises:\n AppIdentityError: If none of the certificates can verify the message\n against the signature."} {"_id": "q_4801", "text": "Checks audience field from a JWT payload.\n\n Does nothing if the passed in ``audience`` is null.\n\n Args:\n payload_dict: dict, A dictionary containing a JWT payload.\n audience: string or NoneType, an audience to check for in\n the JWT payload.\n\n Raises:\n AppIdentityError: If there is no ``'aud'`` field in the payload\n dictionary but there is an ``audience`` to check.\n AppIdentityError: If the ``'aud'`` field in the payload dictionary\n does not match the ``audience``."} {"_id": "q_4802", "text": "Verifies the issued at and expiration from a JWT payload.\n\n Makes sure the current time (in UTC) falls between the issued at and\n expiration for the JWT (with some skew allowed for via\n ``CLOCK_SKEW_SECS``).\n\n Args:\n payload_dict: dict, A dictionary containing a JWT payload.\n\n Raises:\n AppIdentityError: If there is no ``'iat'`` field in the payload\n dictionary.\n AppIdentityError: If there is no ``'exp'`` field in 
the payload\n dictionary.\n AppIdentityError: If the JWT expiration is too far in the future (i.e.\n if the expiration would imply a token lifetime\n longer than what is allowed.)\n AppIdentityError: If the token appears to have been issued in the\n future (up to clock skew).\n AppIdentityError: If the token appears to have expired in the past\n (up to clock skew)."} {"_id": "q_4803", "text": "Verify a JWT against public certs.\n\n See http://self-issued.info/docs/draft-jones-json-web-token.html.\n\n Args:\n jwt: string, A JWT.\n certs: dict, Dictionary where values are public keys in PEM format.\n audience: string, The audience, 'aud', that this JWT should contain. If\n None then the JWT's 'aud' parameter is not verified.\n\n Returns:\n dict, The deserialized JSON payload in the JWT.\n\n Raises:\n AppIdentityError: if any check fails."} {"_id": "q_4804", "text": "Create a cube primitive\n\n Note that this is made of 6 quads, not triangles"} {"_id": "q_4805", "text": "create an icosphere mesh\n\n radius Radius of the sphere\n # subdivisions = Subdivision level; Number of the recursive subdivision of the\n # surface. Default is 3 (a sphere approximation composed by 1280 faces).\n # Admitted values are in the range 0 (an icosahedron) to 8 (a 1.3 MegaTris\n # approximation of a sphere). Formula for number of faces: F=20*4^subdiv\n # color = specify a color name to apply vertex colors to the newly\n # created mesh"} {"_id": "q_4806", "text": "Create a box with user defined number of segments in each direction.\n\n Grid spacing is the same as its dimensions (spacing = 1) and its\n thickness is one. Intended to be used for e.g. 
deforming using functions\n or a height map (lithopanes) and can be resized after creation.\n\n Warnings: function uses layers.join\n\n top_option\n 0 open\n 1 full\n 2 simple\n bottom_option\n 0 open\n 1 full\n 2 simple"} {"_id": "q_4807", "text": "Check if a variable is a list and is the correct length.\n\n If variable is not a list it will make it a list of the correct length with\n all terms identical."} {"_id": "q_4808", "text": "Write filter to FilterScript object or filename\n\n Args:\n script (FilterScript object or filename str): the FilterScript object\n or script filename to write the filter to.\n filter_xml (str): the xml filter string"} {"_id": "q_4809", "text": "Apply LS3 Subdivision Surface algorithm using Loop's weights.\n\n This refinement method takes normals into account.\n See: Boye', S. Guennebaud, G. & Schlick, C.\n \"Least squares subdivision surfaces\"\n Computer Graphics Forum, 2010.\n\n Alternative weighting schemes are based on the paper:\n Barthe, L. & Kobbelt, L.\n \"Subdivision scheme tuning around extraordinary vertices\"\n Computer Aided Geometric Design, 2004, 21, 561-583.\n\n The current implementation of these schemes doesn't handle vertices of\n valence > 12\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n iterations (int): Number of times the model is subdivided.\n loop_weight (int): Change the weights used. Allow to optimize some\n behaviours in spite of others. Valid values are:\n 0 - Loop (default)\n 1 - Enhance regularity\n 2 - Enhance continuity\n edge_threshold (float): All the edges longer than this threshold will\n be refined. Setting this value to zero will force a uniform\n refinement.\n selected (bool): If selected the filter is performed only on the\n selected faces.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4810", "text": "Merge together all the vertices that are nearer than the specified\n threshold. 
Like a unify duplicate vertices but with some tolerance.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n threshold (float): Merging distance. All the vertices that are closer\n than this threshold are merged together. Use very small values,\n default is zero.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4811", "text": "Split non-manifold vertices until it becomes two-manifold.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n vert_displacement_ratio (float): When a vertex is split it is moved\n along the average vector going from its position to the centroid\n of the FF connected faces sharing it.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4812", "text": "Try to snap together adjacent borders that are slightly mismatched.\n\n This situation can happen on badly triangulated adjacent patches defined by\n high order surfaces. For each border vertex the filter snaps it onto the\n closest boundary edge only if it is closer than edge_length*threshold. When\n a vertex is snapped the corresponding face is split and a new vertex is\n created.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n edge_dist_ratio (float): Collapse edge when the edge / distance ratio\n is greater than this value. E.g. for default value 1000 two\n straight border edges are collapsed if the central vertex dist from\n the straight line composed by the two edges is less than a 1/1000 of\n the sum of the edges length. 
Larger values enforce that only\n vertexes very close to the line are removed.\n unify_vert (bool): If true the snap vertices are welded together.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4813", "text": "An alternative scale implementation that uses a geometric function.\n This is more accurate than the built-in version."} {"_id": "q_4814", "text": "Deform mesh around cylinder of radius and axis z\n\n y = 0 will be on the surface of radius \"radius\"\n pitch != 0 will create a helix, with distance \"pitch\" traveled in z for each rotation\n taper = change in r over z. E.g. a value of 0.5 will shrink r by 0.5 for every z length of 1"} {"_id": "q_4815", "text": "Bends mesh around cylinder of radius radius and axis z to a certain angle\n\n straight_ends: Only apply twist (pitch) over the area that is bent\n\n outside_limit_end (bool): should values outside of the bend radius_limit be considered part\n of the end (True) or the start (False)?"} {"_id": "q_4816", "text": "Transfer vertex colors to texture colors\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n tex_name (str): The texture file to be created\n tex_width (int): The texture width\n tex_height (int): The texture height\n overwrite_tex (bool): If current mesh has a texture will be overwritten (with provided texture dimension)\n assign_tex (bool): Assign the newly created texture\n fill_tex (bool): If enabled the unmapped texture space is colored using a pull push filling algorithm, if false is set to black"} {"_id": "q_4817", "text": "Transfer mesh colors to face colors\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n all_visible_layers (bool): If true the color mapping is applied to all the meshes"} {"_id": "q_4818", "text": "This surface reconstruction algorithm creates watertight\n surfaces from oriented point sets.\n\n The filter uses the original code of Michael Kazhdan and 
Matthew Bolitho\n implementing the algorithm in the following paper:\n\n Michael Kazhdan, Hugues Hoppe,\n \"Screened Poisson surface reconstruction\"\n ACM Trans. Graphics, 32(3), 2013\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n visible_layer (bool): If True all the visible layers will be used for\n providing the points\n depth (int): This integer is the maximum depth of the tree that will\n be used for surface reconstruction. Running at depth d corresponds\n to solving on a voxel grid whose resolution is no larger than\n 2^d x 2^d x 2^d. Note that since the reconstructor adapts the\n octree to the sampling density, the specified reconstruction depth\n is only an upper bound. The default value for this parameter is 8.\n full_depth (int): This integer specifies the depth beyond depth the\n octree will be adapted. At coarser depths, the octree will be\n complete, containing all 2^d x 2^d x 2^d nodes. The default value\n for this parameter is 5.\n cg_depth (int): This integer is the depth up to which a\n conjugate-gradients solver will be used to solve the linear system.\n Beyond this depth Gauss-Seidel relaxation will be used. The default\n value for this parameter is 0.\n scale (float): This floating point value specifies the ratio between\n the diameter of the cube used for reconstruction and the diameter\n of the samples' bounding cube. The default value is 1.1.\n samples_per_node (float): This floating point value specifies the\n minimum number of sample points that should fall within an octree\n node as the octree construction is adapted to sampling density. For\n noise-free samples, small values in the range [1.0 - 5.0] can be\n used. For more noisy samples, larger values in the range\n [15.0 - 20.0] may be needed to provide a smoother, noise-reduced,\n reconstruction. 
The default value is 1.5.\n point_weight (float): This floating point value specifies the\n importance that interpolation of the point samples is given in the\n formulation of the screened Poisson equation. The results of the\n original (unscreened) Poisson Reconstruction can be obtained by\n setting this value to 0. The default value for this parameter is 4.\n iterations (int): This integer value specifies the number of\n Gauss-Seidel relaxations to be performed at each level of the\n hierarchy. The default value for this parameter is 8.\n confidence (bool): If True this tells the reconstructor to use the\n quality as confidence information; this is done by scaling the unit\n normals with the quality values. When the flag is not enabled, all\n normals are normalized to have unit-length prior to reconstruction.\n pre_clean (bool): If True will force a cleaning pre-pass on the data\n removing all unreferenced vertices or vertices with null normals.\n\n Layer stack:\n Creates 1 new layer 'Poisson mesh'\n Current layer is not changed\n\n MeshLab versions:\n 2016.12"} {"_id": "q_4819", "text": "Turn a model into a surface with Voronoi style holes in it\n\n References:\n http://meshlabstuff.blogspot.com/2009/03/creating-voronoi-sphere.html\n http://meshlabstuff.blogspot.com/2009/04/creating-voronoi-sphere-2.html\n\n Requires FilterScript object\n\n Args:\n script: the FilterScript object to write the filter to. 
Does not\n work with a script filename.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4820", "text": "Select all the faces of the current mesh\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n faces (bool): If True the filter will select all the faces.\n verts (bool): If True the filter will select all the vertices.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4821", "text": "Boolean function using muparser lib to perform face selection over\n current mesh.\n\n See help(mlx.muparser_ref) for muparser reference documentation.\n\n It's possible to use parentheses, per-vertex variables and boolean operators:\n (, ), and, or, <, >, =\n It's possible to use per-face variables like attributes associated with the three\n vertices of every face.\n\n Variables (per face):\n x0, y0, z0 for first vertex; x1,y1,z1 for second vertex; x2,y2,z2 for third vertex\n nx0, ny0, nz0, nx1, ny1, nz1, etc. for vertex normals\n r0, g0, b0, a0, etc. 
for vertex color\n q0, q1, q2 for quality\n wtu0, wtv0, wtu1, wtv1, wtu2, wtv2 (per wedge texture coordinates)\n ti for face texture index (>= ML2016.12)\n vsel0, vsel1, vsel2 for vertex selection (1 yes, 0 no) (>= ML2016.12)\n fr, fg, fb, fa for face color (>= ML2016.12)\n fq for face quality (>= ML2016.12)\n fnx, fny, fnz for face normal (>= ML2016.12)\n fsel for face selection (1 yes, 0 no) (>= ML2016.12)\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n function (str): a boolean function that will be evaluated in order\n to select a subset of faces.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4822", "text": "Boolean function using muparser lib to perform vertex selection over current mesh.\n\n See help(mlx.muparser_ref) for muparser reference documentation.\n\n It's possible to use parentheses, per-vertex variables and boolean operators:\n (, ), and, or, <, >, =\n It's possible to use the following per-vertex variables in the expression:\n\n Variables:\n x, y, z (coordinates)\n nx, ny, nz (normal)\n r, g, b, a (color)\n q (quality)\n rad\n vi (vertex index)\n vtu, vtv (texture coordinates)\n ti (texture index)\n vsel (is the vertex selected? 1 yes, 0 no)\n and all custom vertex attributes already defined by user.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n function (str): a boolean function that will be evaluated in order\n to select a subset of vertices. Example: (y > 0) and (ny > 0)\n strict_face_select (bool): if True a face is selected if ALL its\n vertices are selected. If False a face is selected if at least\n one of its vertices is selected. ML v1.3.4BETA only; this is\n ignored in 2016.12. 
In 2016.12 only vertices are selected.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4823", "text": "Select all vertices within a cylindrical radius\n\n Args:\n radius (float): radius of the cylinder\n center_pt (3 coordinate tuple or list): center point of the cylinder\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4824", "text": "Select all vertices within a spherical radius\n\n Args:\n radius (float): radius of the sphere\n center_pt (3 coordinate tuple or list): center point of the sphere\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4825", "text": "Flatten all or only the visible layers into a single new mesh.\n\n Transformations are preserved. Existing layers can be optionally\n deleted.\n\n Args:\n script: the mlx.FilterScript object or script filename to write\n the filter to.\n merge_visible (bool): merge only visible layers\n merge_vert (bool): merge the vertices that are duplicated among\n different layers. Very useful when the layers are spliced portions\n of a single big mesh.\n delete_layer (bool): delete all the merged layers. If all layers are\n visible only a single layer will remain after the invocation of\n this filter.\n keep_unreferenced_vert (bool): Do not discard unreferenced vertices\n from source layers. Necessary for point-only layers.\n\n Layer stack:\n Creates a new layer \"Merged Mesh\"\n Changes current layer to the new layer\n Optionally deletes all other layers\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA\n\n Bugs:\n UV textures: not currently preserved, however they will be in a future\n release. 
https://github.com/cnr-isti-vclab/meshlab/issues/128\n merge_visible: it is not currently possible to change the layer\n visibility from meshlabserver, however this will be possible\n in the future https://github.com/cnr-isti-vclab/meshlab/issues/123"} {"_id": "q_4826", "text": "Change the current layer by specifying the new layer number.\n\n Args:\n script: the mlx.FilterScript object or script filename to write\n the filter to.\n layer_num (int): the number of the layer to change to. Default is the\n last layer if script is a mlx.FilterScript object; if script is a\n filename the default is the first layer.\n\n Layer stack:\n Modifies current layer\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4827", "text": "Delete all layers below the specified one.\n\n Useful for MeshLab ver 2016.12, which will only output layer 0."} {"_id": "q_4828", "text": "Subprocess program error handling\n\n Args:\n program_name (str): name of the subprocess program\n\n Returns:\n break_now (bool): indicates whether the calling program should break out of its loop"} {"_id": "q_4829", "text": "Create new mlx script and write opening tags.\n\n Performs special processing on stl files.\n\n If no input files are provided this will create a dummy\n file and delete it as the first filter. 
This works around\n the meshlab limitation that it must be provided an input\n file, even if you will be creating a mesh as the first\n filter."} {"_id": "q_4830", "text": "Add new mesh layer to the end of the stack\n\n Args:\n label (str): new label for the mesh layer\n change_layer (bool): change to the newly created layer"} {"_id": "q_4831", "text": "Delete mesh layer"} {"_id": "q_4832", "text": "Run main script"} {"_id": "q_4833", "text": "Create a new layer populated with a point sampling of the current mesh.\n\n Samples are generated according to a Poisson-disk distribution, using the\n algorithm described in:\n\n 'Efficient and Flexible Sampling with Blue Noise Properties of Triangular Meshes'\n Massimiliano Corsini, Paolo Cignoni, Roberto Scopigno\n IEEE TVCG 2012\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n sample_num (int): The desired number of samples. The radius of the disk\n is calculated according to the sampling density.\n radius (float): If not zero this parameter overrides the previous\n parameter to allow exact radius specification.\n montecarlo_rate (int): The over-sampling rate that is used to generate\n the initial Monte Carlo samples (e.g. if this parameter is 'K' it means\n that 'K * sample_num' points will be used). The generated\n Poisson-disk samples are a subset of these initial Monte Carlo\n samples. Larger numbers slow the process but make it a bit more\n accurate.\n save_montecarlo (bool): If True, it will generate an additional Layer\n with the Monte Carlo sampling that was pruned to build the Poisson\n distribution.\n approx_geodesic_dist (bool): If True Poisson-disk distances are\n computed using an approximate geodesic distance, e.g. an Euclidean\n distance weighted by a function of the difference between the\n normals of the two points.\n subsample (bool): If True the original vertices of the base mesh are\n used as base set of points. 
In this case the sample_num should\n obviously be much smaller than the original vertex number. Note that\n this option is very useful in the case you want to subsample a\n dense point cloud.\n refine (bool): If True the vertices of the refine_layer mesh layer are\n used as starting vertices, and they will be utterly refined by\n adding more and more points until possible.\n refine_layer (int): Used only if refine is True.\n best_sample (bool): If True it will use a simple heuristic for choosing\n the samples. At a small cost (it can slow the process a bit) it\n usually improves the maximality of the generated sampling.\n best_sample_pool (bool): Used only if best_sample is True. It controls\n the number of attempts that it makes to get the best sample. It is\n reasonable that it is smaller than the Monte Carlo oversampling\n factor.\n exact_num (bool): If True it will try to do a dichotomic search for the\n best Poisson-disk radius that will generate the requested number of\n samples with a tolerance of 0.5%. Obviously it takes much\n longer.\n radius_variance (float): The radius of the disk is allowed to vary\n between r and r*var. If this parameter is 1 the sampling is the\n same as the Poisson-disk Sampling.\n\n Layer stack:\n Creates new layer 'Poisson-disk Samples'. 
Current layer is NOT changed\n to the new layer (see Bugs).\n If save_montecarlo is True, creates a new layer 'Montecarlo Samples'.\n Current layer is NOT changed to the new layer (see Bugs).\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA\n\n Bugs:\n Current layer is NOT changed to the new layer, which is inconsistent\n with the majority of filters that create new layers."} {"_id": "q_4834", "text": "Create a new layer populated with a subsampling of the vertexes of the\n current mesh\n\n The subsampling is driven by a simple one-per-gridded cell strategy.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n cell_size (float): The size of the cell of the clustering grid. The smaller the cell, the finer the resulting mesh. For obtaining a very coarse mesh use larger values.\n strategy (enum 'AVERAGE' or 'CENTER'): <b>Average</b>: for each cell we take the average of the samples falling into it. The resulting point is a new point.<br><b>Closest to center</b>: for each cell we take the sample that is closest to the center of the cell. Chosen vertices are a subset of the original ones.\n selected (bool): If true the filter is applied only to the selected subset of the mesh.\n\n Layer stack:\n Creates new layer 'Cluster Samples'. 
Current layer is changed to the new\n layer.\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4835", "text": "Trivial Per-Triangle parameterization"} {"_id": "q_4836", "text": "Voronoi Atlas parameterization"} {"_id": "q_4837", "text": "Compute a set of topological measures over a mesh\n\n Args:\n script: the mlx.FilterScript object or script filename to write\n the filter to.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4838", "text": "Parse the ml_log file generated by the measure_topology function.\n\n Args:\n ml_log (str): MeshLab log file to parse\n log (str): filename to log output\n\n Returns:\n dict: dictionary with the following keys:\n vert_num (int): number of vertices\n edge_num (int): number of edges\n face_num (int): number of faces\n unref_vert_num (int): number of unreferenced vertices\n boundry_edge_num (int): number of boundary edges\n part_num (int): number of parts (components) in the mesh.\n manifold (bool): True if mesh is two-manifold, otherwise false.\n non_manifold_edge (int): number of non-manifold edges.\n non_manifold_vert (int): number of non-manifold vertices\n genus (int or str): genus of the mesh, either a number or\n 'undefined' if the mesh is non-manifold.\n holes (int or str): number of holes in the mesh, either a number\n or 'undefined' if the mesh is non-manifold."} {"_id": "q_4839", "text": "Parse the ml_log file generated by the hausdorff_distance function.\n\n Args:\n ml_log (str): MeshLab log file to parse\n log (str): filename to log output\n\n Returns:\n dict: dictionary with the following keys:\n number_points (int): number of points in mesh\n min_distance (float): minimum hausdorff distance\n max_distance (float): maximum hausdorff distance\n mean_distance (float): mean hausdorff distance\n rms_distance (float): root mean square distance"} {"_id": "q_4840", "text": "Given a Mesh 'M' and a Pointset 'P', the filter projects each vertex of\n P over M and colors M according to 
the geodesic distance from these\n projected points. Projection and coloring are done on a per vertex\n basis.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n target_layer (int): The mesh layer whose surface is colored. For each\n vertex of this mesh we decide the color according to the following\n arguments.\n source_layer (int): The mesh layer whose vertexes are used as seed\n points for the color computation. These seed points are projected\n onto the target_layer mesh.\n backward (bool): If True the mesh is colored according to the distance\n from the frontier of the Voronoi diagram induced by the\n source_layer seeds.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4841", "text": "Color mesh vertices in a repeating sinusoidal rainbow pattern\n\n Sine wave follows the following equation for each color channel (RGBA):\n channel = sin(freq*increment + phase)*amplitude + center\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n direction (str): the direction that the sine wave will travel; this\n and the start_pt determine the 'increment' of the sine function.\n Valid values are:\n 'sphere' - radiate sine wave outward from start_pt (default)\n 'x' - sine wave travels along the X axis\n 'y' - sine wave travels along the Y axis\n 'z' - sine wave travels along the Z axis\n or define the increment directly using a muparser function, e.g.\n '2x + y'. In this case start_pt will not be used; include it in\n the function directly.\n start_pt (3 coordinate tuple or list): start point of the sine wave. For a\n sphere this is the center of the sphere.\n amplitude (float [0, 255], single value or 4 term tuple or list): amplitude\n of the sine wave, with range between 0-255. 
If a single value is\n specified it will be used for all channels, otherwise specify each\n channel individually.\n center (float [0, 255], single value or 4 term tuple or list): center\n of the sine wave, with range between 0-255. If a single value is\n specified it will be used for all channels, otherwise specify each\n channel individually.\n freq (float, single value or 4 term tuple or list): frequency of the sine\n wave. If a single value is specified it will be used for all channels,\n otherwise specify each channel individually.\n phase (float [0, 360], single value or 4 term tuple or list): phase\n of the sine wave in degrees, with range between 0-360. If a single\n value is specified it will be used for all channels, otherwise specify\n each channel individually.\n alpha (bool): if False the alpha channel will be set to 255 (full opacity).\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4842", "text": "muparser atan2 function\n\n Implements an atan2(y,x) function for older muparser versions (<2.1.0);\n atan2 was added as a built-in function in muparser 2.1.0\n\n Args:\n y (str): y argument of the atan2(y,x) function\n x (str): x argument of the atan2(y,x) function\n\n Returns:\n A muparser string that calculates atan2(y,x)"} {"_id": "q_4843", "text": "muparser cross product function\n\n Compute the cross product of two 3x1 vectors\n\n Args:\n u (list or tuple of 3 strings): first vector\n v (list or tuple of 3 strings): second vector\n Returns:\n A list containing a muparser string of the cross product"} {"_id": "q_4844", "text": "Add a new Per-Vertex scalar attribute to current mesh and fill it with\n the defined function.\n\n The specified name can be used in other filter functions.\n\n It's possible to use parentheses, per-vertex variables and boolean operators:\n (, ), and, or, <, >, =\n It's possible to use the following per-vertex variables in the expression:\n\n Variables:\n x, y, z (coordinates)\n nx, ny, nz 
(normal)\n r, g, b, a (color)\n q (quality)\n rad\n vi (vertex index)\n ?vtu, vtv (texture coordinates)\n ?ti (texture index)\n ?vsel (is the vertex selected? 1 yes, 0 no)\n and all custom vertex attributes already defined by user.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n name (str): the name of the new attribute. You can access the attribute in\n other filters through this name.\n function (str): function to calculate the custom attribute value for each\n vertex\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4845", "text": "Invert faces orientation, flipping the normals of the mesh.\n\n If requested, it tries to guess the right orientation; mainly it decides to\n flip all the faces if the minimum/maximum vertexes do not have outward-pointing\n normals for a few directions. Works well for single component watertight\n objects.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n force_flip (bool): If selected, the normals will always be flipped;\n otherwise, the filter tries to set them outside.\n selected (bool): If selected, only selected faces will be affected.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4846", "text": "Compute the normals of the vertices of a mesh without exploiting the\n triangle connectivity, useful for datasets with no faces.\n\n Args:\n script: the FilterScript object or script filename to write\n the filter to.\n neighbors (int): The number of neighbors used to estimate normals.\n smooth_iteration (int): The number of smoothing iterations done on the\n points used to estimate and propagate normals.\n flip (bool): Flip normals w.r.t. viewpoint. 
If the 'viewpoint' (i.e.\n scanner position) is known, it can be used to disambiguate normals\n orientation, so that all the normals will be oriented in the same\n direction.\n viewpoint_pos (single xyz point, tuple or list): Set the x, y, z\n coordinates of the viewpoint position.\n\n Layer stack:\n No impacts\n\n MeshLab versions:\n 2016.12\n 1.3.4BETA"} {"_id": "q_4847", "text": "Sort separate line segments in obj format into a continuous polyline or polylines.\n NOT FINISHED; DO NOT USE\n\n Also measures the length of each polyline\n\n Return polyline and polylineMeta (lengths)"} {"_id": "q_4848", "text": "Measures mesh topology\n\n Args:\n fbasename (str): input filename.\n log (str): filename to log output\n\n Returns:\n dict: dictionary with the following keys:\n vert_num (int): number of vertices\n edge_num (int): number of edges\n face_num (int): number of faces\n unref_vert_num (int): number of unreferenced vertices\n boundry_edge_num (int): number of boundary edges\n part_num (int): number of parts (components) in the mesh.\n manifold (bool): True if mesh is two-manifold, otherwise false.\n non_manifold_edge (int): number of non-manifold edges.\n non_manifold_vert (int): number of non-manifold vertices\n genus (int or str): genus of the mesh, either a number or\n 'undefined' if the mesh is non-manifold.\n holes (int or str): number of holes in the mesh, either a number\n or 'undefined' if the mesh is non-manifold."} {"_id": "q_4849", "text": "Measures mesh geometry, aabb and topology."} {"_id": "q_4850", "text": "Measure a dimension of a mesh"} {"_id": "q_4851", "text": "This is a helper used by UploadSet.save to provide lowercase extensions for\n all processed files, to compare with configured extensions in the same\n case.\n\n .. 
versionchanged:: 0.1.4\n Filenames without extensions are no longer lowercased, only the\n extension is returned in lowercase, if an extension exists.\n\n :param filename: The filename to ensure has a lowercase extension."} {"_id": "q_4852", "text": "By default, Flask will accept uploads to an arbitrary size. While Werkzeug\n switches uploads from memory to a temporary file when they hit 500 KiB,\n it's still possible for someone to overload your disk space with a\n gigantic file.\n\n This patches the app's request class's\n `~werkzeug.BaseRequest.max_content_length` attribute so that any upload\n larger than the given size is rejected with an HTTP error.\n\n .. note::\n\n In Flask 0.6, you can do this by setting the `MAX_CONTENT_LENGTH`\n setting, without patching the request class. To emulate this behavior,\n you can pass `None` as the size (you must pass it explicitly). That is\n the best way to call this function, as it won't break the Flask 0.6\n functionality if it exists.\n\n .. versionchanged:: 0.1.1\n\n :param app: The app to patch the request class of.\n :param size: The maximum size to accept, in bytes. The default is 64 MiB.\n If it is `None`, the app's `MAX_CONTENT_LENGTH` configuration\n setting will be used to patch."} {"_id": "q_4853", "text": "This is a helper function for `configure_uploads` that extracts the\n configuration for a single set.\n\n :param uset: The upload set.\n :param app: The app to load the configuration from.\n :param defaults: A dict with keys `url` and `dest` from the\n `UPLOADS_DEFAULT_DEST` and `DEFAULT_UPLOADS_URL`\n settings."} {"_id": "q_4854", "text": "Call this after the app has been configured. It will go through all the\n upload sets, get their configuration, and store the configuration on the\n app. It will also register the uploads module if it hasn't been set. This\n can be called multiple times with different upload sets.\n\n .. 
versionchanged:: 0.1.3\n The uploads module/blueprint will only be registered if it is needed\n to serve the upload sets.\n\n :param app: The `~flask.Flask` instance to get the configuration from.\n :param upload_sets: The `UploadSet` instances to configure."} {"_id": "q_4855", "text": "This gets the current configuration. By default, it looks up the\n current application and gets the configuration from there. But if you\n don't want to go to the full effort of setting up an application, or it's\n otherwise outside of a request context, set the `_config` attribute to\n an `UploadConfiguration` instance, then set it back to `None` when\n you're done."} {"_id": "q_4856", "text": "This function gets the URL a file uploaded to this set would be\n accessed at. It doesn't check whether said file exists.\n\n :param filename: The filename to return the URL for."} {"_id": "q_4857", "text": "This returns the absolute path of a file uploaded to this set. It\n doesn't actually check whether said file exists.\n\n :param filename: The filename to return the path for.\n :param folder: The subfolder within the upload set previously used\n to save to."} {"_id": "q_4858", "text": "This determines whether a specific extension is allowed. It is called\n by `file_allowed`, so if you override that but still want to check\n extensions, call back into this.\n\n :param ext: The extension to check, without the dot."} {"_id": "q_4859", "text": "If a file with the selected name already exists in the target folder,\n this method is called to resolve the conflict. 
It should return a new\n basename for the file.\n\n The default implementation splits the name and extension and adds a\n suffix to the name consisting of an underscore and a number, and tries\n that until it finds one that doesn't exist.\n\n :param target_folder: The absolute path to the target.\n :param basename: The file's original basename."} {"_id": "q_4860", "text": "Returns actual version specified in filename."} {"_id": "q_4861", "text": "Removes duplicate objects.\n\n http://www.peterbe.com/plog/uniqifiers-benchmark."} {"_id": "q_4862", "text": "Returns count difference in two collections of Python objects."} {"_id": "q_4863", "text": "Checks memory usage when 'line' event occurs."} {"_id": "q_4864", "text": "Returns processed memory usage."} {"_id": "q_4865", "text": "Returns memory overhead."} {"_id": "q_4866", "text": "Returns memory stats for a function."} {"_id": "q_4867", "text": "Returns module filenames from package.\n\n Args:\n package_path: Path to Python package.\n Returns:\n A set of module filenames."} {"_id": "q_4868", "text": "Runs function in separate process.\n\n This function is used instead of a decorator, since Python multiprocessing\n module can't serialize decorated function on all platforms."} {"_id": "q_4869", "text": "Initializes profiler with a module."} {"_id": "q_4870", "text": "Initializes profiler with a function."} {"_id": "q_4871", "text": "Replaces sys.argv with proper args to pass to script."} {"_id": "q_4872", "text": "Samples current stack and adds the result to self._stats.\n\n Args:\n signum: Signal that activates handler.\n frame: Frame on top of the stack when signal is handled."} {"_id": "q_4873", "text": "Returns call tree."} {"_id": "q_4874", "text": "Runs statistical profiler on a package."} {"_id": "q_4875", "text": "Runs statistical profiler on a module."} {"_id": "q_4876", "text": "Processes collected stats for UI."} {"_id": "q_4877", "text": "Runs cProfile on a module."} {"_id": "q_4878", "text": "Runs cProfile on 
a function."} {"_id": "q_4879", "text": "Initializes DB."} {"_id": "q_4880", "text": "Returns all existing guestbook records."} {"_id": "q_4881", "text": "Adds single guestbook record."} {"_id": "q_4882", "text": "Handles index.html requests."} {"_id": "q_4883", "text": "Handles HTTP POST requests."} {"_id": "q_4884", "text": "Sends HTTP response code, message and headers."} {"_id": "q_4885", "text": "Fills code heatmap and execution count dictionaries."} {"_id": "q_4886", "text": "Skips lines in src_code specified by skip map."} {"_id": "q_4887", "text": "Calculates heatmap for package."} {"_id": "q_4888", "text": "Formats heatmap for UI."} {"_id": "q_4889", "text": "Runs profilers on run_object.\n\n Args:\n run_object: An object (string or tuple) for profiling.\n prof_config: A string with profilers configuration.\n verbose: True if info about running profilers should be shown.\n Returns:\n An ordered dictionary with collected stats.\n Raises:\n AmbiguousConfigurationError: when prof_config is ambiguous.\n BadOptionError: when unknown options are present in configuration."} {"_id": "q_4890", "text": "Runs profilers on a function.\n\n Args:\n func: A Python function.\n options: A string with profilers configuration (i.e. 
'cmh').\n args: func non-keyword arguments.\n kwargs: func keyword arguments.\n host: Host name to send collected data.\n port: Port number to send collected data.\n\n Returns:\n A result of func execution."} {"_id": "q_4891", "text": "Get information about a specific template.\n\n :param template_id: The unique id for the template.\n :type template_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"} {"_id": "q_4892", "text": "Delete a specific template.\n\n :param template_id: The unique id for the template.\n :type template_id: :py:class:`str`"} {"_id": "q_4893", "text": "The MD5 hash of the lowercase version of the list member's email.\n Used as subscriber_hash\n\n :param member_email: The member's email address\n :type member_email: :py:class:`str`\n :returns: The MD5 hash in hex\n :rtype: :py:class:`str`"} {"_id": "q_4894", "text": "Function that verifies that the string passed is a valid url.\n\n Original regex author Diego Perini (http://www.iport.it)\n regex ported to Python by adamrofer (https://github.com/adamrofer)\n Used under MIT license.\n\n :param url:\n :return: Nothing"} {"_id": "q_4895", "text": "Given two dicts, x and y, merge them into a new dict as a shallow copy.\n\n The result only differs from `x.update(y)` in the way that it handles list\n values when both x and y have list values for the same key. In which case\n the returned dictionary, z, has a value according to:\n z[key] = x[key] + y[key]\n\n :param x: The first dictionary\n :type x: :py:class:`dict`\n :param y: The second dictionary\n :type y: :py:class:`dict`\n :returns: The merged dictionary\n :rtype: :py:class:`dict`"} {"_id": "q_4896", "text": "Batch subscribe or unsubscribe list members.\n\n Only the members array is required in the request body parameters.\n Within the members array, each member requires an email_address\n and either a status or status_if_new. 
The update_existing parameter\n will also be considered required to help prevent accidental updates\n to existing members and will default to false if not present.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"members\": array*\n [\n {\n \"email_address\": string*,\n \"status\": string* (Must be one of 'subscribed', 'unsubscribed', 'cleaned', or 'pending'),\n \"status_if_new\": string* (Must be one of 'subscribed', 'unsubscribed', 'cleaned', or 'pending')\n }\n ],\n \"update_existing\": boolean*\n }"} {"_id": "q_4897", "text": "Add a new line item to an existing order.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param order_id: The id for the order in a store.\n :type order_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"product_id\": string*,\n \"product_variant_id\": string*,\n \"quantity\": integer*,\n \"price\": number*\n }"} {"_id": "q_4898", "text": "Get links to all other resources available in the API.\n\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"} {"_id": "q_4899", "text": "Retrieve OAuth2-based credentials to associate API calls with your\n application.\n\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"client_id\": string*,\n \"client_secret\": string*\n }"} {"_id": "q_4900", "text": "Get information about a specific authorized application\n\n :param app_id: The unique id for the connected authorized application\n :type app_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"} {"_id": "q_4901", "text": "Add new promo rule to a store\n\n :param store_id: The store id\n :type store_id: :py:class:`str`\n :param data: The request body 
parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"title\": string,\n \"description\": string*,\n \"starts_at\": string,\n \"ends_at\": string,\n \"amount\": number*,\n \"type\": string*,\n \"target\": string*,\n \"enabled\": boolean,\n \"created_at_foreign\": string,\n \"updated_at_foreign\": string,\n }"} {"_id": "q_4902", "text": "Get information about a specific folder used to organize campaigns.\n\n :param folder_id: The unique id for the campaign folder.\n :type folder_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"} {"_id": "q_4903", "text": "Get information about an individual Automation workflow email.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`"} {"_id": "q_4904", "text": "Upload a new image or file to the File Manager.\n\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"name\": string*,\n \"file_data\": string*\n }"} {"_id": "q_4905", "text": "Get information about a specific file in the File Manager.\n\n :param file_id: The unique id for the File Manager file.\n :type file_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"} {"_id": "q_4906", "text": "Update a file in the File Manager.\n\n :param file_id: The unique id for the File Manager file.\n :type file_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"name\": string*,\n \"file_data\": string*\n }"} {"_id": "q_4907", "text": "Remove a specific file from the File Manager.\n\n :param file_id: The unique id for the File Manager file.\n :type file_id: :py:class:`str`"} {"_id": "q_4908", "text": "Get information about subscribers who were removed from 
an Automation\n workflow.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`"} {"_id": "q_4909", "text": "Create a new webhook for a specific list.\n\n The documentation does not include any required request body\n parameters but the url parameter is being listed here as a required\n parameter in documentation and error-checking based on the description\n of the method\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"url\": string*\n }"} {"_id": "q_4910", "text": "Get information about a specific webhook.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param webhook_id: The unique id for the webhook.\n :type webhook_id: :py:class:`str`"} {"_id": "q_4911", "text": "Update the settings for an existing webhook.\n\n :param list_id: The unique id for the list\n :type list_id: :py:class:`str`\n :param webhook_id: The unique id for the webhook\n :type webhook_id: :py:class:`str`"} {"_id": "q_4912", "text": "Delete a specific webhook in a list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param webhook_id: The unique id for the webhook.\n :type webhook_id: :py:class:`str`"} {"_id": "q_4913", "text": "returns the specified list segment."} {"_id": "q_4914", "text": "updates an existing list segment."} {"_id": "q_4915", "text": "removes an existing list segment from the list. 
This cannot be undone."} {"_id": "q_4916", "text": "adds a new segment to the list."} {"_id": "q_4917", "text": "Get the metadata returned after authentication"} {"_id": "q_4918", "text": "Get details about an individual conversation.\n\n :param conversation_id: The unique id for the conversation.\n :type conversation_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"} {"_id": "q_4919", "text": "Get information about members who have unsubscribed from a specific\n campaign.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`\n :param get_all: Should the query get all results\n :type get_all: :py:class:`bool`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []\n queryparams['count'] = integer\n queryparams['offset'] = integer"} {"_id": "q_4920", "text": "Get information about an Automation email queue.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`"} {"_id": "q_4921", "text": "Get information about a specific subscriber in an Automation email\n queue.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`"} {"_id": "q_4922", "text": "Pause an RSS-Driven campaign.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`"} {"_id": "q_4923", "text": "Replicate a campaign in saved or send status.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`"} {"_id": 
"q_4924", "text": "Resume an RSS-Driven campaign.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`"} {"_id": "q_4925", "text": "Send a MailChimp campaign. For RSS Campaigns, the campaign will send\n according to its schedule. All other campaigns will send immediately.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`"} {"_id": "q_4926", "text": "Add a new customer to a store.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"email_address\": string*,\n \"opt_in_status\": boolean*\n }"} {"_id": "q_4927", "text": "Add or update a product variant.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param product_id: The id for the product of a store.\n :type product_id: :py:class:`str`\n :param variant_id: The id for the product variant.\n :type variant_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"title\": string*\n }"} {"_id": "q_4928", "text": "Update a specific feedback message for a campaign.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`\n :param feedback_id: The unique id for the feedback message.\n :type feedback_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"message\": string*\n }"} {"_id": "q_4929", "text": "Get information about a specific merge field in a list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param merge_id: The id for the merge field.\n :type merge_id: :py:class:`str`"} {"_id": "q_4930", "text": "Get information about a specific batch webhook.\n\n :param batch_webhook_id: The unique id for the batch webhook.\n :type batch_webhook_id: :py:class:`str`\n :param queryparams: The query string 
parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"} {"_id": "q_4931", "text": "Update a webhook that will fire whenever any batch request completes\n processing.\n\n :param batch_webhook_id: The unique id for the batch webhook.\n :type batch_webhook_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"url\": string*\n }"} {"_id": "q_4932", "text": "Add a new image to the product.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param product_id: The id for the product of a store.\n :type product_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"url\": string*\n }"} {"_id": "q_4933", "text": "Get information about a specific product image.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param product_id: The id for the product of a store.\n :type product_id: :py:class:`str`\n :param image_id: The id for the product image.\n :type image_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"} {"_id": "q_4934", "text": "Post a new message to a conversation.\n\n :param conversation_id: The unique id for the conversation.\n :type conversation_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"from_email\": string*,\n \"read\": boolean*\n }"} {"_id": "q_4935", "text": "Add a new order to a store.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"id\": string*,\n \"customer\": object*\n {\n \"id\": string*\n },\n \"currency_code\": string*,\n \"order_total\": number*,\n \"lines\": array*\n [\n {\n \"id\": string*,\n \"product_id\": string*,\n \"product_variant_id\": string*,\n \"quantity\": integer*,\n \"price\": number*\n 
}\n ]\n }"} {"_id": "q_4936", "text": "Update tags for a specific subscriber.\n\n The documentation lists only the tags request body parameter so it is\n being documented and error-checked as if it were required based on the\n description of the method.\n\n The data list needs to include a \"status\" key. This determines if the\n tag should be added or removed from the user:\n\n data = {\n 'tags': [\n {'name': 'foo', 'status': 'active'},\n {'name': 'bar', 'status': 'inactive'}\n ]\n }\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"tags\": list*\n }"} {"_id": "q_4937", "text": "Update a specific segment in a list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param segment_id: The unique id for the segment.\n :type segment_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"name\": string*\n }"} {"_id": "q_4938", "text": "Update a specific folder used to organize templates.\n\n :param folder_id: The unique id for the File Manager folder.\n :type folder_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"name\": string*\n }"} {"_id": "q_4939", "text": "Add a new member to the list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"status\": string*, (Must be one of 'subscribed', 'unsubscribed', 'cleaned',\n 'pending', or 'transactional')\n \"email_address\": string*\n }"} {"_id": "q_4940", "text": "Update information for a specific list member.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param 
subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`"} {"_id": "q_4941", "text": "Add or update a list member.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"email_address\": string*,\n \"status_if_new\": string* (Must be one of 'subscribed',\n 'unsubscribed', 'cleaned', 'pending', or 'transactional')\n }"} {"_id": "q_4942", "text": "Delete a member from a list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`"} {"_id": "q_4943", "text": "Delete permanently a member from a list.\n\n :param list_id: The unique id for the list.\n :type list_id: :py:class:`str`\n :param subscriber_hash: The MD5 hash of the lowercase version of the\n list member\u2019s email address.\n :type subscriber_hash: :py:class:`str`"} {"_id": "q_4944", "text": "Pause an automated email.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`"} {"_id": "q_4945", "text": "Start an automated email.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`"} {"_id": "q_4946", "text": "Removes an individual Automation workflow email.\n\n :param workflow_id: The unique id for the Automation workflow.\n :type 
workflow_id: :py:class:`str`\n :param email_id: The unique id for the Automation workflow email.\n :type email_id: :py:class:`str`"} {"_id": "q_4947", "text": "Create a new MailChimp campaign.\n\n The ValueError raised by an invalid type in data does not mention\n 'absplit' as a potential value because the documentation indicates\n that the absplit type has been deprecated.\n\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"recipients\": object*\n {\n \"list_id\": string*\n },\n \"settings\": object*\n {\n \"subject_line\": string*,\n \"from_name\": string*,\n \"reply_to\": string*\n },\n \"variate_settings\": object* (Required if type is \"variate\")\n {\n \"winner_criteria\": string* (Must be one of \"opens\", \"clicks\", \"total_revenue\", or \"manual\")\n },\n \"rss_opts\": object* (Required if type is \"rss\")\n {\n \"feed_url\": string*,\n \"frequency\": string* (Must be one of \"daily\", \"weekly\", or \"monthly\")\n },\n \"type\": string* (Must be one of \"regular\", \"plaintext\", \"rss\", \"variate\", or \"absplit\")\n }"} {"_id": "q_4948", "text": "Update some or all of the settings for a specific campaign.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`\n :param data: The request body parameters\n :type data: :py:class:`dict`\n data = {\n \"settings\": object*\n {\n \"subject_line\": string*,\n \"from_name\": string*,\n \"reply_to\": string*\n },\n }"} {"_id": "q_4949", "text": "Remove a campaign from your MailChimp account.\n\n :param campaign_id: The unique id for the campaign.\n :type campaign_id: :py:class:`str`"} {"_id": "q_4950", "text": "Delete a cart.\n\n :param store_id: The store id.\n :type store_id: :py:class:`str`\n :param cart_id: The id for the cart.\n :type cart_id: :py:class:`str`\n :param line_id: The id for the line item of a cart.\n :type line_id: :py:class:`str`"} {"_id": "q_4951", "text": "Get a summary of batch requests that have been made.\n\n 
:param get_all: Should the query get all results\n :type get_all: :py:class:`bool`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []\n queryparams['count'] = integer\n queryparams['offset'] = integer"} {"_id": "q_4952", "text": "Get the status of a batch request.\n\n :param batch_id: The unique id for the batch operation.\n :type batch_id: :py:class:`str`\n :param queryparams: The query string parameters\n queryparams['fields'] = []\n queryparams['exclude_fields'] = []"} {"_id": "q_4953", "text": "Policies returned from boto3 are massive, ugly, and difficult to read.\n This method flattens and reformats the policy.\n\n :param policy: Result from invoking describe_load_balancer_policies(...)\n :return: Returns a tuple containing policy_name and the reformatted policy dict."} {"_id": "q_4954", "text": "Retrieve key from Cache.\n\n :param key: key to look up in cache.\n :type key: ``object``\n\n :param delete_if_expired: remove value from cache if it is expired.\n Default is True.\n :type delete_if_expired: ``bool``\n\n :returns: value from cache or None\n :rtype: varies or None"} {"_id": "q_4955", "text": "Get access details in cache."} {"_id": "q_4956", "text": "Gets the VPC Flow Logs for a VPC"} {"_id": "q_4957", "text": "Gets the Classic Link details about a VPC"} {"_id": "q_4958", "text": "Gets the VPC Route Tables"} {"_id": "q_4959", "text": "Private GCP client builder.\n\n :param project: Google Cloud project string.\n :type project: ``str``\n\n :param mod_name: Module name to load. Should be found in sys.path.\n :type mod_name: ``str``\n\n :param pkg_name: package name that mod_name is part of. Default is 'google.cloud' .\n :type pkg_name: ``str``\n\n :param key_file: Default is None.\n :type key_file: ``str`` or None\n\n :param http_auth: httplib2 authorized client. Default is None.\n :type http_auth: :class: `HTTPLib2`\n\n :param user_agent: User Agent string to use in requests. 
Default is None.\n :type user_agent: ``str`` or None\n\n :return: GCP client\n :rtype: ``object``"} {"_id": "q_4960", "text": "Google http_auth helper.\n\n If key_file is not specified, default credentials will be used.\n\n If scopes is specified (and key_file), will be used instead of DEFAULT_SCOPES\n\n :param key_file: path to key file to use. Default is None\n :type key_file: ``str``\n\n :param scopes: scopes to set. Default is DEFAULT_SCOPES\n :type scopes: ``list``\n\n :param user_agent: User Agent string to use in requests. Default is None.\n :type user_agent: ``str`` or None\n\n :return: HTTPLib2 authorized client.\n :rtype: :class: `HTTPLib2`"} {"_id": "q_4961", "text": "Google build client helper.\n\n :param service: service to build client for\n :type service: ``str``\n\n :param api_version: API version to use.\n :type api_version: ``str``\n\n :param http_auth: Initialized HTTP client to use.\n :type http_auth: ``object``\n\n :return: google-python-api client initialized to use 'service'\n :rtype: ``object``"} {"_id": "q_4962", "text": "Call decorated function for each item in project list.\n\n Note: the function 'decorated' is expected to return a value plus a dictionary of exceptions.\n\n If item in list is a dictionary, we look for a 'project' and 'key_file' entry, respectively.\n If item in list is of type string_types, we assume it is the project string. Default credentials\n will be used by the underlying client library.\n\n :param projects: list of project strings or list of dictionaries\n Example: {'project':..., 'keyfile':...}. 
Required.\n :type projects: ``list`` of ``str`` or ``list`` of ``dict``\n\n :param key_file: path on disk to keyfile, for use with all projects\n :type key_file: ``str``\n\n :returns: tuple containing a list of function output and an exceptions map\n :rtype: ``tuple`` of ``list``, ``dict``"} {"_id": "q_4963", "text": "Helper to get creds out of kwargs."} {"_id": "q_4964", "text": "Manipulate connection keywords.\n \n Modifies keywords based on connection type.\n\n There is an assumption here that the client has\n already been created and that these keywords are being\n passed into methods for interacting with various services.\n\n Current modifications:\n - if conn_type is not cloud and module is 'compute', \n then rewrite project as name.\n - if conn_type is cloud and module is 'storage',\n then remove 'project' from dict.\n\n :param conn_type: E.g. 'cloud' or 'general'\n :type conn_type: ``str``\n\n :param kwargs: Dictionary of keywords sent in by user.\n :type kwargs: ``dict``\n\n :param module_name: Name of specific module that will be loaded.\n Default is None.\n :type module_name: ``str`` or None\n\n :returns: kwargs with client and module specific changes\n :rtype: ``dict``"} {"_id": "q_4965", "text": "General aggregated list function for the GCE service."} {"_id": "q_4966", "text": "General list function for the GCE service."} {"_id": "q_4967", "text": "General list function for Google APIs."} {"_id": "q_4968", "text": "Retrieve detailed cache information."} {"_id": "q_4969", "text": "Get default User Agent String.\n\n Try to import pkg_name to get an accurate version number.\n \n :return: string"} {"_id": "q_4970", "text": "Rule='string'"} {"_id": "q_4971", "text": "List objects in bucket.\n\n :param Bucket: name of bucket\n :type Bucket: ``str``\n\n :returns: list of objects in bucket\n :rtype: ``list``"} {"_id": "q_4972", "text": "Calls _modify and either passes the inflection.camelize method or the inflection.underscore method.\n\n :param item: dictionary 
representing item to be modified\n :param output: string 'camelized' or 'underscored'\n :return:"} {"_id": "q_4973", "text": "Retrieve the currently active policy version document for every managed policy that is attached to the role."} {"_id": "q_4974", "text": "Fetch the base IAM Server Certificate."} {"_id": "q_4975", "text": "Used to obtain a boto3 client or resource connection.\n For cross account, provide both account_number and assume_role.\n\n :usage:\n\n # Same Account:\n client = boto3_cached_conn('iam')\n resource = boto3_cached_conn('iam', service_type='resource')\n\n # Cross Account Client:\n client = boto3_cached_conn('iam', account_number='000000000000', assume_role='role_name')\n\n # Cross Account Resource:\n resource = boto3_cached_conn('iam', service_type='resource', account_number='000000000000', assume_role='role_name')\n\n :param service: AWS service (i.e. 'iam', 'ec2', 'kms')\n :param service_type: 'client' or 'resource'\n :param future_expiration_minutes: Connections will expire from the cache\n when their expiration is within this many minutes of the present time. [Default 15]\n :param account_number: Required if assume_role is provided.\n :param assume_role: Name of the role to assume into for account described by account_number.\n :param session_name: Session name to attach to requests. [Default 'cloudaux']\n :param region: Region name for connection. 
[Default us-east-1]\n :param return_credentials: Indicates if the STS credentials should be returned with the client [Default False]\n :param external_id: Optional external id to pass to sts:AssumeRole.\n See https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html\n :param arn_partition: Optional parameter to specify other aws partitions such as aws-us-gov for aws govcloud\n :return: boto3 client or resource connection"} {"_id": "q_4976", "text": "just store the AWS formatted rules"} {"_id": "q_4977", "text": "Get the inline policies for the group."} {"_id": "q_4978", "text": "Get a list of the managed policy names that are attached to the group."} {"_id": "q_4979", "text": "Gets a list of the usernames that are a part of this group."} {"_id": "q_4980", "text": "Fetch the base IAM Group."} {"_id": "q_4981", "text": "Returns a list of stores in the catalog. If workspaces is specified will only return stores in those workspaces.\n If names is specified, will only return stores that match.\n names can either be a comma delimited string or an array.\n Will return an empty list if no stores are found."} {"_id": "q_4982", "text": "Returns a single store object.\n Will return None if no store is found.\n Will raise an error if more than one store with the same name is found."} {"_id": "q_4983", "text": "List granules of an imagemosaic"} {"_id": "q_4984", "text": "Returns all coverages in a coverage store"} {"_id": "q_4985", "text": "Publish a featuretype from data in an existing store"} {"_id": "q_4986", "text": "returns a single resource object.\n Will return None if no resource is found.\n Will raise an error if more than one resource with the same name is found."} {"_id": "q_4987", "text": "returns a single layergroup object.\n Will return None if no layergroup is found.\n Will raise an error if more than one layergroup with the same name is found."} {"_id": "q_4988", "text": "returns a single style object.\n Will return None if no 
style is found.\n Will raise an error if more than one style with the same name is found."} {"_id": "q_4989", "text": "Returns a list of workspaces in the catalog.\n If names is specified, will only return workspaces that match.\n names can either be a comma delimited string or an array.\n Will return an empty list if no workspaces are found."} {"_id": "q_4990", "text": "returns a single workspace object.\n Will return None if no workspace is found.\n Will raise an error if more than one workspace with the same name is found."} {"_id": "q_4991", "text": "Extract a metadata link tuple from an xml node"} {"_id": "q_4992", "text": "Create a URL from a list of path segments and an optional dict of query\n parameters."} {"_id": "q_4993", "text": "GeoServer's REST API uses ZIP archives as containers for file formats such\n as Shapefile and WorldImage which include several 'boxcar' files alongside\n the main data. In such archives, GeoServer assumes that all of the relevant\n files will have the same base name and appropriate extensions, and live in\n the root of the ZIP archive. This method produces a zip file that matches\n these expectations, based on a basename, and a dict of extensions to paths or\n file-like objects. 
The client code is responsible for deleting the zip\n archive when it's done."} {"_id": "q_4994", "text": "Extract metadata Dimension Info from an xml node"} {"_id": "q_4995", "text": "Extract metadata Dynamic Default Values from an xml node"} {"_id": "q_4996", "text": "Change semantic of MOVE to change resource tags."} {"_id": "q_4997", "text": "Return _VirtualResource object for path.\n\n path is expected to be\n categoryType/category/name/artifact\n for example:\n 'by_tag/cool/My doc 2/info.html'\n\n See DAVProvider.get_resource_inst()"} {"_id": "q_4998", "text": "Add a provider to the provider_map routing table."} {"_id": "q_4999", "text": "Get the registered DAVProvider for a given path.\n\n Returns:\n tuple: (share, provider)"} {"_id": "q_5000", "text": "Computes digest hash.\n\n Calculation of the A1 (HA1) part is delegated to the dc interface method\n `digest_auth_user()`.\n\n Args:\n realm (str):\n user_name (str):\n method (str): WebDAV Request Method\n uri (str):\n nonce (str): server generated nonce value\n cnonce (str): client generated cnonce value\n qop (str): quality of protection\n nc (str) (number), nonce counter incremented by client\n Returns:\n MD5 hash string\n or False if user rejected by domain controller"} {"_id": "q_5001", "text": "Handle a COPY request natively."} {"_id": "q_5002", "text": "Read log entries into a list of dictionaries."} {"_id": "q_5003", "text": "Return a dictionary containing all files under source control.\n\n dirinfos:\n Dictionary containing direct members for every collection.\n {folderpath: (collectionlist, filelist), ...}\n files:\n Sorted list of all file paths in the manifest.\n filedict:\n Dictionary containing all files under source control.\n\n ::\n\n {'dirinfos': {'': (['wsgidav',\n 'tools',\n 'WsgiDAV.egg-info',\n 'tests'],\n ['index.rst',\n 'wsgidav MAKE_DAILY_BUILD.launch',\n 'wsgidav run_server.py DEBUG.launch',\n 'wsgidav-paste.conf',\n ...\n 'setup.py']),\n 'wsgidav': (['addons', 'samples', 'server', 
'interfaces'],\n ['__init__.pyc',\n 'dav_error.pyc',\n 'dav_provider.pyc',\n ...\n 'wsgidav_app.py']),\n },\n 'files': ['.hgignore',\n 'ADDONS.txt',\n 'wsgidav/samples/mysql_dav_provider.py',\n ...\n ],\n 'filedict': {'.hgignore': True,\n 'README.txt': True,\n 'WsgiDAV.egg-info/PKG-INFO': True,\n }\n }"} {"_id": "q_5004", "text": "Return preferred mapping for a resource mapping.\n\n Different URLs may map to the same resource, e.g.:\n '/a/b' == '/A/b' == '/a/b/'\n get_preferred_path() returns the same value for all these variants, e.g.:\n '/a/b/' (assuming resource names considered case insensitive)\n\n @param path: a UTF-8 encoded, unquoted byte string.\n @return: a UTF-8 encoded, unquoted byte string."} {"_id": "q_5005", "text": "Convert path to a URL that can be passed to XML responses.\n\n Byte string, UTF-8 encoded, quoted.\n\n See http://www.webdav.org/specs/rfc4918.html#rfc.section.8.3\n We are using the path-absolute option. i.e. starting with '/'.\n URI ; See section 3.2.1 of [RFC2068]"} {"_id": "q_5006", "text": "Remove all associated dead properties."} {"_id": "q_5007", "text": "Set application location for this resource provider.\n\n @param share_path: a UTF-8 encoded, unquoted byte string."} {"_id": "q_5008", "text": "Convert a refUrl to a path, by stripping the share prefix.\n\n Used to calculate the path from a storage key by inverting get_ref_url()."} {"_id": "q_5009", "text": "Return True, if path maps to an existing collection resource.\n\n This method should only be used, if no other information is queried\n for the path. Otherwise a _DAVResource should be created first."} {"_id": "q_5010", "text": "Convert XML string into etree.Element."} {"_id": "q_5011", "text": "Wrapper for etree.tostring, that takes care of unsupported pretty_print\n option and prepends an encoding header."} {"_id": "q_5012", "text": "Serialize etree.Element.\n\n Note: element may contain more than one child or only text (i.e. no child\n at all). 
Therefore the resulting string may raise an exception, when\n passed back to etree.XML()."} {"_id": "q_5013", "text": "Convert path to absolute if not None."} {"_id": "q_5014", "text": "Read configuration file options into a dictionary."} {"_id": "q_5015", "text": "Run WsgiDAV using paste.httpserver, if Paste is installed.\n\n See http://pythonpaste.org/modules/httpserver.html for more options"} {"_id": "q_5016", "text": "Run WsgiDAV using gevent if gevent is installed.\n\n See\n https://github.com/gevent/gevent/blob/master/src/gevent/pywsgi.py#L1356\n https://github.com/gevent/gevent/blob/master/src/gevent/server.py#L38\n for more options"} {"_id": "q_5017", "text": "Run WsgiDAV using cherrypy.wsgiserver if CherryPy is installed."} {"_id": "q_5018", "text": "Run WsgiDAV using flup.server.fcgi if Flup is installed."} {"_id": "q_5019", "text": "Run WsgiDAV using ext_wsgiutils_server from the wsgidav package."} {"_id": "q_5020", "text": "Handle PROPPATCH request to set or remove a property.\n\n @see http://www.webdav.org/specs/rfc4918.html#METHOD_PROPPATCH"} {"_id": "q_5021", "text": "Handle MKCOL request to create a new collection.\n\n @see http://www.webdav.org/specs/rfc4918.html#METHOD_MKCOL"} {"_id": "q_5022", "text": "Get the data from a chunked transfer."} {"_id": "q_5023", "text": "Get the data from a non-chunked transfer."} {"_id": "q_5024", "text": "Return properties document for path."} {"_id": "q_5025", "text": "Computes digest hash A1 part."} {"_id": "q_5026", "text": "Return a lock dictionary for a token.\n\n If the lock does not exist or is expired, None is returned.\n\n token:\n lock token\n Returns:\n Lock dictionary or \n\n Side effect: if lock is expired, it will be purged and None is returned."} {"_id": "q_5027", "text": "Create a direct lock for a resource path.\n\n path:\n Normalized path (utf8 encoded string, no trailing '/')\n lock:\n lock dictionary, without a token entry\n Returns:\n New unique lock token.: \n - lock['timeout'] may be 
normalized and shorter than requested\n - lock['token'] is added"} {"_id": "q_5028", "text": "Delete lock.\n\n Returns True on success. False, if token does not exist, or is expired."} {"_id": "q_5029", "text": "Delete all entries."} {"_id": "q_5030", "text": "Return readable rep."} {"_id": "q_5031", "text": "Acquire lock and return lock_dict.\n\n principal\n Name of the principal.\n lock_type\n Must be 'write'.\n lock_scope\n Must be 'shared' or 'exclusive'.\n lock_depth\n Must be '0' or 'infinity'.\n lock_owner\n String identifying the owner.\n path\n Resource URL.\n timeout\n Seconds to live\n\n This function does NOT check, if the new lock creates a conflict!"} {"_id": "q_5032", "text": "Check for permissions and acquire a lock.\n\n On success return new lock dictionary.\n On error raise a DAVError with an embedded DAVErrorCondition."} {"_id": "q_5033", "text": "Set new timeout for lock, if existing and valid."} {"_id": "q_5034", "text": "Return lock_dict, or None, if not found or invalid.\n\n Side effect: if lock is expired, it will be purged and None is returned.\n\n key:\n name of lock attribute that will be returned instead of a dictionary."} {"_id": "q_5035", "text": "Acquire a read lock for the current thread, waiting at most\n timeout seconds or doing a non-blocking check in case timeout is <= 0.\n\n In case timeout is None, the call to acquire_read blocks until the\n lock request can be serviced.\n\n In case the timeout expires before the lock could be serviced, a\n RuntimeError is thrown."} {"_id": "q_5036", "text": "Acquire a write lock for the current thread, waiting at most\n timeout seconds or doing a non-blocking check in case timeout is <= 0.\n\n In case the write lock cannot be serviced due to the deadlock\n condition mentioned above, a ValueError is raised.\n\n In case timeout is None, the call to acquire_write blocks until the\n lock request can be serviced.\n\n In case the timeout expires before the lock could be serviced, a\n RuntimeError is 
thrown."} {"_id": "q_5037", "text": "Release the currently held lock.\n\n In case the current thread holds no lock, a ValueError is thrown."} {"_id": "q_5038", "text": "Initialize base logger named 'wsgidav'.\n\n The base logger is filtered by the `verbose` configuration option.\n Log entries will have a time stamp and thread id.\n\n :Parameters:\n verbose : int\n Verbosity configuration (0..5)\n enable_loggers : string list\n List of module logger names, that will be switched to DEBUG level.\n\n Module loggers\n ~~~~~~~~~~~~~~\n Module loggers (e.g 'wsgidav.lock_manager') are named loggers, that can be\n independently switched to DEBUG mode.\n\n Except for verbosity, they will inherit settings from the base logger.\n\n They will suppress DEBUG level messages, unless they are enabled by passing\n their name to util.init_logging().\n\n If enabled, module loggers will print DEBUG messages, even if verbose == 3.\n\n Example initialize and use a module logger, that will generate output,\n if enabled (and verbose >= 2)::\n\n _logger = util.get_module_logger(__name__)\n [..]\n _logger.debug(\"foo: '{}'\".format(s))\n\n This logger would be enabled by passing its name to init_logging()::\n\n enable_loggers = [\"lock_manager\",\n \"property_manager\",\n ]\n util.init_logging(2, enable_loggers)\n\n\n Log Level Matrix\n ~~~~~~~~~~~~~~~~\n\n +---------+--------+---------------------------------------------------------------+\n | Verbose | Option | Log level |\n | level | +-------------+------------------------+------------------------+\n | | | base logger | module logger(default) | module logger(enabled) |\n +=========+========+=============+========================+========================+\n | 0 | -qqq | CRITICAL | CRITICAL | CRITICAL |\n +---------+--------+-------------+------------------------+------------------------+\n | 1 | -qq | ERROR | ERROR | ERROR |\n +---------+--------+-------------+------------------------+------------------------+\n | 2 | -q | WARN | WARN | 
WARN |\n +---------+--------+-------------+------------------------+------------------------+\n | 3 | | INFO | INFO | **DEBUG** |\n +---------+--------+-------------+------------------------+------------------------+\n | 4 | -v | DEBUG | DEBUG | DEBUG |\n +---------+--------+-------------+------------------------+------------------------+\n | 5 | -vv | DEBUG | DEBUG | DEBUG |\n +---------+--------+-------------+------------------------+------------------------+"} {"_id": "q_5039", "text": "Read 1 byte from wsgi.input, if this has not been done yet.\n\n Returning a response without reading from a request body might confuse the\n WebDAV client.\n This may happen, if an exception like '401 Not authorized', or\n '500 Internal error' was raised BEFORE anything was read from the request\n stream.\n\n See GC issue 13, issue 23\n See http://groups.google.com/group/paste-users/browse_frm/thread/fc0c9476047e9a47?hl=en\n\n Note that with persistent sessions (HTTP/1.1) we must make sure, that the\n 'Connection: closed' header is set with the response, to prevent reusing\n the current stream."} {"_id": "q_5040", "text": "Append segments to URI.\n\n Example: join_uri(\"/a/b\", \"c\", \"d\")"} {"_id": "q_5041", "text": "Return True, if childUri is a child of parentUri.\n\n This function accounts for the fact that '/a/b/c' and 'a/b/c/' are\n children of '/a/b' (and also of '/a/b/').\n Note that '/a/b/cd' is NOT a child of 'a/b/c'."} {"_id": "q_5042", "text": "Read request body XML into an etree.Element.\n\n Return None, if no request body was sent.\n Raise HTTP_BAD_REQUEST, if something else went wrong.\n\n TODO: this is a very relaxed interpretation: should we raise HTTP_BAD_REQUEST\n instead, if CONTENT_LENGTH is missing, invalid, or 0?\n\n RFC: For compatibility with HTTP/1.0 applications, HTTP/1.1 requests containing\n a message-body MUST include a valid Content-Length header field unless the\n server is known to be HTTP/1.1 compliant.\n If a request contains a message-body 
and a Content-Length is not given, the\n server SHOULD respond with 400 (bad request) if it cannot determine the\n length of the message, or with 411 (length required) if it wishes to insist\n on receiving a valid Content-Length.\"\n\n So I'd say, we should accept a missing CONTENT_LENGTH, and try to read the\n content anyway.\n But WSGI doesn't guarantee to support input.read() without length(?).\n At least it locked, when I tried it with a request that had a missing\n content-type and no body.\n\n Current approach: if CONTENT_LENGTH is\n\n - valid and >0:\n read body (exactly bytes) and parse the result.\n - 0:\n Assume empty body and return None or raise exception.\n - invalid (negative or not a number):\n raise HTTP_BAD_REQUEST\n - missing:\n NOT: Try to read body until end and parse the result.\n BUT: assume '0'\n - empty string:\n WSGI allows it to be empty or absent: treated like 'missing'."} {"_id": "q_5043", "text": "Start a WSGI response for a DAVError or status code."} {"_id": "q_5044", "text": "Return base64 encoded binarystring."} {"_id": "q_5045", "text": "Use the mimetypes module to lookup the type for an extension.\n\n This function also adds some extensions required for HTML5"} {"_id": "q_5046", "text": "Return probability estimates for the RDD containing test vector X.\n\n Parameters\n ----------\n X : RDD containing array-like items, shape = [m_samples, n_features]\n\n Returns\n -------\n C : RDD with array-like items, shape = [n_samples, n_classes]\n Returns the probability of the samples for each class in\n the model for each RDD block. 
The columns correspond to the classes\n in sorted order, as they appear in the attribute `classes_`."} {"_id": "q_5047", "text": "Return log-probability estimates for the RDD containing the\n test vector X.\n\n Parameters\n ----------\n X : RDD containing array-like items, shape = [m_samples, n_features]\n\n Returns\n -------\n C : RDD with array-like items, shape = [n_samples, n_classes]\n Returns the log-probability of the samples for each class in\n the model for each RDD block. The columns correspond to the classes\n in sorted order, as they appear in the attribute `classes_`."} {"_id": "q_5048", "text": "Fit Gaussian Naive Bayes according to X, y\n\n Parameters\n ----------\n X : array-like, shape (n_samples, n_features)\n Training vectors, where n_samples is the number of samples\n and n_features is the number of features.\n\n y : array-like, shape (n_samples,)\n Target values.\n\n Returns\n -------\n self : object\n Returns self."} {"_id": "q_5049", "text": "Sort features by name\n\n Returns a reordered matrix and modifies the vocabulary in place"} {"_id": "q_5050", "text": "Learn the vocabulary dictionary and return term-document matrix.\n\n This is equivalent to fit followed by transform, but more efficiently\n implemented.\n\n Parameters\n ----------\n Z : iterable or DictRDD with column 'X'\n An iterable of raw_documents which yields either str, unicode or\n file objects; or a DictRDD with column 'X' containing such\n iterables.\n\n Returns\n -------\n X : array, [n_samples, n_features] or DictRDD\n Document-term matrix."} {"_id": "q_5051", "text": "Transform documents to document-term matrix.\n\n Extract token counts out of raw text documents using the vocabulary\n fitted with fit or the one provided to the constructor.\n\n Parameters\n ----------\n raw_documents : iterable\n An iterable which yields either str, unicode or file objects.\n\n Returns\n -------\n X : sparse matrix, [n_samples, n_features]\n Document-term matrix."} {"_id": "q_5052", "text": 
"Wraps a Scikit-learn Linear model's fit method to use with RDD\n input.\n\n Parameters\n ----------\n cls : class object\n The sklearn linear model's class to wrap.\n Z : TupleRDD or DictRDD\n The distributed train data in a DictRDD.\n\n Returns\n -------\n self: the wrapped class"} {"_id": "q_5053", "text": "Wraps a Scikit-learn Linear model's predict method to use with RDD\n input.\n\n Parameters\n ----------\n cls : class object\n The sklearn linear model's class to wrap.\n Z : ArrayRDD\n The distributed data to predict in a DictRDD.\n\n Returns\n -------\n self: the wrapped class"} {"_id": "q_5054", "text": "Fit linear model.\n\n Parameters\n ----------\n Z : DictRDD with (X, y) values\n X containing numpy array or sparse matrix - The training data\n y containing the target values\n\n Returns\n -------\n self : returns an instance of self."} {"_id": "q_5055", "text": "Fit all the transforms one after the other and transform the\n data, then fit the transformed data using the final estimator.\n\n Parameters\n ----------\n Z : ArrayRDD, TupleRDD or DictRDD\n Input data in blocked distributed format.\n\n Returns\n -------\n self : SparkPipeline"} {"_id": "q_5056", "text": "Applies transforms to the data, and the score method of the\n final estimator. 
Valid only if the final estimator implements\n score."} {"_id": "q_5057", "text": "Actual fitting, performing the search over parameters."} {"_id": "q_5058", "text": "Compute the score of an estimator on a given test set."} {"_id": "q_5059", "text": "Predict the closest cluster each sample in X belongs to.\n\n In the vector quantization literature, `cluster_centers_` is called\n the code book and each value returned by `predict` is the index of\n the closest code in the code book.\n\n Parameters\n ----------\n X : ArrayRDD containing array-like, sparse matrix\n New data to predict.\n\n Returns\n -------\n labels : ArrayRDD with predictions\n Index of the cluster each sample belongs to."} {"_id": "q_5060", "text": "Distributed method to predict class labels for samples in X.\n\n Parameters\n ----------\n X : ArrayRDD containing {array-like, sparse matrix}\n Samples.\n\n Returns\n -------\n C : ArrayRDD\n Predicted class label per sample."} {"_id": "q_5061", "text": "Learn a list of feature name -> indices mappings.\n\n Parameters\n ----------\n Z : DictRDD with column 'X'\n Dict(s) or Mapping(s) from feature names (arbitrary Python\n objects) to feature values (strings or convertible to dtype).\n\n Returns\n -------\n self"} {"_id": "q_5062", "text": "Fit LSI model to X and perform dimensionality reduction on X.\n\n Parameters\n ----------\n X : {array-like, sparse matrix}, shape (n_samples, n_features)\n Training data.\n\n Returns\n -------\n X_new : array, shape (n_samples, n_components)\n Reduced version of X. 
This will always be a dense array."} {"_id": "q_5063", "text": "Pack rdd with a specific collection constructor."} {"_id": "q_5064", "text": "Pack rdd of tuples as tuples of arrays or scipy.sparse matrices."} {"_id": "q_5065", "text": "Block an RDD\n\n Parameters\n ----------\n\n rdd : RDD\n RDD of data points to block into either numpy arrays,\n scipy sparse matrices, or pandas data frames.\n Type of data point will be automatically inferred\n and blocked accordingly.\n\n bsize : int, optional, default None\n Size of each block (number of elements), if None all data points\n from each partition will be combined in a block.\n\n Returns\n -------\n\n rdd : ArrayRDD or TupleRDD or DictRDD\n The transformed rdd with added functionality"} {"_id": "q_5066", "text": "Returns the shape of the data."} {"_id": "q_5067", "text": "Returns the data as numpy.array from each partition."} {"_id": "q_5068", "text": "Execute a transformation on a column or columns. Returns the modified\n DictRDD.\n\n Parameters\n ----------\n f : function\n The function to execute on the columns.\n column : {str, list or None}\n The column(s) to transform. 
If None is specified the method is\n equivalent to map.\n dtype : {str, list or None}\n The dtype of the column(s) to transform.\n\n Returns\n -------\n result : DictRDD\n DictRDD with transformed column(s).\n\n TODO: optimize"} {"_id": "q_5069", "text": "Remove objects from the group.\n\n Parameters\n ----------\n to_remove : list\n A list of cobra objects to remove from the group"} {"_id": "q_5070", "text": "Perform geometric FBA to obtain a unique, centered flux distribution.\n\n Geometric FBA [1]_ formulates the problem as a polyhedron and\n then solves it by bounding the convex hull of the polyhedron.\n The bounding forms a box around the convex hull which reduces\n with every iteration and extracts a unique solution in this way.\n\n Parameters\n ----------\n model: cobra.Model\n The model to perform geometric FBA on.\n epsilon: float, optional\n The convergence tolerance of the model (default 1E-06).\n max_tries: int, optional\n Maximum number of iterations (default 200).\n processes : int, optional\n The number of parallel processes to run. If not explicitly passed,\n will be set from the global configuration singleton.\n\n Returns\n -------\n cobra.Solution\n The solution object containing all the constraints required\n for geometric FBA.\n\n References\n ----------\n .. [1] Smallbone, Kieran & Simeonidis, Vangelis. (2009).\n Flux balance analysis: A geometric perspective.\n Journal of theoretical biology.258. 311-5.\n 10.1016/j.jtbi.2009.01.027."} {"_id": "q_5071", "text": "Query the list\n\n Parameters\n ----------\n search_function : a string, regular expression or function\n Used to find the matching elements in the list.\n - a regular expression (possibly compiled), in which case the\n given attribute of the object should match the regular expression.\n - a function which takes one argument and returns True for\n desired values\n\n attribute : string or None\n the name attribute of the object to be passed as argument to the\n `search_function`. 
If this is None, the object itself is used.\n\n Returns\n -------\n DictList\n a new list of objects which match the query\n\n Examples\n --------\n >>> import cobra.test\n >>> model = cobra.test.create_test_model('textbook')\n >>> model.reactions.query(lambda x: x.boundary)\n >>> import re\n >>> regex = re.compile('^g', flags=re.IGNORECASE)\n >>> model.metabolites.query(regex, attribute='name')"} {"_id": "q_5072", "text": "adds elements with id's not already in the model"} {"_id": "q_5073", "text": "extend list by appending elements from the iterable"} {"_id": "q_5074", "text": "extends without checking for uniqueness\n\n This function should only be used internally by DictList when it\n can guarantee elements are already unique (as in when coming from\n self or other DictList). It will be faster because it skips these\n checks."} {"_id": "q_5075", "text": "Determine the position in the list\n\n id: A string or a :class:`~cobra.core.Object.Object`"} {"_id": "q_5076", "text": "insert object before index"} {"_id": "q_5077", "text": "The shadow price in the most recent solution.\n\n Shadow price is the dual value of the corresponding constraint in the\n model.\n\n Warnings\n --------\n * Accessing shadow prices through a `Solution` object is the safer,\n preferred, and only guaranteed to be correct way. You can see how to\n do so easily in the examples.\n * Shadow price is retrieved from the currently defined\n `self._model.solver`. 
The solver status is checked but there are no\n guarantees that the current solver state is the one you are looking\n for.\n * If you modify the underlying model after an optimization, you will\n retrieve the old optimization values.\n\n Raises\n ------\n RuntimeError\n If the underlying model was never optimized beforehand or the\n metabolite is not part of a model.\n OptimizationError\n If the solver status is anything other than 'optimal'.\n\n Examples\n --------\n >>> import cobra\n >>> import cobra.test\n >>> model = cobra.test.create_test_model(\"textbook\")\n >>> solution = model.optimize()\n >>> model.metabolites.glc__D_e.shadow_price\n -0.09166474637510488\n >>> solution.shadow_prices.glc__D_e\n -0.091664746375104883"} {"_id": "q_5078", "text": "Load a cobra model from a file in YAML format.\n\n Parameters\n ----------\n filename : str or file-like\n File path or descriptor that contains the YAML document describing the\n cobra model.\n\n Returns\n -------\n cobra.Model\n The cobra model as represented in the YAML document.\n\n See Also\n --------\n from_yaml : Load from a string."} {"_id": "q_5079", "text": "Some common methods for processing a database of flux information into\n print-ready formats. Used in both model_summary and metabolite_summary."} {"_id": "q_5080", "text": "Coefficient for the reactions in a linear objective.\n\n Parameters\n ----------\n model : cobra model\n the model object that defined the objective\n reactions : list\n an optional list for the reactions to get the coefficients for. All\n reactions if left missing.\n\n Returns\n -------\n dict\n A dictionary where the key is the reaction object and the value is\n the corresponding coefficient. 
Empty dictionary if there are no\n linear terms in the objective."} {"_id": "q_5081", "text": "Check whether a sympy expression references the correct variables.\n\n Parameters\n ----------\n model : cobra.Model\n The model in which to check for variables.\n expression : sympy.Basic\n A sympy expression.\n\n Returns\n -------\n boolean\n True if all referenced variables are contained in model, False\n otherwise."} {"_id": "q_5082", "text": "Set the model objective.\n\n Parameters\n ----------\n model : cobra model\n The model to set the objective for\n value : model.problem.Objective,\n e.g. optlang.glpk_interface.Objective, sympy.Basic or dict\n\n If the model objective is linear, the value can be a new Objective\n object or a dictionary with linear coefficients where each key is a\n reaction and the element the new coefficient (float).\n\n If the objective is not linear and `additive` is true, only values\n of class Objective.\n\n additive : bool\n If true, add the terms to the current objective, otherwise start with\n an empty objective."} {"_id": "q_5083", "text": "Give a string representation for an optlang interface.\n\n Parameters\n ----------\n interface : string, ModuleType\n Full name of the interface in optlang or cobra representation.\n For instance 'optlang.glpk_interface' or 'optlang-glpk'.\n\n Returns\n -------\n string\n The name of the interface as a string"} {"_id": "q_5084", "text": "Choose a solver given a solver name and model.\n\n This will choose a solver compatible with the model and required\n capabilities. 
Also respects model.solver where it can.\n\n Parameters\n ----------\n model : a cobra model\n The model for which to choose the solver.\n solver : str, optional\n The name of the solver to be used.\n qp : boolean, optional\n Whether the solver needs Quadratic Programming capabilities.\n\n Returns\n -------\n solver : an optlang solver interface\n Returns a valid solver for the problem.\n\n Raises\n ------\n SolverNotFound\n If no suitable solver could be found."} {"_id": "q_5085", "text": "Add variables and constraints to a Model's solver object.\n\n Useful for variables and constraints that can not be expressed with\n reactions and lower/upper bounds. Will integrate with the Model's context\n manager in order to revert changes upon leaving the context.\n\n Parameters\n ----------\n model : a cobra model\n The model to which to add the variables and constraints.\n what : list or tuple of optlang variables or constraints.\n The variables or constraints to add to the model. Must be of class\n `model.problem.Variable` or\n `model.problem.Constraint`.\n **kwargs : keyword arguments\n passed to solver.add()"} {"_id": "q_5086", "text": "Remove variables and constraints from a Model's solver object.\n\n Useful to temporarily remove variables and constraints from a Model's\n solver object.\n\n Parameters\n ----------\n model : a cobra model\n The model from which to remove the variables and constraints.\n what : list or tuple of optlang variables or constraints.\n The variables or constraints to remove from the model. Must be of\n class `model.problem.Variable` or\n `model.problem.Constraint`."} {"_id": "q_5087", "text": "Fix current objective as an additional constraint.\n\n When adding constraints to a model, such as done in pFBA which\n minimizes total flux, these constraints can become too powerful,\n resulting in solutions that satisfy optimality but sacrifice too\n much for the original objective function. 
To avoid that, we can fix\n the current objective value as a constraint to ignore solutions that\n give a lower (or higher depending on the optimization direction)\n objective value than the original model.\n\n When done with the model as a context, the modification to the\n objective will be reverted when exiting that context.\n\n Parameters\n ----------\n model : cobra.Model\n The model to operate on\n fraction : float\n The fraction of the optimum the objective is allowed to reach.\n bound : float, None\n The bound to use instead of fraction of maximum optimal value. If\n not None, fraction is ignored.\n name : str\n Name of the objective. May contain one `{}` placeholder which is filled\n with the name of the old objective.\n\n Returns\n -------\n The value of the optimized objective * fraction"} {"_id": "q_5088", "text": "Perform standard checks on a solver's status."} {"_id": "q_5089", "text": "Add a new objective and variables to ensure a feasible solution.\n\n The optimized objective will be zero for a feasible solution and otherwise\n represent the distance from feasibility (please see [1]_ for more\n information).\n\n Parameters\n ----------\n model : cobra.Model\n The model whose feasibility is to be tested.\n\n References\n ----------\n .. [1] Gomez, Jose A., Kai H\u00f6ffner, and Paul I. Barton.\n \u201cDFBAlab: A Fast and Reliable MATLAB Code for Dynamic Flux Balance\n Analysis.\u201d BMC Bioinformatics 15, no. 1 (December 18, 2014): 409.\n https://doi.org/10.1186/s12859-014-0409-8."} {"_id": "q_5090", "text": "Successively optimize separate targets in a specific order.\n\n For each objective, optimize the model and set the optimal value as a\n constraint. Proceed in the order of the objectives given. Due to the\n specific order this is called lexicographic FBA [1]_. This\n procedure is useful for returning unique solutions for a set of important\n fluxes. 
Typically this is applied to exchange fluxes.\n\n Parameters\n ----------\n model : cobra.Model\n The model to be optimized.\n objectives : list\n A list of reactions (or objectives) in the model for which unique\n fluxes are to be determined.\n objective_direction : str or list, optional\n The desired objective direction for each reaction (if a list) or the\n objective direction to use for all reactions (default maximize).\n\n Returns\n -------\n optimized_fluxes : pandas.Series\n A vector containing the optimized fluxes for each of the given\n reactions in `objectives`.\n\n References\n ----------\n .. [1] Gomez, Jose A., Kai H\u00f6ffner, and Paul I. Barton.\n \u201cDFBAlab: A Fast and Reliable MATLAB Code for Dynamic Flux Balance\n Analysis.\u201d BMC Bioinformatics 15, no. 1 (December 18, 2014): 409.\n https://doi.org/10.1186/s12859-014-0409-8."} {"_id": "q_5091", "text": "Create a new numpy array that resides in shared memory.\n\n Parameters\n ----------\n shape : tuple of ints\n The shape of the new array.\n data : numpy.array\n Data to copy to the new array. Has to have the same shape.\n integer : boolean\n Whether to use an integer array. Defaults to False which means\n float array."} {"_id": "q_5092", "text": "Reproject a point into the feasibility region.\n\n This function is guaranteed to return a new feasible point. However,\n no guarantees in terms of proximity to the original point can be made.\n\n Parameters\n ----------\n p : numpy.array\n The current sample point.\n\n Returns\n -------\n numpy.array\n A new feasible point. If `p` was feasible it will return p."} {"_id": "q_5093", "text": "Find an approximately random point in the flux cone."} {"_id": "q_5094", "text": "Identify redundant rows in a matrix that can be removed."} {"_id": "q_5095", "text": "Get the lower and upper bound distances. 
Negative is bad."} {"_id": "q_5096", "text": "Create a batch generator.\n\n This is useful to generate n batches of m samples each.\n\n Parameters\n ----------\n batch_size : int\n The number of samples contained in each batch (m).\n batch_num : int\n The number of batches in the generator (n).\n fluxes : boolean\n Whether to return fluxes or the internal solver variables. If set\n to False will return a variable for each forward and backward flux\n as well as all additional variables you might have defined in the\n model.\n\n Yields\n ------\n pandas.DataFrame\n A DataFrame with dimensions (batch_size x n_r) containing\n a valid flux sample for a total of n_r reactions (or variables if\n fluxes=False) in each row."} {"_id": "q_5097", "text": "Validate a set of samples for equality and inequality feasibility.\n\n Can be used to check whether the generated samples and warmup points\n are feasible.\n\n Parameters\n ----------\n samples : numpy.matrix\n Must be of dimension (n_samples x n_reactions). Contains the\n samples to be validated. 
Samples must be from fluxes.\n\n Returns\n -------\n numpy.array\n A one-dimensional numpy array of length containing\n a code of 1 to 3 letters denoting the validation result:\n\n - 'v' means feasible in bounds and equality constraints\n - 'l' means a lower bound violation\n - 'u' means an upper bound violation\n - 'e' means an equality constraint violation"} {"_id": "q_5098", "text": "Remove metabolites that are not involved in any reactions and\n returns pruned model\n\n Parameters\n ----------\n cobra_model: class:`~cobra.core.Model.Model` object\n the model to remove unused metabolites from\n\n Returns\n -------\n output_model: class:`~cobra.core.Model.Model` object\n input model with unused metabolites removed\n inactive_metabolites: list of class:`~cobra.core.reaction.Reaction`\n list of metabolites that were removed"} {"_id": "q_5099", "text": "Remove reactions with no assigned metabolites, returns pruned model\n\n Parameters\n ----------\n cobra_model: class:`~cobra.core.Model.Model` object\n the model to remove unused reactions from\n\n Returns\n -------\n output_model: class:`~cobra.core.Model.Model` object\n input model with unused reactions removed\n reactions_to_prune: list of class:`~cobra.core.reaction.Reaction`\n list of reactions that were removed"} {"_id": "q_5100", "text": "Undoes the effects of a call to delete_model_genes in place.\n\n cobra_model: A cobra.Model which will be modified in place"} {"_id": "q_5101", "text": "identify reactions which will be disabled when the genes are knocked out\n\n cobra_model: :class:`~cobra.core.Model.Model`\n\n gene_list: iterable of :class:`~cobra.core.Gene.Gene`\n\n compiled_gene_reaction_rules: dict of {reaction_id: compiled_string}\n If provided, this gives pre-compiled gene_reaction_rule strings.\n The compiled rule strings can be evaluated much faster. 
If a rule\n is not provided, the regular expression evaluation will be used.\n Because not all gene_reaction_rule strings can be evaluated, this\n dict must exclude any rules which can not be used with eval."} {"_id": "q_5102", "text": "Perform gapfilling on a model.\n\n See documentation for the class GapFiller.\n\n Parameters\n ----------\n model : cobra.Model\n The model to perform gap filling on.\n universal : cobra.Model, None\n A universal model with reactions that can be used to complete the\n model. Only gapfill considering demand and exchange reactions if\n left missing.\n lower_bound : float\n The minimally accepted flux for the objective in the filled model.\n penalties : dict, None\n A dictionary with keys being 'universal' (all reactions included in\n the universal model), 'exchange' and 'demand' (all additionally\n added exchange and demand reactions) for the three reaction types.\n Can also have reaction identifiers for reaction specific costs.\n Defaults are 1, 100 and 1 respectively.\n iterations : int\n The number of rounds of gapfilling to perform. For every iteration,\n the penalty for every used reaction increases linearly. This way,\n the algorithm is encouraged to search for alternative solutions\n which may include previously used reactions. 
I.e., with enough\n iterations pathways including 10 steps will eventually be reported\n even if the shortest pathway is a single reaction.\n exchange_reactions : bool\n Consider adding exchange (uptake) reactions for all metabolites\n in the model.\n demand_reactions : bool\n Consider adding demand reactions for all metabolites.\n\n Returns\n -------\n iterable\n list of lists with on set of reactions that completes the model per\n requested iteration.\n\n Examples\n --------\n >>> import cobra.test as ct\n >>> from cobra import Model\n >>> from cobra.flux_analysis import gapfill\n >>> model = ct.create_test_model(\"salmonella\")\n >>> universal = Model('universal')\n >>> universal.add_reactions(model.reactions.GF6PTA.copy())\n >>> model.remove_reactions([model.reactions.GF6PTA])\n >>> gapfill(model, universal)"} {"_id": "q_5103", "text": "Update the coefficients for the indicator variables in the objective.\n\n Done incrementally so that second time the function is called,\n active indicators in the current solutions gets higher cost than the\n unused indicators."} {"_id": "q_5104", "text": "Perform the gapfilling by iteratively solving the model, updating\n the costs and recording the used reactions.\n\n\n Parameters\n ----------\n iterations : int\n The number of rounds of gapfilling to perform. For every\n iteration, the penalty for every used reaction increases\n linearly. This way, the algorithm is encouraged to search for\n alternative solutions which may include previously used\n reactions. I.e., with enough iterations pathways including 10\n steps will eventually be reported even if the shortest pathway\n is a single reaction.\n\n Returns\n -------\n iterable\n A list of lists where each element is a list reactions that were\n used to gapfill the model.\n\n Raises\n ------\n RuntimeError\n If the model fails to be validated (i.e. 
the original model with\n the proposed reactions added, still cannot get the required flux\n through the objective)."} {"_id": "q_5105", "text": "Check whether a reaction is an exchange reaction.\n\n Arguments\n ---------\n reaction : cobra.Reaction\n The reaction to check.\n boundary_type : str\n What boundary type to check for. Must be one of\n \"exchange\", \"demand\", or \"sink\".\n external_compartment : str\n The id for the external compartment.\n\n Returns\n -------\n boolean\n Whether the reaction looks like the requested type. Might be based\n on a heuristic."} {"_id": "q_5106", "text": "Find specific boundary reactions.\n\n Arguments\n ---------\n model : cobra.Model\n A cobra model.\n boundary_type : str\n What boundary type to check for. Must be one of\n \"exchange\", \"demand\", or \"sink\".\n external_compartment : str or None\n The id for the external compartment. If None it will be detected\n automatically.\n\n Returns\n -------\n list of cobra.reaction\n A list of likely boundary reactions of a user defined type."} {"_id": "q_5107", "text": "Sample a single chain for OptGPSampler.\n\n center and n_samples are updated locally and forgotten afterwards."} {"_id": "q_5108", "text": "parse gpr into AST\n\n Parameters\n ----------\n str_expr : string\n string with the gene reaction rule to parse\n\n Returns\n -------\n tuple\n elements ast_tree and gene_ids as a set"} {"_id": "q_5109", "text": "Knockout gene by marking it as non-functional and setting all\n associated reactions bounds to zero.\n\n The change is reverted upon exit if executed within the model as\n context."} {"_id": "q_5110", "text": "r\"\"\"Add constraints and objective representing for MOMA.\n\n This adds variables and constraints for the minimization of metabolic\n adjustment (MOMA) to the model.\n\n Parameters\n ----------\n model : cobra.Model\n The model to add MOMA constraints and objective to.\n solution : cobra.Solution, optional\n A previous solution to use as a reference. 
If no solution is given,\n one will be computed using pFBA.\n linear : bool, optional\n Whether to use the linear MOMA formulation or not (default True).\n\n Notes\n -----\n In the original MOMA [1]_ specification one looks for the flux distribution\n of the deletion (v^d) closest to the fluxes without the deletion (v).\n In math this means:\n\n minimize \\sum_i (v^d_i - v_i)^2\n s.t. Sv^d = 0\n lb_i <= v^d_i <= ub_i\n\n Here, we use a variable transformation v^t := v^d_i - v_i. Substituting\n and using the fact that Sv = 0 gives:\n\n minimize \\sum_i (v^t_i)^2\n s.t. Sv^d = 0\n v^t = v^d_i - v_i\n lb_i <= v^d_i <= ub_i\n\n So basically we just re-center the flux space at the old solution and then\n find the flux distribution closest to the new zero (center). This is the\n same strategy as used in cameo.\n\n In the case of linear MOMA [2]_, we instead minimize \\sum_i abs(v^t_i). The\n linear MOMA is typically significantly faster. Also quadratic MOMA tends\n to give flux distributions in which all fluxes deviate from the reference\n fluxes a little bit whereas linear MOMA tends to give flux distributions\n where the majority of fluxes are the same reference with few fluxes\n deviating a lot (typical effect of L2 norm vs L1 norm).\n\n The former objective function is saved in the optlang solver interface as\n ``\"moma_old_objective\"`` and this can be used to immediately extract the\n value of the former objective after MOMA optimization.\n\n See Also\n --------\n pfba : parsimonious FBA\n\n References\n ----------\n .. [1] Segr\u00e8, Daniel, Dennis Vitkup, and George M. Church. \u201cAnalysis of\n Optimality in Natural and Perturbed Metabolic Networks.\u201d\n Proceedings of the National Academy of Sciences 99, no. 23\n (November 12, 2002): 15112. https://doi.org/10.1073/pnas.232349399.\n .. [2] Becker, Scott A, Adam M Feist, Monica L Mo, Gregory Hannum,\n Bernhard \u00d8 Palsson, and Markus J Herrgard. 
\u201cQuantitative\n Prediction of Cellular Metabolism with Constraint-Based Models:\n The COBRA Toolbox.\u201d Nature Protocols 2 (March 29, 2007): 727."} {"_id": "q_5111", "text": "convert possible types to str, float, and bool"} {"_id": "q_5112", "text": "update new_dict with optional attributes from cobra_object"} {"_id": "q_5113", "text": "Convert model to a dict.\n\n Parameters\n ----------\n model : cobra.Model\n The model to reformulate as a dict.\n sort : bool, optional\n Whether to sort the metabolites, reactions, and genes or maintain the\n order defined in the model.\n\n Returns\n -------\n OrderedDict\n A dictionary with elements, 'genes', 'compartments', 'id',\n 'metabolites', 'notes' and 'reactions'; where 'metabolites', 'genes'\n and 'reactions' are in turn lists with dictionaries holding all\n attributes to form the corresponding object.\n\n See Also\n --------\n cobra.io.model_from_dict"} {"_id": "q_5114", "text": "Build a model from a dict.\n\n Models stored in json are first formulated as a dict that can be read to\n cobra model using this function.\n\n Parameters\n ----------\n obj : dict\n A dictionary with elements, 'genes', 'compartments', 'id',\n 'metabolites', 'notes' and 'reactions'; where 'metabolites', 'genes'\n and 'reactions' are in turn lists with dictionaries holding all\n attributes to form the corresponding object.\n\n Returns\n -------\n cobra.core.Model\n The generated model.\n\n See Also\n --------\n cobra.io.model_to_dict"} {"_id": "q_5115", "text": "extract the compartment from the id string"} {"_id": "q_5116", "text": "translate an array x into a MATLAB cell array"} {"_id": "q_5117", "text": "Load a cobra model stored as a .mat file\n\n Parameters\n ----------\n infile_path: str\n path to the file to read\n variable_name: str, optional\n The variable name of the model in the .mat file. 
If this is not\n specified, then the first MATLAB variable which looks like a COBRA\n model will be used\n inf: value\n The value to use for infinite bounds. Some solvers do not handle\n infinite values so for using those, set this to a high numeric value.\n\n Returns\n -------\n cobra.core.Model.Model:\n The resulting cobra model"} {"_id": "q_5118", "text": "Save the cobra model as a .mat file.\n\n This .mat file can be used directly in the MATLAB version of COBRA.\n\n Parameters\n ----------\n model : cobra.core.Model.Model object\n The model to save\n file_name : str or file-like object\n The file to save to\n varname : string\n The name of the variable within the workspace"} {"_id": "q_5119", "text": "Search for a context manager"} {"_id": "q_5120", "text": "A decorator to simplify the context management of simple object\n attributes. Gets the value of the attribute prior to setting it, and stores\n a function to set the value to the old value in the HistoryManager."} {"_id": "q_5121", "text": "Get or set the constraints on the model exchanges.\n\n `model.medium` returns a dictionary of the bounds for each of the\n boundary reactions, in the form of `{rxn_id: bound}`, where `bound`\n specifies the absolute value of the bound in direction of metabolite\n creation (i.e., lower_bound for `met <--`, upper_bound for `met -->`)\n\n Parameters\n ----------\n medium: dictionary-like\n The medium to initialize. 
medium should be a dictionary defining\n `{rxn_id: bound}` pairs."} {"_id": "q_5122", "text": "Add a boundary reaction for a given metabolite.\n\n There are three different types of pre-defined boundary reactions:\n exchange, demand, and sink reactions.\n An exchange reaction is a reversible, unbalanced reaction that adds\n to or removes an extracellular metabolite from the extracellular\n compartment.\n A demand reaction is an irreversible reaction that consumes an\n intracellular metabolite.\n A sink is similar to an exchange but specifically for intracellular\n metabolites.\n\n If you set the reaction `type` to something else, you must specify the\n desired identifier of the created reaction along with its upper and\n lower bound. The name will be given by the metabolite name and the\n given `type`.\n\n Parameters\n ----------\n metabolite : cobra.Metabolite\n Any given metabolite. The compartment is not checked but you are\n encouraged to stick to the definition of exchanges and sinks.\n type : str, {\"exchange\", \"demand\", \"sink\"}\n Using one of the pre-defined reaction types is easiest. If you\n want to create your own kind of boundary reaction choose\n any other string, e.g., 'my-boundary'.\n reaction_id : str, optional\n The ID of the resulting reaction. This takes precedence over the\n auto-generated identifiers but beware that it might make boundary\n reactions harder to identify afterwards when using `model.boundary`\n or specifically `model.exchanges` etc.\n lb : float, optional\n The lower bound of the resulting reaction.\n ub : float, optional\n The upper bound of the resulting reaction.\n sbo_term : str, optional\n A correct SBO term is set for the available types. 
If a custom\n type is chosen, a suitable SBO term should also be set.\n\n Returns\n -------\n cobra.Reaction\n The created boundary reaction.\n\n Examples\n --------\n >>> import cobra.test\n >>> model = cobra.test.create_test_model(\"textbook\")\n >>> demand = model.add_boundary(model.metabolites.atp_c, type=\"demand\")\n >>> demand.id\n 'DM_atp_c'\n >>> demand.name\n 'ATP demand'\n >>> demand.bounds\n (0, 1000.0)\n >>> demand.build_reaction_string()\n 'atp_c --> '"} {"_id": "q_5123", "text": "Add reactions to the model.\n\n Reactions with identifiers identical to a reaction already in the\n model are ignored.\n\n The change is reverted upon exit when using the model as a context.\n\n Parameters\n ----------\n reaction_list : list\n A list of `cobra.Reaction` objects"} {"_id": "q_5124", "text": "Remove reactions from the model.\n\n The change is reverted upon exit when using the model as a context.\n\n Parameters\n ----------\n reactions : list\n A list with reactions (`cobra.Reaction`), or their id's, to remove\n\n remove_orphans : bool\n Remove orphaned genes and metabolites from the model as well"} {"_id": "q_5125", "text": "Add groups to the model.\n\n Groups with identifiers identical to a group already in the model are\n ignored.\n\n If any group contains members that are not in the model, these members\n are added to the model as well. Only metabolites, reactions, and genes\n can have groups.\n\n Parameters\n ----------\n group_list : list\n A list of `cobra.Group` objects to add to the model."} {"_id": "q_5126", "text": "Populate attached solver with constraints and variables that\n model the provided reactions."} {"_id": "q_5127", "text": "Optimize model without creating a solution object.\n\n Creating a full solution object implies fetching shadow prices and\n flux values for all reactions and metabolites from the solver\n object. 
This necessarily takes some time and in cases where only one\n or two values are of interest, it is recommended to instead use this\n function, which does not create a solution object and returns only the\n value of the objective. Note however that the `optimize()` function\n uses efficient means to fetch values so if you need fluxes/shadow\n prices for more than, say, 4 reactions/metabolites, then the total\n speed increase of `slim_optimize` versus `optimize` is expected to\n be small or even negative depending on how you fetch the values\n after optimization.\n\n Parameters\n ----------\n error_value : float, None\n The value to return if optimization failed due to e.g.\n infeasibility. If None, raise `OptimizationError` if the\n optimization fails.\n message : string\n Error message to use if the model optimization did not succeed.\n\n Returns\n -------\n float\n The objective value."} {"_id": "q_5128", "text": "Optimize the model using flux balance analysis.\n\n Parameters\n ----------\n objective_sense : {None, 'maximize', 'minimize'}, optional\n Whether fluxes should be maximized or minimized. In case of None,\n the previous direction is used.\n raise_error : bool\n If true, raise an OptimizationError if solver status is not\n optimal.\n\n Notes\n -----\n Only the most commonly used parameters are presented here. 
Additional\n parameters for cobra.solvers may be available and specified with the\n appropriate keyword argument."} {"_id": "q_5129", "text": "Update all indexes and pointers in a model\n\n Parameters\n ----------\n rebuild_index : bool\n rebuild the indices kept in reactions, metabolites and genes\n rebuild_relationships : bool\n reset all associations between genes, metabolites, model and\n then re-add them."} {"_id": "q_5130", "text": "Merge two models to create a model with the reactions from both\n models.\n\n Custom constraints and variables from the right model are also copied\n to the left model; note, however, that constraints and variables are\n assumed to be the same if they have the same name.\n\n right : cobra.Model\n The model to add reactions from\n prefix_existing : string\n Prefix the reaction identifiers in the right model that already exist\n in the left model with this string.\n inplace : bool\n Add reactions from right directly to left model object.\n Otherwise, create a new model leaving the left model untouched.\n When done within the model as context, changes to the models are\n reverted upon exit.\n objective : string\n One of 'left', 'right' or 'sum' for setting the objective of the\n resulting model to that of the corresponding model or the sum of\n both."} {"_id": "q_5131", "text": "makes all ids SBML compliant"} {"_id": "q_5132", "text": "renames genes in a model from the rename_dict"} {"_id": "q_5133", "text": "Return the model as a JSON document.\n\n ``kwargs`` are passed on to ``json.dumps``.\n\n Parameters\n ----------\n model : cobra.Model\n The cobra model to represent.\n sort : bool, optional\n Whether to sort the metabolites, reactions, and genes or maintain the\n order defined in the model.\n\n Returns\n -------\n str\n String representation of the cobra model as a JSON document.\n\n See Also\n --------\n save_json_model : Write directly to a file.\n json.dumps : Base function."} {"_id": "q_5134", "text": "Load a cobra model from a file in 
JSON format.\n\n Parameters\n ----------\n filename : str or file-like\n File path or descriptor that contains the JSON document describing the\n cobra model.\n\n Returns\n -------\n cobra.Model\n The cobra model as represented in the JSON document.\n\n See Also\n --------\n from_json : Load from a string."} {"_id": "q_5135", "text": "Add a mixed-integer version of a minimal medium to the model.\n\n Changes the optimization objective to finding the medium with the least\n components::\n\n minimize size(R) where R part of import_reactions\n\n Arguments\n ---------\n model : cobra.model\n The model to modify."} {"_id": "q_5136", "text": "Convert a solution to medium.\n\n Arguments\n ---------\n exchanges : list of cobra.reaction\n The exchange reactions to consider.\n tolerance : positive double\n The absolute tolerance for fluxes. Fluxes with an absolute value\n smaller than this number will be ignored.\n exports : bool\n Whether to return export fluxes as well.\n\n Returns\n -------\n pandas.Series\n The \"medium\", meaning all active import fluxes in the solution."} {"_id": "q_5137", "text": "Find the minimal growth medium for the model.\n\n Finds the minimal growth medium for the model which allows for\n model as well as individual growth. Here, a minimal medium can either\n be the medium requiring the smallest total import flux or the medium\n requiring the least components (ergo ingredients), which will be much\n slower due to being a mixed integer problem (MIP).\n\n Arguments\n ---------\n model : cobra.model\n The model to modify.\n min_objective_value : positive float or array-like object\n The minimum growth rate (objective) that has to be achieved.\n exports : boolean\n Whether to include export fluxes in the returned medium. Defaults to\n False which will only return import fluxes.\n minimize_components : boolean or positive int\n Whether to minimize the number of components instead of the total\n import flux. 
Might be more intuitive if set to True but may also be\n slow to calculate for large communities. If set to a number `n`, will\n return up to `n` alternative solutions, all with the same number of\n components.\n open_exchanges : boolean or number\n Whether to ignore currently set bounds and make all exchange reactions\n in the model possible. If set to a number, all exchange reactions will\n be opened with (-number, number) as bounds.\n\n Returns\n -------\n pandas.Series, pandas.DataFrame or None\n A series giving the import flux for each required import\n reaction and (optionally) the associated export fluxes. All exchange\n fluxes are oriented into the import reaction, e.g., positive fluxes\n denote imports and negative fluxes exports. If `minimize_components`\n is a number larger than 1, this may return a DataFrame where each\n column is a minimal medium. Returns None if the minimization is\n infeasible (for instance if min_growth > maximum growth rate).\n\n Notes\n -----\n Due to numerical issues the `minimize_components` option will usually only\n minimize the number of \"large\" import fluxes. Specifically, the detection\n limit is given by ``integrality_tolerance * max_bound`` where ``max_bound``\n is the largest bound on an import reaction. Thus, if you are interested\n in small import fluxes as well you may have to adjust the integrality\n tolerance at first with\n `model.solver.configuration.tolerances.integrality = 1e-7` for instance.\n However, this will be *very* slow for large models especially with GLPK."} {"_id": "q_5138", "text": "Initialize a global model object for multiprocessing."} {"_id": "q_5139", "text": "Determine the minimum and maximum possible flux value for each reaction.\n\n Parameters\n ----------\n model : cobra.Model\n The model for which to run the analysis. It will *not* be modified.\n reaction_list : list of cobra.Reaction or str, optional\n The reactions for which to obtain min/max fluxes. 
If None, will use\n all reactions in the model (default).\n loopless : boolean, optional\n Whether to return only loopless solutions. This is significantly\n slower. Please also refer to the notes.\n fraction_of_optimum : float, optional\n Must be <= 1.0. Requires that the objective value is at least the\n fraction times maximum objective value. A value of 0.85 for instance\n means that the objective has to be at least 85% of its\n maximum.\n pfba_factor : float, optional\n Add an additional constraint to the model that requires that the total\n sum of absolute fluxes must not be larger than this value times the\n smallest possible sum of absolute fluxes, i.e., by setting the value\n to 1.1 the total sum of absolute fluxes must not be more than\n 10% larger than the pFBA solution. Since the pFBA solution is the\n one that optimally minimizes the total flux sum, the ``pfba_factor``\n should, if set, be larger than one. Setting this value may lead to\n more realistic predictions of the effective flux bounds.\n processes : int, optional\n The number of parallel processes to run. If not explicitly passed,\n will be set from the global configuration singleton.\n\n Returns\n -------\n pandas.DataFrame\n A data frame with reaction identifiers as the index and two columns:\n - maximum: indicating the highest possible flux\n - minimum: indicating the lowest possible flux\n\n Notes\n -----\n This implements the fast version as described in [1]_. Please note that\n the flux distribution containing all minimal/maximal fluxes does not have\n to be a feasible solution for the model. Fluxes are minimized/maximized\n individually and a single minimal flux might require all others to be\n suboptimal.\n\n Using the loopless option will lead to a significant increase in\n computation time (about a factor of 100 for large models). However, the\n algorithm used here (see [2]_) is still more than 1000x faster than the\n \"naive\" version using ``add_loopless(model)``. 
Also note that if you have\n included constraints that force a loop (for instance by setting all fluxes\n in a loop to be non-zero) this loop will be included in the solution.\n\n References\n ----------\n .. [1] Computationally efficient flux variability analysis.\n Gudmundsson S, Thiele I.\n BMC Bioinformatics. 2010 Sep 29;11:489.\n doi: 10.1186/1471-2105-11-489, PMID: 20920235\n\n .. [2] CycleFreeFlux: efficient removal of thermodynamically infeasible\n loops from flux distributions.\n Desouki AA, Jarre F, Gelius-Dietrich G, Lercher MJ.\n Bioinformatics. 2015 Jul 1;31(13):2159-65.\n doi: 10.1093/bioinformatics/btv096."} {"_id": "q_5140", "text": "Find reactions that cannot carry any flux.\n\n The question whether or not a reaction is blocked is highly dependent\n on the current exchange reaction settings for a COBRA model. Hence an\n argument is provided to open all exchange reactions.\n\n Notes\n -----\n Sink and demand reactions are left untouched. Please modify them manually.\n\n Parameters\n ----------\n model : cobra.Model\n The model to analyze.\n reaction_list : list, optional\n List of reactions to consider, the default includes all model\n reactions.\n zero_cutoff : float, optional\n Flux value which is considered to effectively be zero\n (default model.tolerance).\n open_exchanges : bool, optional\n Whether or not to open all exchange reactions to very high flux ranges.\n processes : int, optional\n The number of parallel processes to run. Can speed up the computations\n if the number of reactions is large. 
If not explicitly\n passed, it will be set from the global configuration singleton.\n\n Returns\n -------\n list\n List with the identifiers of blocked reactions."} {"_id": "q_5141", "text": "Return a set of essential reactions.\n\n A reaction is considered essential if restricting its flux to zero\n causes the objective, e.g., the growth rate, to also be zero, below the\n threshold, or infeasible.\n\n\n Parameters\n ----------\n model : cobra.Model\n The model to find the essential reactions for.\n threshold : float, optional\n Minimal objective flux to be considered viable. By default this is\n 1% of the maximal objective.\n processes : int, optional\n The number of parallel processes to run. Can speed up the computations\n if the number of knockouts to perform is large. If not explicitly\n passed, it will be set from the global configuration singleton.\n\n Returns\n -------\n set\n Set of essential reactions"} {"_id": "q_5142", "text": "adds SBO terms for demands and exchanges\n\n This works for models which follow the standard convention for\n constructing and naming these reactions.\n\n The reaction should only contain the single metabolite being exchanged,\n and the id should be EX_metid or DM_metid"} {"_id": "q_5143", "text": "Knock out each gene pair from the combination of two given lists.\n\n We say 'pair' here but the order does not matter.\n\n Parameters\n ----------\n model : cobra.Model\n The metabolic model to perform deletions in.\n gene_list1 : iterable, optional\n First iterable of ``cobra.Gene``s to be deleted. If not passed,\n all the genes from the model are used.\n gene_list2 : iterable, optional\n Second iterable of ``cobra.Gene``s to be deleted. 
If not passed,\n all the genes from the model are used.\n method: {\"fba\", \"moma\", \"linear moma\", \"room\", \"linear room\"}, optional\n Method used to predict the growth rate.\n solution : cobra.Solution, optional\n A previous solution to use as a reference for (linear) MOMA or ROOM.\n processes : int, optional\n The number of parallel processes to run. Can speed up the computations\n if the number of knockouts to perform is large. If not passed,\n will be set to the number of CPUs found.\n kwargs :\n Keyword arguments are passed on to underlying simulation functions\n such as ``add_room``.\n\n Returns\n -------\n pandas.DataFrame\n A representation of all combinations of gene deletions. The\n columns are 'growth' and 'status', where\n\n index : frozenset([str])\n The gene identifiers that were knocked out.\n growth : float\n The growth rate of the adjusted model.\n status : str\n The solution's status."} {"_id": "q_5144", "text": "Generate the id of reverse_variable from the reaction's id."} {"_id": "q_5145", "text": "The flux value in the most recent solution.\n\n Flux is the primal value of the corresponding variable in the model.\n\n Warnings\n --------\n * Accessing reaction fluxes through a `Solution` object is the safer,\n preferred, and only guaranteed to be correct way. You can see how to\n do so easily in the examples.\n * Reaction flux is retrieved from the currently defined\n `self._model.solver`. 
The solver status is checked but there are no\n guarantees that the current solver state is the one you are looking\n for.\n * If you modify the underlying model after an optimization, you will\n retrieve the old optimization values.\n\n Raises\n ------\n RuntimeError\n If the underlying model was never optimized beforehand or the\n reaction is not part of a model.\n OptimizationError\n If the solver status is anything other than 'optimal'.\n AssertionError\n If the flux value is not within the bounds.\n\n Examples\n --------\n >>> import cobra.test\n >>> model = cobra.test.create_test_model(\"textbook\")\n >>> solution = model.optimize()\n >>> model.reactions.PFK.flux\n 7.477381962160283\n >>> solution.fluxes.PFK\n 7.4773819621602833"} {"_id": "q_5146", "text": "Display gene_reaction_rule with names instead.\n\n Do NOT use this string for computation. It is intended to give a\n representation of the rule using more familiar gene names instead of\n the often cryptic ids."} {"_id": "q_5147", "text": "All required enzymes for reaction are functional.\n\n Returns\n -------\n bool\n True if the gene-protein-reaction (GPR) rule is fulfilled for\n this reaction, or if the reaction is not associated with a model,\n otherwise False."} {"_id": "q_5148", "text": "Make sure all metabolites and genes that are associated with\n this reaction are aware of it."} {"_id": "q_5149", "text": "Copy a reaction\n\n The referenced metabolites and genes are also copied."} {"_id": "q_5150", "text": "Add metabolites and stoichiometric coefficients to the reaction.\n If the final coefficient for a metabolite is 0 then it is removed\n from the reaction.\n\n The change is reverted upon exit when using the model as a context.\n\n Parameters\n ----------\n metabolites_to_add : dict\n Dictionary with metabolite objects or metabolite identifiers as\n keys and coefficients as values. 
If keys are strings (name of a\n metabolite) the reaction must already be part of a model and a\n metabolite with the given name must exist in the model.\n\n combine : bool\n Describes the behavior when a metabolite already exists in the reaction.\n True causes the coefficients to be added.\n False causes the coefficient to be replaced.\n\n reversibly : bool\n Whether to add the change to the context to make the change\n reversible or not (primarily intended for internal use)."} {"_id": "q_5151", "text": "Generate a human readable reaction string"} {"_id": "q_5152", "text": "Compute mass and charge balance for the reaction\n\n returns a dict of {element: amount} for unbalanced elements.\n \"charge\" is treated as an element in this dict\n This should be empty for balanced reactions."} {"_id": "q_5153", "text": "Dissociates a cobra.Gene object from a cobra.Reaction.\n\n Parameters\n ----------\n cobra_gene : cobra.core.Gene.Gene"} {"_id": "q_5154", "text": "Builds reaction from reaction equation reaction_str using parser\n\n Takes a string and using the specifications supplied in the optional\n arguments infers a set of metabolites, metabolite compartments and\n stoichiometries for the reaction. 
It also infers the reversibility\n of the reaction from the reaction arrow.\n\n Changes to the associated model are reverted upon exit when using\n the model as a context.\n\n Parameters\n ----------\n reaction_str : string\n a string containing a reaction formula (equation)\n verbose: bool\n setting verbosity of function\n fwd_arrow : re.compile\n for forward irreversible reaction arrows\n rev_arrow : re.compile\n for backward irreversible reaction arrows\n reversible_arrow : re.compile\n for reversible reaction arrows\n term_split : string\n dividing individual metabolite entries"} {"_id": "q_5155", "text": "Reads SBML model from given filename.\n\n If the given filename ends with the suffix ''.gz'' (for example,\n ''myfile.xml.gz''), the file is assumed to be compressed in gzip\n format and will be automatically decompressed upon reading. Similarly,\n if the given filename ends with ''.zip'' or ''.bz2'', the file is\n assumed to be compressed in zip or bzip2 format (respectively). Files\n whose names lack these suffixes will be read uncompressed. Note that\n if the file is in zip format but the archive contains more than one\n file, only the first file in the archive will be read and the rest\n ignored.\n\n To read a gzip/zip file, libSBML needs to be configured and linked\n with the zlib library at compile time. It also needs to be linked\n with the bzip2 library to read files in bzip2 format. (Both of these\n are the default configurations for libSBML.)\n\n This function supports SBML with FBC-v1 and FBC-v2. 
FBC-v1 models\n are converted to FBC-v2 models before reading.\n\n The parser tries to fall back to information in notes dictionaries\n if information is not available in the FBC packages, e.g.,\n CHARGE, FORMULA on species, or GENE_ASSOCIATION, SUBSYSTEM on reactions.\n\n Parameters\n ----------\n filename : path to SBML file, or SBML string, or SBML file handle\n SBML which is read into cobra model\n number: data type of stoichiometry: {float, int}\n In which data type the stoichiometry should be parsed.\n f_replace : dict of replacement functions for id replacement\n Dictionary of replacement functions for gene, specie, and reaction.\n By default the following id changes are performed on import:\n clip G_ from genes, clip M_ from species, clip R_ from reactions\n If no replacements should be performed, set f_replace={} or None.\n set_missing_bounds : boolean flag to set missing bounds\n Missing bounds are set to default bounds in configuration.\n\n Returns\n -------\n cobra.core.Model\n\n Notes\n -----\n Provided file handles cannot be opened in binary mode, i.e., use\n with open(path, \"r\") as f:\n read_sbml_model(f)\n File handles to compressed files are not supported yet."} {"_id": "q_5156", "text": "Get SBMLDocument from given filename.\n\n Parameters\n ----------\n filename : path to SBML, or SBML string, or filehandle\n\n Returns\n -------\n libsbml.SBMLDocument"} {"_id": "q_5157", "text": "Writes cobra model to filename.\n\n The created model is SBML level 3 version 1 (L3V1) with\n fbc package v2 (fbc-v2).\n\n If the given filename ends with the suffix \".gz\" (for example,\n \"myfile.xml.gz\"), libSBML assumes the caller wants the file to be\n written compressed in gzip format. Similarly, if the given filename\n ends with \".zip\" or \".bz2\", libSBML assumes the caller wants the\n file to be compressed in zip or bzip2 format (respectively). Files\n whose names lack these suffixes will be written uncompressed. 
Special\n considerations for the zip format: If the given filename ends with\n \".zip\", the file placed in the zip archive will have the suffix\n \".xml\" or \".sbml\". For example, the file in the zip archive will\n be named \"test.xml\" if the given filename is \"test.xml.zip\" or\n \"test.zip\". Similarly, the filename in the archive will be\n \"test.sbml\" if the given filename is \"test.sbml.zip\".\n\n Parameters\n ----------\n cobra_model : cobra.core.Model\n Model instance which is written to SBML\n filename : string\n path to which the model is written\n use_fbc_package : boolean {True, False}\n should the fbc package be used\n f_replace: dict of replacement functions for id replacement"} {"_id": "q_5158", "text": "Creates bound in model for given reaction.\n\n Adds the parameters for the bounds to the SBML model.\n\n Parameters\n ----------\n model : libsbml.Model\n SBML model instance\n reaction : cobra.core.Reaction\n Cobra reaction instance from which the bounds are read.\n bound_type : {LOWER_BOUND, UPPER_BOUND}\n Type of bound\n f_replace : dict of id replacement functions\n units : flux units\n\n Returns\n -------\n Id of bound parameter."} {"_id": "q_5159", "text": "Create parameter in SBML model."} {"_id": "q_5160", "text": "Checks the libsbml return value and logs error messages.\n\n If 'value' is None, logs an error message constructed using\n 'message' and then exits with status code 1. If 'value' is an integer,\n it assumes it is a libSBML return status code. 
If the code value is\n LIBSBML_OPERATION_SUCCESS, returns without further action; if it is not,\n prints an error message constructed using 'message' along with text from\n libSBML explaining the meaning of the code, and exits with status code 1."} {"_id": "q_5161", "text": "Creates dictionary of COBRA notes.\n\n Parameters\n ----------\n sbase : libsbml.SBase\n\n Returns\n -------\n dict of notes"} {"_id": "q_5162", "text": "Set SBase notes based on dictionary.\n\n Parameters\n ----------\n sbase : libsbml.SBase\n SBML object to set notes on\n notes : notes object\n notes information from cobra object"} {"_id": "q_5163", "text": "Parses cobra annotations from a given SBase object.\n\n Annotations are dictionaries with the providers as keys.\n\n Parameters\n ----------\n sbase : libsbml.SBase\n SBase from which the SBML annotations are read\n\n Returns\n -------\n dict (annotation dictionary)\n\n FIXME: annotation format must be updated (this is a big collection of\n fixes) - see: https://github.com/opencobra/cobrapy/issues/684)"} {"_id": "q_5164", "text": "Set SBase annotations based on cobra annotations.\n\n Parameters\n ----------\n sbase : libsbml.SBase\n SBML object to annotate\n annotation : cobra annotation structure\n cobra object with annotation information\n\n FIXME: annotation format must be updated\n (https://github.com/opencobra/cobrapy/issues/684)"} {"_id": "q_5165", "text": "String representation of SBMLError.\n\n Parameters\n ----------\n error : libsbml.SBMLError\n k : index of error\n\n Returns\n -------\n string representation of error"} {"_id": "q_5166", "text": "Calculate the objective value conditioned on all combinations of\n fluxes for a set of chosen reactions\n\n The production envelope can be used to analyze a model's ability to\n produce a given compound conditional on the fluxes for another set of\n reactions, such as the uptake rates. 
The model is alternately optimized\n with respect to minimizing and maximizing the objective and the\n obtained fluxes are recorded. Ranges to compute production are set to the\n effective bounds, i.e., the minimum / maximum fluxes that can be obtained\n given current reaction bounds.\n\n Parameters\n ----------\n model : cobra.Model\n The model to compute the production envelope for.\n reactions : list or string\n A list of reactions, reaction identifiers or a single reaction.\n objective : string, dict, model.solver.interface.Objective, optional\n The objective (reaction) to use for the production envelope. Use the\n model's current objective if left missing.\n carbon_sources : list or string, optional\n One or more reactions or reaction identifiers that are the source of\n carbon for computing carbon yield (mol carbon in output over mol carbon\n in input) and mass yield (gram product over gram input). Only objectives\n with a carbon-containing input and output metabolite are supported.\n Will identify active carbon sources in the medium if none are specified.\n points : int, optional\n The number of points to calculate production for.\n threshold : float, optional\n A cut-off under which flux values will be considered to be zero\n (default model.tolerance).\n\n Returns\n -------\n pandas.DataFrame\n A data frame with one row per evaluated point and\n\n - reaction id : one column per input reaction indicating the flux at\n each given point,\n - carbon_source: identifiers of carbon exchange reactions\n\n A column for the maximum and minimum each for the following types:\n\n - flux: the objective flux\n - carbon_yield: if carbon source is defined and the product is a\n single metabolite (mol carbon product per mol carbon feeding source)\n - mass_yield: if carbon source is defined and the product is a\n single metabolite (gram product per 1 g of feeding source)\n\n Examples\n --------\n >>> import cobra.test\n >>> from cobra.flux_analysis import production_envelope\n >>> 
model = cobra.test.create_test_model(\"textbook\")\n >>> production_envelope(model, [\"EX_glc__D_e\", \"EX_o2_e\"])"} {"_id": "q_5167", "text": "Compute total output per input unit.\n\n Units are typically mol carbon atoms or gram of source and product.\n\n Parameters\n ----------\n input_fluxes : list\n A list of input reaction fluxes in the same order as the\n ``input_components``.\n input_elements : list\n A list of reaction components which are in turn list of numbers.\n output_flux : float\n The output flux value.\n output_elements : list\n A list of stoichiometrically weighted output reaction components.\n\n Returns\n -------\n float\n The ratio between output (mol carbon atoms or grams of product) and\n input (mol carbon atoms or grams of source compounds)."} {"_id": "q_5168", "text": "Split metabolites into the atoms times their stoichiometric coefficients.\n\n Parameters\n ----------\n reaction : Reaction\n The metabolic reaction whose components are desired.\n\n Returns\n -------\n list\n Each of the reaction's metabolites' desired carbon elements (if any)\n times that metabolite's stoichiometric coefficient."} {"_id": "q_5169", "text": "Return the metabolite weight times its stoichiometric coefficient."} {"_id": "q_5170", "text": "Find all active carbon source reactions.\n\n Parameters\n ----------\n model : Model\n A genome-scale metabolic model.\n\n Returns\n -------\n list\n The medium reactions with carbon input flux."} {"_id": "q_5171", "text": "Assesses production capacity.\n\n Assesses the capacity of the model to produce the precursors for the\n reaction and absorb the production of the reaction while the reaction is\n operating at, or above, the specified cutoff.\n\n Parameters\n ----------\n model : cobra.Model\n The cobra model to assess production capacity for\n\n reaction : reaction identifier or cobra.Reaction\n The reaction to assess\n\n flux_coefficient_cutoff : float\n The minimum flux that reaction must carry to be considered 
active.\n\n solver : basestring\n Solver name. If None, the default solver will be used.\n\n Returns\n -------\n bool or dict\n True if the model can produce the precursors and absorb the products\n for the reaction operating at, or above, flux_coefficient_cutoff.\n Otherwise, a dictionary of {'precursor': Status, 'product': Status}.\n Where Status is the result from assess_precursors and\n assess_products, respectively."} {"_id": "q_5172", "text": "Assesses the ability of the model to provide sufficient precursors for\n a reaction operating at, or beyond, the specified cutoff.\n\n Deprecated: use assess_component instead\n\n Parameters\n ----------\n model : cobra.Model\n The cobra model to assess production capacity for\n\n reaction : reaction identifier or cobra.Reaction\n The reaction to assess\n\n flux_coefficient_cutoff : float\n The minimum flux that reaction must carry to be considered active.\n\n solver : basestring\n Solver name. If None, the default solver will be used.\n\n Returns\n -------\n bool or dict\n True if the precursors can be simultaneously produced at the\n specified cutoff. False, if the model has the capacity to produce\n each individual precursor at the specified threshold but not all\n precursors at the required level simultaneously. Otherwise a\n dictionary of the required and the produced fluxes for each reactant\n that is not produced in sufficient quantities."} {"_id": "q_5173", "text": "Modify a model so all feasible flux distributions are loopless.\n\n In most cases you probably want to use the much faster `loopless_solution`.\n May be used in cases where you want to add complex constraints and\n objectives (for instance quadratic objectives) to the model afterwards\n or use an approximation of Gibbs free energy directions in your model.\n Adds variables and constraints to a model which will disallow flux\n distributions with loops. 
The used formulation is described in [1]_.\n This function *will* modify your model.\n\n Parameters\n ----------\n model : cobra.Model\n The model to which to add the constraints.\n zero_cutoff : positive float, optional\n Cutoff used for null space. Coefficients with an absolute value smaller\n than `zero_cutoff` are considered to be zero (default model.tolerance).\n\n Returns\n -------\n Nothing\n\n References\n ----------\n .. [1] Elimination of thermodynamically infeasible loops in steady-state\n metabolic models. Schellenberger J, Lewis NE, Palsson BO. Biophys J.\n 2011 Feb 2;100(3):544-53. doi: 10.1016/j.bpj.2010.12.3707. Erratum\n in: Biophys J. 2011 Mar 2;100(5):1381."} {"_id": "q_5174", "text": "Add constraints for CycleFreeFlux."} {"_id": "q_5175", "text": "Convert an existing solution to a loopless one.\n\n Removes as many loops as possible (see Notes).\n Uses the method from CycleFreeFlux [1]_ and is much faster than\n `add_loopless` and should therefore be the preferred option to get loopless\n flux distributions.\n\n Parameters\n ----------\n model : cobra.Model\n The model to which to add the constraints.\n fluxes : dict\n A dictionary {rxn_id: flux} that assigns a flux to each reaction. 
If\n not None will use the provided flux values to obtain a close loopless\n solution.\n\n Returns\n -------\n cobra.Solution\n A solution object containing the fluxes with the least amount of\n loops possible or None if the optimization failed (usually happening\n if the flux distribution in `fluxes` is infeasible).\n\n Notes\n -----\n The returned flux solution has the following properties:\n\n - it contains the minimal number of loops possible and no loops at all if\n all flux bounds include zero\n - it has an objective value close to the original one and the same\n objective value if the objective expression can not form a cycle\n (which is usually true since it consumes metabolites)\n - it has the same exact exchange fluxes as the previous solution\n - all fluxes have the same sign (flow in the same direction) as the\n previous solution\n\n References\n ----------\n .. [1] CycleFreeFlux: efficient removal of thermodynamically infeasible\n loops from flux distributions. Desouki AA, Jarre F, Gelius-Dietrich\n G, Lercher MJ. Bioinformatics. 2015 Jul 1;31(13):2159-65. doi:\n 10.1093/bioinformatics/btv096."} {"_id": "q_5176", "text": "Plugin to get a loopless FVA solution from single FVA iteration.\n\n Assumes the following about `model` and `reaction`:\n 1. the model objective is set to be `reaction`\n 2. the model has been optimized and contains the minimum/maximum flux for\n `reaction`\n 3. the model contains an auxiliary variable called \"fva_old_objective\"\n denoting the previous objective\n\n Parameters\n ----------\n model : cobra.Model\n The model to be used.\n reaction : cobra.Reaction\n The reaction currently minimized/maximized.\n solution : boolean, optional\n Whether to return the entire solution or only the minimum/maximum for\n `reaction`.\n zero_cutoff : positive float, optional\n Cutoff used for loop removal. 
Fluxes with an absolute value smaller\n than `zero_cutoff` are considered to be zero (default model.tolerance).\n\n Returns\n -------\n single float or dict\n Returns the minimized/maximized flux through `reaction` if\n all_fluxes == False (default). Otherwise returns a loopless flux\n solution containing the minimum/maximum flux for `reaction`."} {"_id": "q_5177", "text": "Return a stoichiometric array representation of the given model.\n\n The columns represent the reactions and rows represent\n metabolites. S[i,j] therefore contains the quantity of metabolite `i`\n produced (negative for consumed) by reaction `j`.\n\n Parameters\n ----------\n model : cobra.Model\n The cobra model to construct the matrix for.\n array_type : string\n The type of array to construct. If 'dense', return a standard\n numpy.array, 'dok', or 'lil' will construct a sparse array using\n scipy of the corresponding type and 'DataFrame' will give a\n pandas `DataFrame` with metabolite indices and reaction columns\n dtype : data-type\n The desired data-type for the array. If not given, defaults to float.\n\n Returns\n -------\n matrix of class `dtype`\n The stoichiometric matrix for the given model."} {"_id": "q_5178", "text": "r\"\"\"\n Add constraints and objective for ROOM.\n\n This function adds variables and constraints for applying regulatory\n on/off minimization (ROOM) to the model.\n\n Parameters\n ----------\n model : cobra.Model\n The model to add ROOM constraints and objective to.\n solution : cobra.Solution, optional\n A previous solution to use as a reference. 
If no solution is given,\n one will be computed using pFBA.\n linear : bool, optional\n Whether to use the linear ROOM formulation or not (default False).\n delta: float, optional\n The relative tolerance range which is additive in nature\n (default 0.03).\n epsilon: float, optional\n The absolute range of tolerance which is multiplicative\n (default 0.001).\n\n Notes\n -----\n The formulation used here is the same as stated in the original paper [1]_.\n The mathematical expression is given below:\n\n minimize \\sum_{i=1}^m y^i\n s.t. Sv = 0\n v_min <= v <= v_max\n v_j = 0\n j \u2208 A\n for 1 <= i <= m\n v_i - y_i(v_{max,i} - w_i^u) <= w_i^u (1)\n v_i - y_i(v_{min,i} - w_i^l) <= w_i^l (2)\n y_i \u2208 {0,1} (3)\n w_i^u = w_i + \\delta|w_i| + \\epsilon\n w_i^l = w_i - \\delta|w_i| - \\epsilon\n\n So, for the linear version of the ROOM , constraint (3) is relaxed to\n 0 <= y_i <= 1.\n\n See Also\n --------\n pfba : parsimonious FBA\n\n References\n ----------\n .. [1] Tomer Shlomi, Omer Berkman and Eytan Ruppin, \"Regulatory on/off\n minimization of metabolic flux changes after genetic perturbations\",\n PNAS 2005 102 (21) 7695-7700; doi:10.1073/pnas.0406346102"} {"_id": "q_5179", "text": "Sample valid flux distributions from a cobra model.\n\n The function samples valid flux distributions from a cobra model.\n Currently we support two methods:\n\n 1. 'optgp' (default) which uses the OptGPSampler that supports parallel\n sampling [1]_. Requires large numbers of samples to be performant\n (n < 1000). For smaller samples 'achr' might be better suited.\n\n or\n\n 2. 'achr' which uses artificial centering hit-and-run. This is a single\n process method with good convergence [2]_.\n\n Parameters\n ----------\n model : cobra.Model\n The model from which to sample flux distributions.\n n : int\n The number of samples to obtain. 
When using 'optgp' this must be a\n multiple of `processes`, otherwise a larger number of samples will be\n returned.\n method : str, optional\n The sampling algorithm to use.\n thinning : int, optional\n The thinning factor of the generated sampling chain. A thinning of 10\n means samples are returned every 10 steps. Defaults to 100 which in\n benchmarks gives approximately uncorrelated samples. If set to one\n will return all iterates.\n processes : int, optional\n Only used for 'optgp'. The number of processes used to generate\n samples.\n seed : int > 0, optional\n The random number seed to be used. Initialized to current time stamp\n if None.\n\n Returns\n -------\n pandas.DataFrame\n The generated flux samples. Each row corresponds to a sample of the\n fluxes and the columns are the reactions.\n\n Notes\n -----\n The samplers have a correction method to ensure equality feasibility for\n long-running chains, however this will only work for homogeneous models,\n meaning models with no non-zero fixed variables or constraints (\n right-hand side of the equalities are zero).\n\n References\n ----------\n .. [1] Megchelenbrink W, Huynen M, Marchiori E (2014)\n optGpSampler: An Improved Tool for Uniformly Sampling the Solution-Space\n of Genome-Scale Metabolic Networks.\n PLoS ONE 9(2): e86587.\n .. [2] Direction Choice for Accelerated Convergence in Hit-and-Run Sampling\n David E. Kaufman Robert L. Smith\n Operations Research 199846:1 , 84-95"} {"_id": "q_5180", "text": "Optimizely template tag.\n\n Renders Javascript code to set-up A/B testing. You must supply\n your Optimizely account number in the ``OPTIMIZELY_ACCOUNT_NUMBER``\n setting."} {"_id": "q_5181", "text": "Clicky tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your Clicky Site ID (as a string) in the ``CLICKY_SITE_ID``\n setting."} {"_id": "q_5182", "text": "Bottom Chartbeat template tag.\n\n Render the bottom Javascript code for Chartbeat. 
You must supply\n your Chartbeat User ID (as a string) in the ``CHARTBEAT_USER_ID``\n setting."} {"_id": "q_5183", "text": "Spring Metrics tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your Spring Metrics Tracking ID in the\n ``SPRING_METRICS_TRACKING_ID`` setting."} {"_id": "q_5184", "text": "SnapEngage set-up template tag.\n\n Renders Javascript code to set-up SnapEngage chat. You must supply\n your widget ID in the ``SNAPENGAGE_WIDGET_ID`` setting."} {"_id": "q_5185", "text": "Coerce strings to hashable bytes."} {"_id": "q_5186", "text": "Return a SHA-256 HMAC `user_hash` as expected by Intercom, if configured.\n\n Return None if the `INTERCOM_HMAC_SECRET_KEY` setting is not configured."} {"_id": "q_5187", "text": "Intercom.io template tag.\n\n Renders Javascript code to intercom.io testing. You must supply\n your APP ID account number in the ``INTERCOM_APP_ID``\n setting."} {"_id": "q_5188", "text": "UserVoice tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your UserVoice Widget Key in the ``USERVOICE_WIDGET_KEY``\n setting or the ``uservoice_widget_key`` template context variable."} {"_id": "q_5189", "text": "Piwik tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your Piwik domain (plus optional URI path), and tracked site ID\n in the ``PIWIK_DOMAIN_PATH`` and the ``PIWIK_SITE_ID`` setting.\n\n Custom variables can be passed in the ``piwik_vars`` context\n variable. It is an iterable of custom variables as tuples like:\n ``(index, name, value[, scope])`` where scope may be ``'page'``\n (default) or ``'visit'``. Index should be an integer and the\n other parameters should be strings."} {"_id": "q_5190", "text": "Return a constant from ``django.conf.settings``. 
The `setting`\n argument is the constant name, the `value_re` argument is a regular\n expression used to validate the setting value and the `invalid_msg`\n argument is used as exception message if the value is not valid."} {"_id": "q_5191", "text": "Return whether the visitor is coming from an internal IP address,\n based on information from the template context.\n\n The prefix is used to allow different analytics services to have\n different notions of internal addresses."} {"_id": "q_5192", "text": "Mixpanel tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your Mixpanel token in the ``MIXPANEL_API_TOKEN`` setting."} {"_id": "q_5193", "text": "Olark set-up template tag.\n\n Renders Javascript code to set-up Olark chat. You must supply\n your site ID in the ``OLARK_SITE_ID`` setting."} {"_id": "q_5194", "text": "Clickmap tracker template tag.\n\n Renders Javascript code to track page visits. You must supply\n your clickmap tracker ID (as a string) in the ``CLICKMAP_TRACKER_ID``\n setting."} {"_id": "q_5195", "text": "Gaug.es template tag.\n\n Renders Javascript code to gaug.es testing. You must supply\n your Site ID account number in the ``GAUGES_SITE_ID``\n setting."} {"_id": "q_5196", "text": "HubSpot tracking template tag.\n\n Renders Javascript code to track page visits. You must supply\n your portal ID (as a string) in the ``HUBSPOT_PORTAL_ID`` setting."} {"_id": "q_5197", "text": "Manage the printing and in-place updating of a line of characters\n\n .. 
note::\n If the string is longer than a line, then in-place updating may not\n work (it will print a new line at each refresh)."} {"_id": "q_5198", "text": "Open a subprocess and stream its output without hard-blocking.\n\n :param cmd: the command to execute within the subprocess\n :type cmd: str\n\n :param callback: function that intakes the subprocess' stdout line by line.\n It is called for each line received from the subprocess' stdout stream.\n :type callback: Callable[[Context], bool]\n\n :param timeout: the timeout time of the subprocess\n :type timeout: float\n\n :raises TimeoutError: if the subprocess' execution time exceeds\n the timeout time\n\n :return: the return code of the executed subprocess\n :rtype: int"} {"_id": "q_5199", "text": "Compute an exit code for mutmut mutation testing\n\n The following exit codes are available for mutmut:\n * 0 if all mutants were killed (OK_KILLED)\n * 1 if a fatal error occurred\n * 2 if one or more mutants survived (BAD_SURVIVED)\n * 4 if one or more mutants timed out (BAD_TIMEOUT)\n * 8 if one or more mutants caused tests to take twice as long (OK_SUSPICIOUS)\n\n Exit codes 1 to 8 will be bit-ORed so that it is possible to know what\n different mutant statuses occurred during mutation testing.\n\n :param exception:\n :type exception: Exception\n :param config:\n :type config: Config\n\n :return: integer noting the exit code of the mutation tests.\n :rtype: int"} {"_id": "q_5200", "text": "Called when the specified characteristic has changed its value."} {"_id": "q_5201", "text": "Called when the specified descriptor has changed its value."} {"_id": "q_5202", "text": "Start scanning for BLE devices."} {"_id": "q_5203", "text": "Stop scanning for BLE devices."} {"_id": "q_5204", "text": "Power on Bluetooth."} {"_id": "q_5205", "text": "Power off Bluetooth."} {"_id": "q_5206", "text": "Find the first available device that supports this service and return\n it, or None if no device is found. 
Will wait for up to timeout_sec\n seconds to find the device."} {"_id": "q_5207", "text": "Wait until the specified device has discovered the expected services\n and characteristics for this service. Should be called once before other\n calls are made on the service. Returns true if the service has been\n discovered in the specified timeout, or false if not discovered."} {"_id": "q_5208", "text": "Return the first child service found that has the specified\n UUID. Will return None if no service that matches is found."} {"_id": "q_5209", "text": "Return a list of GattService objects that have been discovered for\n this device."} {"_id": "q_5210", "text": "Return a list of UUIDs for services that are advertised by this\n device."} {"_id": "q_5211", "text": "Return the first child descriptor found that has the specified\n UUID. Will return None if no descriptor that matches is found."} {"_id": "q_5212", "text": "Read the value of this characteristic."} {"_id": "q_5213", "text": "Read the value of this descriptor."} {"_id": "q_5214", "text": "Called when the BLE adapter found a device while scanning, or has\n new advertisement data for a device."} {"_id": "q_5215", "text": "Called when a device is connected."} {"_id": "q_5216", "text": "Called when descriptor value was read or updated."} {"_id": "q_5217", "text": "Called when a new RSSI value for the peripheral is available."} {"_id": "q_5218", "text": "Clear any internally cached BLE device data. Necessary in some cases\n to prevent issues with stale device data getting cached by the OS."} {"_id": "q_5219", "text": "Disconnect any connected devices that have the specified list of\n service UUIDs. The default is an empty list which means all devices\n are disconnected."} {"_id": "q_5220", "text": "Print tree of all bluez objects, useful for debugging."} {"_id": "q_5221", "text": "Return the first device that advertises the specified service UUIDs or\n has the specified name. 
Will wait up to timeout_sec seconds for the device\n to be found, and if the timeout is zero then it will not wait at all and\n immediately return a result. When no device is found a value of None is\n returned."} {"_id": "q_5222", "text": "Retrieve a list of metadata objects associated with the specified\n list of CoreBluetooth objects. If an object cannot be found then an\n exception is thrown."} {"_id": "q_5223", "text": "Add the specified CoreBluetooth item with the associated metadata if\n it doesn't already exist. Returns the newly created or preexisting\n metadata item."} {"_id": "q_5224", "text": "Convert Objective-C CBUUID type to native Python UUID type."} {"_id": "q_5225", "text": "Return an instance of the BLE provider for the current platform."} {"_id": "q_5226", "text": "Convert the byte array to a BigInteger"} {"_id": "q_5227", "text": "Return the default set of request headers, which\n can later be expanded, based on the request type"} {"_id": "q_5228", "text": "Search the play store for an app.\n\n nb_result (int): is the maximum number of result to be returned\n\n offset (int): is used to take result starting from an index."} {"_id": "q_5229", "text": "Get app details from a package name.\n\n packageName is the app unique ID (usually starting with 'com.')."} {"_id": "q_5230", "text": "Get several apps details from a list of package names.\n\n This is much more efficient than calling N times details() since it\n requires only one request. If an item is not found it returns an empty object\n instead of throwing a RequestError('Item not found') like the details() function\n\n Args:\n packageNames (list): a list of app IDs (usually starting with 'com.').\n\n Returns:\n a list of dictionaries containing docv2 data, or None\n if the app doesn't exist"} {"_id": "q_5231", "text": "List all possible subcategories for a specific category. 
If\n also a subcategory is provided, list apps from this category.\n\n Args:\n cat (str): category id\n ctr (str): subcategory id\n nb_results (int): if a subcategory is specified, limit number\n of results to this number\n offset (int): if a subcategory is specified, start counting from this\n result\n Returns:\n A list of categories. If subcategory is specified, a list of apps in this\n category."} {"_id": "q_5232", "text": "Browse reviews for an application\n\n Args:\n packageName (str): app unique ID.\n filterByDevice (bool): filter results for current device\n sort (int): sorting criteria (values are unknown)\n nb_results (int): max number of reviews to return\n offset (int): return reviews starting from an offset value\n\n Returns:\n dict object containing all the protobuf data returned from\n the api"} {"_id": "q_5233", "text": "Download an already purchased app.\n\n Args:\n packageName (str): app unique ID (usually starting with 'com.')\n versionCode (int): version to download\n offerType (int): different type of downloads (mostly unused for apks)\n downloadToken (str): download token returned by 'purchase' API\n progress_bar (bool): whether or not to print a progress bar to stdout\n\n Returns:\n Dictionary containing apk data and a list of expansion files. As stated\n in android documentation, there can be at most 2 expansion files, one with\n main content, and one for patching the main content. Their names should\n follow this format:\n\n [main|patch]...obb\n\n Data to build this name string is provided in the dict object. 
For more\n info check https://developer.android.com/google/play/expansion-files.html"} {"_id": "q_5234", "text": "Decorator function that injects a requests.Session instance into\n the decorated function's actual parameters if not given."} {"_id": "q_5235", "text": "Generates a secure authentication token.\n\n Our token format follows the JSON Web Token (JWT) standard:\n header.claims.signature\n\n Where:\n 1) 'header' is a stringified, base64-encoded JSON object containing version and algorithm information.\n 2) 'claims' is a stringified, base64-encoded JSON object containing a set of claims:\n Library-generated claims:\n 'iat' -> The issued at time in seconds since the epoch as a number\n 'd' -> The arbitrary JSON object supplied by the user.\n User-supplied claims (these are all optional):\n 'exp' (optional) -> The expiration time of this token, as a number of seconds since the epoch.\n 'nbf' (optional) -> The 'not before' time before which the token should be rejected (seconds since the epoch)\n 'admin' (optional) -> If set to true, this client will bypass all security rules (use this to authenticate servers)\n 'debug' (optional) -> Set to true to make this client receive debug information about security rule execution.\n 'simulate' (optional, internal-only for now) -> Set to true to neuter all API operations (listens / puts\n will run security rules but not actually write or return data).\n 3) A signature that proves the validity of this token (see: http://tools.ietf.org/html/draft-ietf-jose-json-web-signature-07)\n\n For base64-encoding we use URL-safe base64 encoding. This ensures that the entire token is URL-safe\n and could, for instance, be placed as a query argument without any encoding (and this is what the JWT spec requires).\n\n Args:\n data - a json serializable object of data to be included in the token\n options - An optional dictionary of additional claims for the token. 
Possible keys include:\n a) 'expires' -- A timestamp (as a number of seconds since the epoch) denoting a time after which\n this token should no longer be valid.\n b) 'notBefore' -- A timestamp (as a number of seconds since the epoch) denoting a time before\n which this token should be rejected by the server.\n c) 'admin' -- Set to true to bypass all security rules (use this for your trusted servers).\n d) 'debug' -- Set to true to enable debug mode (so you can see the results of Rules API operations)\n e) 'simulate' -- (internal-only for now) Set to true to neuter all API operations (listens / puts\n will run security rules but not actually write or return data)\n Returns:\n A signed Firebase Authentication Token\n Raises:\n ValueError: if an invalid key is specified in options"} {"_id": "q_5236", "text": "Method that simply adjusts authentication credentials for the\n request.\n `params` is the querystring of the request.\n `headers` is the header of the request.\n\n If auth instance is not provided to this class, this method simply\n returns without doing anything."} {"_id": "q_5237", "text": "Synchronous GET request."} {"_id": "q_5238", "text": "Asynchronous GET request with the process pool."} {"_id": "q_5239", "text": "Returns zero if there are no permissions for a bit of the perm. of a file. Otherwise it returns a positive value\n\n :param os.stat_result s: os.stat(file) object\n :param str perm: R (Read) or W (Write) or X (eXecute)\n :param str pos: USR (USeR) or GRP (GRouP) or OTH (OTHer)\n :return: mask value\n :rtype: int"} {"_id": "q_5240", "text": "File is only writable by root\n\n :param str path: Path to file\n :return: True if only root can write\n :rtype: bool"} {"_id": "q_5241", "text": "Command to check configuration file. Raises InvalidConfig on error\n\n :param str file: path to config file\n :param printfn: print function for success message\n :return: None"} {"_id": "q_5242", "text": "Parse and validate the config file. 
The read data is accessible as a dictionary in this instance\n\n :return: None"} {"_id": "q_5243", "text": "Execute command on thread\n\n :param cmd: Command to execute\n :param cwd: current working directory\n :return: None"} {"_id": "q_5244", "text": "Execute command on remote machine using SSH\n\n :param cmd: Command to execute\n :param ssh: Server to connect. Port is optional\n :param cwd: current working directory\n :return: None"} {"_id": "q_5245", "text": "Get HTTP Headers to send. By default default_headers\n\n :return: HTTP Headers\n :rtype: dict"} {"_id": "q_5246", "text": "Return \"data\" value on self.data\n\n :return: data to send\n :rtype: str"} {"_id": "q_5247", "text": "Return source mac address for this Scapy Packet\n\n :param scapy.packet.Packet pkt: Scapy Packet\n :return: Mac address. Include (Amazon Device) for these devices\n :rtype: str"} {"_id": "q_5248", "text": "Scandevice callback. Register src mac to avoid src repetition.\n Print device on screen.\n\n :param scapy.packet.Packet pkt: Scapy Packet\n :return: None"} {"_id": "q_5249", "text": "Print help and scan devices on screen.\n\n :return: None"} {"_id": "q_5250", "text": "Send success or error message to configured confirmation\n\n :param str message: Body message to send\n :param bool success: Device executed successfully to personalize message\n :return: None"} {"_id": "q_5251", "text": "Start daemon mode\n\n :param bool root_allowed: Only used for ExecuteCmd\n :return: loop"} {"_id": "q_5252", "text": "Filter queryset based on keywords.\n Support for multiple-selected parent values."} {"_id": "q_5253", "text": "Return True if according to should_index the object should be indexed."} {"_id": "q_5254", "text": "Returns the settings of the index."} {"_id": "q_5255", "text": "Registers the given model with Algolia engine.\n\n If the given model is already registered with Algolia engine, a\n RegistrationError will be raised."} {"_id": "q_5256", "text": "Returns the adapter associated 
with the given model."} {"_id": "q_5257", "text": "Signal handler for when a registered model has been saved."} {"_id": "q_5258", "text": "Encode a position given in float arguments latitude, longitude to\n a geohash which will have the character count precision."} {"_id": "q_5259", "text": "Pad a string to the target length in characters, or return the original\n string if it's longer than the target length."} {"_id": "q_5260", "text": "Pad short rows to the length of the longest row to help render \"jagged\"\n CSV files"} {"_id": "q_5261", "text": "Pad each cell to the size of the largest cell in its column."} {"_id": "q_5262", "text": "Add dividers and padding to a row of cells and return a string."} {"_id": "q_5263", "text": "Calculate base id and version from a resource id.\n\n :params resource_id: Resource id.\n :params return_version: (optional) True if You need version, returns (resource_id, version)."} {"_id": "q_5264", "text": "Make a bid.\n\n :params trade_id: Trade id.\n :params bid: Amount of credits You want to spend.\n :params fast: True for fastest bidding (skips trade status & credits check)."} {"_id": "q_5265", "text": "Return items in your club, excluding consumables.\n\n :param ctype: [development / ? / ?] Card type.\n :param level: (optional) [?/?/gold] Card level.\n :param category: (optional) [fitness/?/?] 
Card category.\n :param assetId: (optional) Asset id.\n :param defId: (optional) Definition id.\n :param min_price: (optional) Minimal price.\n :param max_price: (optional) Maximum price.\n :param min_buy: (optional) Minimal buy now price.\n :param max_buy: (optional) Maximum buy now price.\n :param league: (optional) League id.\n :param club: (optional) Club id.\n :param position: (optional) Position.\n :param nationality: (optional) Nation id.\n :param rare: (optional) [boolean] True for searching special cards.\n :param playStyle: (optional) Play style.\n :param start: (optional) Start page sent to server so it is supposed to be 12/15, 24/30 etc. (default platform page_size*n)\n :param page_size: (optional) Page size (items per page)"} {"_id": "q_5266", "text": "Return all consumables from club."} {"_id": "q_5267", "text": "Return items in tradepile."} {"_id": "q_5268", "text": "Start auction. Returns trade_id.\n\n :params item_id: Item id.\n :params bid: Start bid.\n :params buy_now: Buy now price.\n :params duration: Auction duration in seconds (Default: 3600)."} {"_id": "q_5269", "text": "Quick sell.\n\n :params item_id: Item id."} {"_id": "q_5270", "text": "Send to watchlist.\n\n :params trade_id: Trade id."} {"_id": "q_5271", "text": "Send card FROM CLUB to first free slot in sbs squad."} {"_id": "q_5272", "text": "Apply consumable on player.\n\n :params item_id: Item id of player.\n :params resource_id: Resource id of consumable."} {"_id": "q_5273", "text": "Return active messages."} {"_id": "q_5274", "text": "Runs its worker method.\n\n This method will be terminated once its parent's is_running\n property turns False."} {"_id": "q_5275", "text": "Returns a NumPy array that represents the 2D pixel location,\n which is defined by PFNC, of the original image data.\n\n You may use the returned NumPy array for a calculation to map the\n original image to another format.\n\n :return: A NumPy array that represents the 2D pixel location."} {"_id": "q_5276", "text": 
"Starts image acquisition.\n\n :return: None."} {"_id": "q_5277", "text": "Stops image acquisition.\n\n :return: None."} {"_id": "q_5278", "text": "Adds a CTI file to work with to the CTI file list.\n\n :param file_path: Set a file path to the target CTI file.\n\n :return: None."} {"_id": "q_5279", "text": "Removes the specified CTI file from the CTI file list.\n\n :param file_path: Set a file path to the target CTI file.\n\n :return: None."} {"_id": "q_5280", "text": "Releases all external resources including the controlling device."} {"_id": "q_5281", "text": "Run the unit test suite with each support library and Python version."} {"_id": "q_5282", "text": "Transform README.md into a usable long description.\n\n Replaces relative references to svg images to absolute https references."} {"_id": "q_5283", "text": "Return a PrecalculatedTextMeasurer given a JSON stream.\n\n See precalculate_text.py for details on the required format."} {"_id": "q_5284", "text": "Returns a reasonable default PrecalculatedTextMeasurer."} {"_id": "q_5285", "text": "Creates a github-style badge as an SVG image.\n\n >>> badge(left_text='coverage', right_text='23%', right_color='red')\n ''\n >>> badge(left_text='build', right_text='green', right_color='green',\n ... whole_link=\"http://www.example.com/\")\n ''\n\n Args:\n left_text: The text that should appear on the left-hand-side of the\n badge e.g. \"coverage\".\n right_text: The text that should appear on the right-hand-side of the\n badge e.g. \"23%\".\n left_link: The URL that should be redirected to when the left-hand text\n is selected.\n right_link: The URL that should be redirected to when the right-hand\n text is selected.\n whole_link: The link that should be redirected to when the badge is\n selected. If set then left_link and right_right may not be set.\n logo: A url representing a logo that will be displayed inside the\n badge. Can be a data URL e.g. 
\"data:image/svg+xml;utf8,>> Rachford_Rice_flash_error(0.5, zs=[0.5, 0.3, 0.2],\n ... Ks=[1.685, 0.742, 0.532])\n 0.04406445591174976\n\n References\n ----------\n .. [1] Rachford, H. H. Jr, and J. D. Rice. \"Procedure for Use of Electronic\n Digital Computers in Calculating Flash Vaporization Hydrocarbon\n Equilibrium.\" Journal of Petroleum Technology 4, no. 10 (October 1,\n 1952): 19-3. doi:10.2118/952327-G."} {"_id": "q_5296", "text": "r'''Calculates the activity coefficients of each species in a mixture\n using the Wilson method, given their mole fractions, and\n dimensionless interaction parameters. Those are normally correlated with\n temperature, and need to be calculated separately.\n\n .. math::\n \\ln \\gamma_i = 1 - \\ln \\left(\\sum_j^N \\Lambda_{ij} x_j\\right)\n -\\sum_j^N \\frac{\\Lambda_{ji}x_j}{\\displaystyle\\sum_k^N \\Lambda_{jk}x_k}\n\n Parameters\n ----------\n xs : list[float]\n Liquid mole fractions of each species, [-]\n params : list[list[float]]\n Dimensionless interaction parameters of each compound with each other,\n [-]\n\n Returns\n -------\n gammas : list[float]\n Activity coefficient for each species in the liquid mixture, [-]\n\n Notes\n -----\n This model needs N^2 parameters.\n\n The original model correlated the interaction parameters using the standard\n pure-component molar volumes of each species at 25\u00b0C, in the following form:\n\n .. math::\n \\Lambda_{ij} = \\frac{V_j}{V_i} \\exp\\left(\\frac{-\\lambda_{i,j}}{RT}\\right)\n\n However, that form has less flexibility and offered no advantage over\n using only regressed parameters.\n\n Most correlations for the interaction parameters include some of the terms\n shown in the following form:\n\n .. 
math::\n \\ln \\Lambda_{ij} =a_{ij}+\\frac{b_{ij}}{T}+c_{ij}\\ln T + d_{ij}T\n + \\frac{e_{ij}}{T^2} + h_{ij}{T^2}\n\n The Wilson model is not applicable to liquid-liquid systems.\n\n Examples\n --------\n Ethanol-water example, at 343.15 K and 1 MPa:\n\n >>> Wilson([0.252, 0.748], [[1, 0.154], [0.888, 1]])\n [1.8814926087178843, 1.1655774931125487]\n\n References\n ----------\n .. [1] Wilson, Grant M. \"Vapor-Liquid Equilibrium. XI. A New Expression for\n the Excess Free Energy of Mixing.\" Journal of the American Chemical\n Society 86, no. 2 (January 1, 1964): 127-130. doi:10.1021/ja01056a002.\n .. [2] Gmehling, Jurgen, Barbel Kolbe, Michael Kleiber, and Jurgen Rarey.\n Chemical Thermodynamics for Process Simulation. 1st edition. Weinheim:\n Wiley-VCH, 2012."} {"_id": "q_5297", "text": "r'''Determines the phase of a one-species chemical system according to\n basic rules, using whatever information is available. Considers only the\n phases liquid, solid, and gas; does not consider two-phase\n scenarios, as can occur between phase boundaries.\n\n * If the melting temperature is known and the temperature is under or equal\n to it, consider it a solid.\n * If the critical temperature is known and the temperature is greater or\n equal to it, consider it a gas.\n * If the vapor pressure at `T` is known and the pressure is under or equal\n to it, consider it a gas. 
If the pressure is greater than the vapor\n pressure, consider it a liquid.\n * If the melting temperature, critical temperature, and vapor pressure are\n not known, attempt to use the boiling point to provide phase information.\n If the pressure is between 90 kPa and 110 kPa (approximately normal),\n consider it a liquid if it is under the boiling temperature and a gas if\n above the boiling temperature.\n * If the pressure is above 110 kPa and the boiling temperature is known,\n consider it a liquid if the temperature is under the boiling temperature.\n * Return None otherwise.\n\n Parameters\n ----------\n T : float\n Temperature, [K]\n P : float\n Pressure, [Pa]\n Tm : float, optional\n Normal melting temperature, [K]\n Tb : float, optional\n Normal boiling point, [K]\n Tc : float, optional\n Critical temperature, [K]\n Psat : float, optional\n Vapor pressure of the fluid at `T`, [Pa]\n\n Returns\n -------\n phase : str\n Either 's', 'l', 'g', or None if the phase cannot be determined\n\n Notes\n -----\n No special attention is paid to any phase transition. For the case where\n the melting point is not provided, the possibility of the fluid being solid\n is simply ignored.\n\n Examples\n --------\n >>> identify_phase(T=280, P=101325, Tm=273.15, Psat=991)\n 'l'"} {"_id": "q_5298", "text": "r'''Charge of a chemical, computed with RDKit from a chemical's SMILES.\n If RDKit is not available, holds None.\n\n Examples\n --------\n >>> Chemical('sodium ion').charge\n 1"} {"_id": "q_5299", "text": "r'''RDKit object of the chemical, without hydrogen. If RDKit is not\n available, holds None.\n\n For examples of what can be done with RDKit, see\n `their website `_."} {"_id": "q_5300", "text": "r'''RDKit object of the chemical, with hydrogen. 
If RDKit is not\n available, holds None.\n\n For examples of what can be done with RDKit, see\n `their website `_."} {"_id": "q_5301", "text": "r'''Dictionary of legal status indicators for the chemical.\n\n Examples\n --------\n >>> pprint(Chemical('benzene').legal_status)\n {'DSL': 'LISTED',\n 'EINECS': 'LISTED',\n 'NLP': 'UNLISTED',\n 'SPIN': 'LISTED',\n 'TSCA': 'LISTED'}"} {"_id": "q_5302", "text": "r'''Dictionary of economic status indicators for the chemical.\n\n Examples\n --------\n >>> pprint(Chemical('benzene').economic_status)\n [\"US public: {'Manufactured': 6165232.1, 'Imported': 463146.474, 'Exported': 271908.252}\",\n u'1,000,000 - 10,000,000 tonnes per annum',\n u'Intermediate Use Only',\n 'OECD HPV Chemicals']"} {"_id": "q_5303", "text": "r'''This function handles the retrieval of a chemical's Global Warming\n Potential, relative to CO2. Lookup is based on CASRNs. Will automatically\n select a data source to use if no Method is provided; returns None if the\n data is not available.\n\n Returns the GWP for the 100yr outlook by default.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n GWP : float\n Global warming potential, [(impact/mass chemical)/(impact/mass CO2)]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain GWP with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'IPCC (2007) 100yr',\n 'IPCC (2007) 100yr-SAR', 'IPCC (2007) 20yr', and 'IPCC (2007) 500yr'. \n All valid values are also held in the list GWP_methods.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the GWP for the desired chemical, and will return methods\n instead of the GWP\n\n Notes\n -----\n All data is from [1]_, the official source. 
Several chemicals available\n in [1]_ are not included here as they do not have a CAS.\n Methods are 'IPCC (2007) 100yr', 'IPCC (2007) 100yr-SAR',\n 'IPCC (2007) 20yr', and 'IPCC (2007) 500yr'.\n\n Examples\n --------\n Methane, 100-yr outlook\n\n >>> GWP(CASRN='74-82-8')\n 25.0\n\n References\n ----------\n .. [1] IPCC. \"2.10.2 Direct Global Warming Potentials - AR4 WGI Chapter 2:\n Changes in Atmospheric Constituents and in Radiative Forcing.\" 2007.\n https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch2s2-10-2.html."} {"_id": "q_5304", "text": "r'''Method to calculate vapor pressure of a fluid at temperature `T`\n with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate vapor pressure, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n Psat : float\n Vapor pressure at T, [Pa]"} {"_id": "q_5305", "text": "Counts the number of real volumes in `Vs`, and determines what to do.\n If there is only one real volume, the method \n `set_properties_from_solution` is called with it. If there are\n two real volumes, `set_properties_from_solution` is called once with \n each volume. The phase is returned by `set_properties_from_solution`, \n and the volume is set to either `V_l` or `V_g` as appropriate. 
\n\n Parameters\n ----------\n Vs : list[float]\n Three possible molar volumes, [m^3/mol]"} {"_id": "q_5306", "text": "Generic method to calculate `T` from a specified `P` and `V`.\n Provides SciPy's `newton` solver, and iterates to solve the general\n equation for `P`, recalculating `a_alpha` as a function of temperature\n using `a_alpha_and_derivatives` each iteration.\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n quick : bool, optional\n Whether to use a SymPy cse-derived expression (3x faster) or \n individual formulas - not applicable where a numerical solver is\n used.\n\n Returns\n -------\n T : float\n Temperature, [K]"} {"_id": "q_5307", "text": "r'''Method to calculate `a_alpha` and its first and second\n derivatives for this EOS. Returns `a_alpha`, `da_alpha_dT`, and \n `d2a_alpha_dT2`. See `GCEOS.a_alpha_and_derivatives` for more \n documentation. Uses the set values of `Tc`, `kappa`, and `a`. \n \n For use in `solve_T`, returns only `a_alpha` if full is False.\n\n .. math::\n a\\alpha = a \\left(\\kappa \\left(- \\frac{T^{0.5}}{Tc^{0.5}} \n + 1\\right) + 1\\right)^{2}\n \n \\frac{d a\\alpha}{dT} = - \\frac{1.0 a \\kappa}{T^{0.5} Tc^{0.5}}\n \\left(\\kappa \\left(- \\frac{T^{0.5}}{Tc^{0.5}} + 1\\right) + 1\\right)\n\n \\frac{d^2 a\\alpha}{dT^2} = 0.5 a \\kappa \\left(- \\frac{1}{T^{1.5} \n Tc^{0.5}} \\left(\\kappa \\left(\\frac{T^{0.5}}{Tc^{0.5}} - 1\\right)\n - 1\\right) + \\frac{\\kappa}{T^{1.0} Tc^{1.0}}\\right)"} {"_id": "q_5308", "text": "r'''Method to calculate `T` from a specified `P` and `V` for the PRSV\n EOS. 
Uses `Tc`, `a`, `b`, `kappa0` and `kappa` as well, obtained from \n the class's namespace.\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n quick : bool, optional\n Whether to use a SymPy cse-derived expression (somewhat faster) or \n individual formulas.\n\n Returns\n -------\n T : float\n Temperature, [K]\n \n Notes\n -----\n Not guaranteed to produce a solution. There are actually two solutions,\n one much higher than normally desired; it is possible the solver could\n converge on this."} {"_id": "q_5309", "text": "r'''Method to calculate `T` from a specified `P` and `V` for the VDW\n EOS. Uses `a`, and `b`, obtained from the class's namespace.\n\n .. math::\n T = \\frac{1}{R V^{2}} \\left(P V^{2} \\left(V - b\\right)\n + V a - a b\\right)\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n\n Returns\n -------\n T : float\n Temperature, [K]"} {"_id": "q_5310", "text": "r'''Method to calculate `T` from a specified `P` and `V` for the RK\n EOS. Uses `a`, and `b`, obtained from the class's namespace.\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n quick : bool, optional\n Whether to use a SymPy cse-derived expression (3x faster) or \n individual formulas\n\n Returns\n -------\n T : float\n Temperature, [K]\n\n Notes\n -----\n The exact solution can be derived as follows; it is excluded for \n brevity.\n \n >>> from sympy import *\n >>> P, T, V, R = symbols('P, T, V, R')\n >>> Tc, Pc = symbols('Tc, Pc')\n >>> a, b = symbols('a, b')\n\n >>> RK = Eq(P, R*T/(V-b) - a/sqrt(T)/(V*V + b*V))\n >>> # solve(RK, T)"} {"_id": "q_5311", "text": "r'''Method to calculate `T` from a specified `P` and `V` for the API \n SRK EOS. 
Uses `a`, `b`, and `Tc` obtained from the class's namespace.\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n quick : bool, optional\n Whether to use a SymPy cse-derived expression (3x faster) or \n individual formulas\n\n Returns\n -------\n T : float\n Temperature, [K]\n\n Notes\n -----\n If S2 is set to 0, the solution is the same as in the SRK EOS, and that\n is used. Otherwise, Newton's method must be used to solve for `T`. \n There are 8 roots of T in that case, six of them real. No guarantee can\n be made regarding which root will be obtained."} {"_id": "q_5312", "text": "r'''Method to calculate `a_alpha` and its first and second\n derivatives for this EOS. Returns `a_alpha`, `da_alpha_dT`, and \n `d2a_alpha_dT2`. See `GCEOS.a_alpha_and_derivatives` for more \n documentation. Uses the set values of `Tc`, `omega`, and `a`.\n \n Because of its similarity to the TWUPR EOS, this has been moved to an \n external `TWU_a_alpha_common` function. See it for further \n documentation."} {"_id": "q_5313", "text": "r'''This function handles the retrieval of a chemical's boiling\n point. Lookup is based on CASRNs. Will automatically select a data\n source to use if no Method is provided; returns None if the data is not\n available.\n\n Preferred sources are 'CRC Physical Constants, organic' for organic\n chemicals, and 'CRC Physical Constants, inorganic' for inorganic\n chemicals. 
Function has data for approximately 13000 chemicals.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Tb : float\n Boiling temperature, [K]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Tb with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n Tb_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Tb for the desired chemical, and will return methods instead of Tb\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods,\n useful for performance reasons and ignoring inaccurate methods\n\n Notes\n -----\n A total of four methods are available for this function. They are:\n\n * 'CRC_ORG', a compilation of data on organics\n as published in [1]_.\n * 'CRC_INORG', a compilation of data on\n inorganics as published in [1]_.\n * 'YAWS', a large compilation of data from a\n variety of sources; no data points are sourced in the work of [2]_.\n * 'PSAT_DEFINITION', calculation of boiling point from a\n vapor pressure calculation. This is normally off by a fraction of a\n degree even in the best cases. Listed in IgnoreMethods by default\n for performance reasons.\n\n Examples\n --------\n >>> Tb('7732-18-5')\n 373.124\n\n References\n ----------\n .. [1] Haynes, W.M., Thomas J. Bruno, and David R. Lide. CRC Handbook of\n Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014.\n .. [2] Yaws, Carl L. Thermophysical Properties of Chemicals and\n Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional\n Publishing, 2014."} {"_id": "q_5314", "text": "r'''This function handles the retrieval of a chemical's melting\n point. Lookup is based on CASRNs. 
Will automatically select a data\n source to use if no Method is provided; returns None if the data is not\n available.\n\n Preferred sources are 'Open Notebook Melting Points', with backup sources\n 'CRC Physical Constants, organic' for organic chemicals, and\n 'CRC Physical Constants, inorganic' for inorganic chemicals. Function has\n data for approximately 14000 chemicals.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Tm : float\n Melting temperature, [K]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Tm with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n Tm_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Tm for the desired chemical, and will return methods instead of Tm\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods\n\n Notes\n -----\n A total of three sources are available for this function. They are:\n\n * 'OPEN_NTBKM', a compilation of data on organics\n as published in [1]_ as Open Notebook Melting Points; Averaged \n (median) values were used when\n multiple points were available. For more information on this\n invaluable and excellent collection, see\n http://onswebservices.wikispaces.com/meltingpoint.\n * 'CRC_ORG', a compilation of data on organics\n as published in [2]_.\n * 'CRC_INORG', a compilation of data on\n inorganics as published in [2]_.\n\n Examples\n --------\n >>> Tm(CASRN='7732-18-5')\n 273.15\n\n References\n ----------\n .. [1] Bradley, Jean-Claude, Antony Williams, and Andrew Lang.\n \"Jean-Claude Bradley Open Melting Point Dataset\", May 20, 2014.\n https://figshare.com/articles/Jean_Claude_Bradley_Open_Melting_Point_Datset/1031637.\n .. [2] Haynes, W.M., Thomas J. Bruno, and David R. Lide. 
CRC Handbook of\n Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014."} {"_id": "q_5315", "text": "r'''Calculates enthalpy of vaporization at arbitrary temperatures using the\n Clapeyron equation.\n\n The enthalpy of vaporization is given by:\n\n .. math::\n \\Delta H_{vap} = RT \\Delta Z \\frac{\\ln (P_c/Psat)}{(1-T_{r})}\n\n Parameters\n ----------\n T : float\n Temperature of fluid [K]\n Tc : float\n Critical temperature of fluid [K]\n Pc : float\n Critical pressure of fluid [Pa]\n dZ : float\n Change in compressibility factor between liquid and gas, []\n Psat : float\n Saturation pressure of fluid [Pa], optional\n\n Returns\n -------\n Hvap : float\n Enthalpy of vaporization, [J/mol]\n\n Notes\n -----\n No original source is available for this equation.\n [1]_ claims this equation overpredicts enthalpy by several percent.\n Under Tr = 0.8, dZ = 1 is a reasonable assumption.\n This equation is most accurate at the normal boiling point.\n\n Internal units are bar.\n\n WARNING: I believe it possible that the adjustment for pressure may be incorrect\n\n Examples\n --------\n Problem from Perry's examples.\n\n >>> Clapeyron(T=294.0, Tc=466.0, Pc=5.55E6)\n 26512.354585061985\n\n References\n ----------\n .. [1] Poling, Bruce E. The Properties of Gases and Liquids. 
5th edition.\n New York: McGraw-Hill Professional, 2000."} {"_id": "q_5316", "text": "This function handles the calculation of a chemical's enthalpy of fusion.\n Generally this is used by the chemical class, as all parameters are passed.\n Calling the function directly works okay.\n\n Enthalpy of fusion is a weak function of pressure, and its effects are\n neglected.\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface."} {"_id": "q_5317", "text": "This function handles the calculation of a chemical's enthalpy of sublimation.\n Generally this is used by the chemical class, as all parameters are passed.\n\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface."} {"_id": "q_5318", "text": "This function handles the retrieval of a mixture's liquidus point.\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface.\n\n >>> Tliquidus(Tms=[250.0, 350.0], xs=[0.5, 0.5])\n 350.0\n >>> Tliquidus(Tms=[250, 350], xs=[0.5, 0.5], Method='Simple')\n 300.0\n >>> Tliquidus(Tms=[250, 350], xs=[0.5, 0.5], AvailableMethods=True)\n ['Maximum', 'Simple', 'None']"} {"_id": "q_5319", "text": "r'''This function handles the calculation of a chemical's solubility\n parameter. Calculation is a function of temperature, but is not always\n presented as such. No lookup values are available; either `Hvapm`, `Vml`,\n and `T` are provided or the calculation cannot be performed.\n\n .. 
math::\n \\delta = \\sqrt{\\frac{\\Delta H_{vap} - RT}{V_m}}\n\n Parameters\n ----------\n T : float\n Temperature of the fluid [K]\n Hvapm : float\n Heat of vaporization [J/mol]\n Vml : float\n Specific volume of the liquid [m^3/mol]\n CASRN : str, optional\n CASRN of the fluid, not currently used [-]\n\n Returns\n -------\n delta : float\n Solubility parameter, [Pa^0.5]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain the solubility parameter\n with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n solubility_parameter_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the solubility parameter for the desired chemical, and will return\n methods instead of the solubility parameter\n\n Notes\n -----\n Undefined past the critical point. For convenience, if Hvap is not defined,\n an error is not raised; None is returned instead. Also for convenience,\n if Hvapm is less than RT, None is returned to avoid taking the root of a\n negative number.\n\n This parameter is often given in units of cal/ml, which is 2045.48 times\n smaller than the value returned here.\n\n Examples\n --------\n Pentane at STP\n\n >>> solubility_parameter(T=298.2, Hvapm=26403.3, Vml=0.000116055)\n 14357.681538173534\n\n References\n ----------\n .. [1] Barton, Allan F. M. CRC Handbook of Solubility Parameters and Other\n Cohesion Parameters, Second Edition. CRC Press, 1991."} {"_id": "q_5320", "text": "r'''Returns the maximum solubility of a solute in a solvent.\n\n .. 
math::\n \\ln x_i^L \\gamma_i^L = \\frac{\\Delta H_{m,i}}{RT}\\left(\n 1 - \\frac{T}{T_{m,i}}\\right) - \\frac{\\Delta C_{p,i}(T_{m,i}-T)}{RT}\n + \\frac{\\Delta C_{p,i}}{R}\\ln\\frac{T_m}{T}\n\n \\Delta C_{p,i} = C_{p,i}^L - C_{p,i}^S\n\n Parameters\n ----------\n T : float\n Temperature of the system [K]\n Tm : float\n Melting temperature of the solute [K]\n Hm : float\n Heat of melting at the melting temperature of the solute [J/mol]\n Cpl : float, optional\n Molar heat capacity of the solute as a liquid [J/mol/K]\n Cps : float, optional\n Molar heat capacity of the solute as a solid [J/mol/K]\n gamma : float, optional\n Activity coefficient of the solute as a liquid [-]\n\n Returns\n -------\n x : float\n Mole fraction of solute at maximum solubility [-]\n\n Notes\n -----\n gamma is of the solute in liquid phase\n\n Examples\n --------\n From [1]_, matching example\n\n >>> solubility_eutectic(T=260., Tm=278.68, Hm=9952., Cpl=0, Cps=0, gamma=3.0176)\n 0.24340068761677464\n\n References\n ----------\n .. [1] Gmehling, Jurgen. Chemical Thermodynamics: For Process Simulation.\n Weinheim, Germany: Wiley-VCH, 2012."} {"_id": "q_5321", "text": "r'''Returns the freezing point depression caused by a solute in a solvent.\n Can use either the mole fraction of the solute or its molality and the\n molecular weight of the solvent. Assumes ideal system behavior.\n\n .. math::\n \\Delta T_m = \\frac{R T_m^2 x}{\\Delta H_m}\n\n \\Delta T_m = \\frac{R T_m^2 (MW) M}{1000 \\Delta H_m}\n\n Parameters\n ----------\n Tm : float\n Melting temperature of the solute [K]\n Hm : float\n Heat of melting at the melting temperature of the solute [J/mol]\n x : float, optional\n Mole fraction of the solute [-]\n M : float, optional\n Molality [mol/kg]\n MW: float, optional\n Molecular weight of the solvent [g/mol]\n\n Returns\n -------\n dTm : float\n Freezing point depression [K]\n\n Notes\n -----\n MW is the molecular weight of the solvent. 
M is the molality of the solute.\n\n Examples\n --------\n From [1]_, matching example.\n\n >>> Tm_depression_eutectic(353.35, 19110, .02)\n 1.0864594900639515\n\n References\n ----------\n .. [1] Gmehling, Jurgen. Chemical Thermodynamics: For Process Simulation.\n Weinheim, Germany: Wiley-VCH, 2012."} {"_id": "q_5322", "text": "r'''Calculates saturation liquid volume, using Rackett CSP method and\n critical properties.\n\n The molar volume of a liquid is given by:\n\n .. math::\n V_s = \\frac{RT_c}{P_c}{Z_c}^{[1+(1-{T/T_c})^{2/7} ]}\n\n Units are all currently in m^3/mol - this can be changed to kg/m^3\n\n Parameters\n ----------\n T : float\n Temperature of fluid [K]\n Tc : float\n Critical temperature of fluid [K]\n Pc : float\n Critical pressure of fluid [Pa]\n Zc : float\n Critical compressibility of fluid, [-]\n\n Returns\n -------\n Vs : float\n Saturation liquid volume, [m^3/mol]\n\n Notes\n -----\n Units are dependent on gas constant R, imported from scipy.\n According to Reid et al., underpredicts volume for compounds with Zc < 0.22.\n\n Examples\n --------\n Propane, example from the API Handbook\n\n >>> Vm_to_rho(Rackett(272.03889, 369.83, 4248000.0, 0.2763), 44.09562)\n 531.3223212651092\n\n References\n ----------\n .. [1] Rackett, Harold G. \"Equation of State for Saturated Liquids.\"\n Journal of Chemical & Engineering Data 15, no. 4 (1970): 514-517.\n doi:10.1021/je60047a012"} {"_id": "q_5323", "text": "r'''Calculates saturation liquid volume, using Yamada and Gunn CSP method\n and a chemical's critical properties and acentric factor.\n\n The molar volume of a liquid is given by:\n\n .. 
math::\n V_s = \\frac{RT_c}{P_c}{(0.29056-0.08775\\omega)}^{[1+(1-{T/T_c})^{2/7}]}\n\n Units are in m^3/mol.\n\n Parameters\n ----------\n T : float\n Temperature of fluid [K]\n Tc : float\n Critical temperature of fluid [K]\n Pc : float\n Critical pressure of fluid [Pa]\n omega : float\n Acentric factor for fluid, [-]\n\n Returns\n -------\n Vs : float\n Saturation liquid volume, [m^3/mol]\n\n Notes\n -----\n This equation is an improvement on the Rackett equation.\n This is often presented as the Rackett equation.\n The acentric factor is used here, instead of the critical compressibility.\n A variant using a reference fluid also exists.\n\n Examples\n --------\n >>> Yamada_Gunn(300, 647.14, 22048320.0, 0.245)\n 2.1882836429895796e-05\n\n References\n ----------\n .. [1] Gunn, R. D., and Tomoyoshi Yamada. \"A Corresponding States\n Correlation of Saturated Liquid Volumes.\" AIChE Journal 17, no. 6\n (1971): 1341-45. doi:10.1002/aic.690170613\n .. [2] Yamada, Tomoyoshi, and Robert D. Gunn. \"Saturated Liquid Molar\n Volumes. Rackett Equation.\" Journal of Chemical & Engineering Data 18,\n no. 2 (1973): 234-36. doi:10.1021/je60057a006"} {"_id": "q_5324", "text": "r'''Calculates saturation liquid density, using the Townsend and Hales\n CSP method as modified from the original Riedel equation. Uses\n chemical critical volume and temperature, as well as acentric factor.\n\n The density of a liquid is given by:\n\n .. 
math::\n Vs = V_c/\\left(1+0.85(1-T_r)+(1.692+0.986\\omega)(1-T_r)^{1/3}\\right)\n\n Parameters\n ----------\n T : float\n Temperature of fluid [K]\n Tc : float\n Critical temperature of fluid [K]\n Vc : float\n Critical volume of fluid [m^3/mol]\n omega : float\n Acentric factor for fluid, [-]\n\n Returns\n -------\n Vs : float\n Saturation liquid volume, [m^3/mol]\n\n Notes\n -----\n The requirement for critical volume and acentric factor requires all data.\n\n Examples\n --------\n >>> Townsend_Hales(300, 647.14, 55.95E-6, 0.3449)\n 1.8007361992619923e-05\n\n References\n ----------\n .. [1] Hales, J. L, and R Townsend. \"Liquid Densities from 293 to 490 K of\n Nine Aromatic Hydrocarbons.\" The Journal of Chemical Thermodynamics\n 4, no. 5 (1972): 763-72. doi:10.1016/0021-9614(72)90050-X"} {"_id": "q_5325", "text": "r'''Calculate saturation liquid density using the COSTALD CSP method.\n\n A popular and accurate estimation method. If possible, fit parameters are\n used; alternatively critical properties work well.\n\n The density of a liquid is given by:\n\n .. 
math::\n V_s=V^*V^{(0)}[1-\\omega_{SRK}V^{(\\delta)}]\n\n V^{(0)}=1-1.52816(1-T_r)^{1/3}+1.43907(1-T_r)^{2/3}\n - 0.81446(1-T_r)+0.190454(1-T_r)^{4/3}\n\n V^{(\\delta)}=\\frac{-0.296123+0.386914T_r-0.0427258T_r^2-0.0480645T_r^3}\n {T_r-1.00001}\n\n Units are that of critical or fit constant volume.\n\n Parameters\n ----------\n T : float\n Temperature of fluid [K]\n Tc : float\n Critical temperature of fluid [K]\n Vc : float\n Critical volume of fluid [m^3/mol].\n This parameter is alternatively a fit parameter\n omega : float\n (ideally SRK) Acentric factor for fluid, [-]\n This parameter is alternatively a fit parameter.\n\n Returns\n -------\n Vs : float\n Saturation liquid volume\n\n Notes\n -----\n 196 constants are fit to this function in [1]_.\n Range: 0.25 < Tr < 0.95, often said to be to 1.0\n\n This function has been checked with the API handbook example problem.\n\n Examples\n --------\n Propane, from an example in the API Handbook\n\n >>> Vm_to_rho(COSTALD(272.03889, 369.83333, 0.20008161E-3, 0.1532), 44.097)\n 530.3009967969841\n\n\n References\n ----------\n .. [1] Hankinson, Risdon W., and George H. Thomson. \"A New Correlation for\n Saturated Densities of Liquids and Their Mixtures.\" AIChE Journal\n 25, no. 4 (1979): 653-663. doi:10.1002/aic.690250412"} {"_id": "q_5326", "text": "r'''Calculate mixture liquid density using the Amgat mixing rule.\n Highly inaccurate, but easy to use. Assumes ideal liquids with\n no excess volume. Average molecular weight should be used with it to obtain\n density.\n\n .. math::\n V_{mix} = \\sum_i x_i V_i\n\n or in terms of density:\n\n .. 
math::\n\n \\rho_{mix} = \\sum\\frac{x_i}{\\rho_i}\n\n Parameters\n ----------\n xs : array\n Mole fractions of each component, []\n Vms : array\n Molar volumes of each fluids at conditions [m^3/mol]\n\n Returns\n -------\n Vm : float\n Mixture liquid volume [m^3/mol]\n\n Notes\n -----\n Units are that of the given volumes.\n It has been suggested to use this equation with weight fractions,\n but the results have been less accurate.\n\n Examples\n --------\n >>> Amgat([0.5, 0.5], [4.057e-05, 5.861e-05])\n 4.9590000000000005e-05"} {"_id": "q_5327", "text": "r'''Calculate mixture liquid density using the COSTALD CSP method.\n\n A popular and accurate estimation method. If possible, fit parameters are\n used; alternatively critical properties work well.\n\n The mixing rules giving parameters for the pure component COSTALD\n equation are:\n\n .. math::\n T_{cm} = \\frac{\\sum_i\\sum_j x_i x_j (V_{ij}T_{cij})}{V_m}\n\n V_m = 0.25\\left[ \\sum x_i V_i + 3(\\sum x_i V_i^{2/3})(\\sum_i x_i V_i^{1/3})\\right]\n\n V_{ij}T_{cij} = (V_iT_{ci}V_{j}T_{cj})^{0.5}\n\n \\omega = \\sum_i z_i \\omega_i\n\n Parameters\n ----------\n xs: list\n Mole fractions of each component\n T : float\n Temperature of fluid [K]\n Tcs : list\n Critical temperature of fluids [K]\n Vcs : list\n Critical volumes of fluids [m^3/mol].\n This parameter is alternatively a fit parameter\n omegas : list\n (ideally SRK) Acentric factor of all fluids, [-]\n This parameter is alternatively a fit parameter.\n\n Returns\n -------\n Vs : float\n Saturation liquid mixture volume\n\n Notes\n -----\n Range: 0.25 < Tr < 0.95, often said to be to 1.0\n No example has been found.\n Units are that of critical or fit constant volume.\n\n Examples\n --------\n >>> COSTALD_mixture([0.4576, 0.5424], 298., [512.58, 647.29],[0.000117, 5.6e-05], [0.559,0.344] )\n 2.706588773271354e-05\n\n References\n ----------\n .. [1] Hankinson, Risdon W., and George H. Thomson. 
\"A New Correlation for\n Saturated Densities of Liquids and Their Mixtures.\" AIChE Journal\n 25, no. 4 (1979): 653-663. doi:10.1002/aic.690250412"} {"_id": "q_5328", "text": "r'''Method to calculate molar volume of a liquid mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n Vm : float\n Molar volume of the liquid mixture at the given conditions, \n [m^3/mol]"} {"_id": "q_5329", "text": "r'''Method to calculate pressure-dependent gas molar volume at\n temperature `T` and pressure `P` with a given method.\n\n This method has no exception handling; see `TP_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate molar volume, [K]\n P : float\n Pressure at which to calculate molar volume, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n Vm : float\n Molar volume of the gas at T and P, [m^3/mol]"} {"_id": "q_5330", "text": "r'''Method to calculate the molar volume of a solid at tempearture `T`\n with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate molar volume, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n Vms : float\n Molar volume of the solid at T, [m^3/mol]"} {"_id": "q_5331", "text": "r'''Looks up the legal status of a chemical according to either a specifc\n method or with all methods.\n\n Returns either the status as a string for a specified 
method, or the\n status of the chemical in all available data sources, in the format\n {source: status}.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n status : str or dict\n Legal status information [-]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain legal status with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n legal_status_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the legal status for the desired chemical, and will return methods\n instead of the status\n CASi : int, optional\n CASRN as an integer, used internally [-]\n\n Notes\n -----\n\n Supported methods are:\n\n * **DSL**: Canada Domestic Substance List, [1]_. As extracted on Feb 11, 2015\n from an HTML list. This list is updated continuously, so this version\n will always be somewhat old. Strictly speaking, there are multiple\n lists but they are all bundled together here. A chemical may be\n 'Listed', or be on the 'Non-Domestic Substances List (NDSL)',\n or be on the list of substances with 'Significant New Activity (SNAc)',\n or be on the DSL but with a 'Ministerial Condition pertaining to this\n substance', or have been removed from the DSL, or have had a\n Ministerial prohibition for the substance.\n * **TSCA**: USA EPA Toxic Substances Control Act Chemical Inventory, [2]_.\n This list is as extracted on 2016-01. It is believed this list is\n updated on a periodic basis (> 6 months). A chemical may simply be\n 'Listed', or may have certain flags attached to it. All these flags\n are described in the dict TSCA_flags.\n * **EINECS**: European INventory of Existing Commercial chemical\n Substances, [3]_. As extracted from a spreadsheet dynamically\n generated at [3]_. 
This list was obtained March 2015; a more recent\n revision already exists.\n * **NLP**: No Longer Polymers, a list of chemicals with special\n regulatory exemptions in EINECS. Also described at [3]_.\n * **SPIN**: Substances Prepared in Nordic Countries. Also a boolean\n data type. Retrieved 2015-03 from [4]_.\n\n Other methods which could be added are:\n\n * Australia: AICS Australian Inventory of Chemical Substances\n * China: Inventory of Existing Chemical Substances Produced or Imported\n in China (IECSC)\n * Europe: REACH List of Registered Substances\n * India: List of Hazardous Chemicals\n * Japan: ENCS: Inventory of existing and new chemical substances\n * Korea: Existing Chemicals Inventory (KECI)\n * Mexico: INSQ National Inventory of Chemical Substances in Mexico\n * New Zealand: Inventory of Chemicals (NZIoC)\n * Philippines: PICCS Philippines Inventory of Chemicals and Chemical\n Substances\n\n Examples\n --------\n >>> pprint(legal_status('64-17-5'))\n {'DSL': 'LISTED',\n 'EINECS': 'LISTED',\n 'NLP': 'UNLISTED',\n 'SPIN': 'LISTED',\n 'TSCA': 'LISTED'}\n\n References\n ----------\n .. [1] Government of Canada. \"Substances Lists\" Feb 11, 2015.\n https://www.ec.gc.ca/subsnouvelles-newsubs/default.asp?n=47F768FE-1.\n .. [2] US EPA. \"TSCA Chemical Substance Inventory.\" Accessed April 2016.\n https://www.epa.gov/tsca-inventory.\n .. [3] ECHA. \"EC Inventory\". Accessed March 2015.\n http://echa.europa.eu/information-on-chemicals/ec-inventory.\n .. [4] SPIN. \"SPIN Substances in Products In Nordic Countries.\" Accessed\n March 2015. 
http://195.215.202.233/DotNetNuke/default.aspx."} {"_id": "q_5332", "text": "Look up the economic status of a chemical.\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface.\n\n >>> pprint(economic_status(CASRN='98-00-0'))\n [\"US public: {'Manufactured': 0.0, 'Imported': 10272.711, 'Exported': 184.127}\",\n u'10,000 - 100,000 tonnes per annum',\n 'OECD HPV Chemicals']\n\n >>> economic_status(CASRN='13775-50-3') # SODIUM SESQUISULPHATE\n []\n >>> economic_status(CASRN='98-00-0', Method='OECD high production volume chemicals')\n 'OECD HPV Chemicals'\n >>> economic_status(CASRN='98-01-1', Method='European Chemicals Agency Total Tonnage Bands')\n [u'10,000 - 100,000 tonnes per annum']"} {"_id": "q_5333", "text": "Method to compute all available properties with the Joback method;\n returns their results as a dict. For the temperature dependent values\n Cpig and mul, both the coefficients and objects to perform calculations\n are returned."} {"_id": "q_5334", "text": "r'''This function handles the retrieval of a chemical's conductivity.\n Lookup is based on CASRNs. 
Will automatically select a data source to use\n if no Method is provided; returns None if the data is not available.\n\n Function has data for approximately 100 chemicals.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n kappa : float\n Electrical conductivity of the fluid, [S/m]\n T : float, only returned if full_info == True\n Temperature at which conductivity measurement was made\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain conductivity with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n conductivity_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n conductivity for the desired chemical, and will return methods instead\n of conductivity\n full_info : bool, optional\n If True, function will return the temperature at which the conductivity\n reading was made\n\n Notes\n -----\n Only one source is available in this function. It is:\n\n * 'LANGE_COND' which is from Lange's Handbook, 'Table 8.34 Electrical \n Conductivity of Various Pure Liquids', a compilation of data in [1]_.\n\n Examples\n --------\n >>> conductivity('7732-18-5')\n (4e-06, 291.15)\n\n References\n ----------\n .. [1] Speight, James. Lange's Handbook of Chemistry. 16th edition.\n McGraw-Hill Professional, 2005."} {"_id": "q_5335", "text": "Helper method for balance_ions for the proportional family of methods. 
\n See balance_ions for a description of the methods; parameters are fairly\n obvious."} {"_id": "q_5336", "text": "r'''Method to calculate permittivity of a liquid at temperature `T`\n with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate relative permittivity, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n epsilon : float\n Relative permittivity of the liquid at T, [-]"} {"_id": "q_5337", "text": "Data is stored in the format\n InChI key\\tbool bool bool \\tsubgroup count ...\\tsubgroup count \\tsubgroup count...\n where the bools refer to whether or not the original UNIFAC, modified\n UNIFAC, and PSRK group assignments were completed correctly.\n The subgroups and their count have an indefinite length."} {"_id": "q_5338", "text": "r'''This function handles the retrieval of a chemical's dipole moment.\n Lookup is based on CASRNs. Will automatically select a data source to use\n if no Method is provided; returns None if the data is not available.\n\n Preferred source is 'CCCBDB'. Considerable variation in reported data has\n been found.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n dipole : float\n Dipole moment, [debye]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain dipole moment with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'CCCBDB', 'MULLER', or\n 'POLING'. All valid values are also held in the list `dipole_methods`.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the dipole moment for the desired chemical, and will return methods\n instead of the dipole moment\n\n Notes\n -----\n A total of three sources are available for this function. 
They are:\n\n * 'CCCBDB', a series of critically evaluated data for compounds in\n [1]_, intended for use in predictive modeling.\n * 'MULLER', a collection of data in a\n group-contribution scheme in [2]_.\n * 'POLING', in the appendix in [3]_.\n \n This function returns dipole moment in units of Debye. This is actually\n a non-SI unit; to convert to SI, multiply by 3.33564095198e-30 and its\n units will be in ampere*second*meter or equivalently and more commonly given,\n coulomb*meter. The constant is the result of 1E-21/c, where c is the\n speed of light.\n \n Examples\n --------\n >>> dipole_moment(CASRN='64-17-5')\n 1.44\n\n References\n ----------\n .. [1] NIST Computational Chemistry Comparison and Benchmark Database\n NIST Standard Reference Database Number 101 Release 17b, September 2015,\n Editor: Russell D. Johnson III http://cccbdb.nist.gov/\n .. [2] Muller, Karsten, Liudmila Mokrushina, and Wolfgang Arlt. \"Second-\n Order Group Contribution Method for the Determination of the Dipole\n Moment.\" Journal of Chemical & Engineering Data 57, no. 4 (April 12,\n 2012): 1231-36. doi:10.1021/je2013395.\n .. [3] Poling, Bruce E. The Properties of Gases and Liquids. 5th edition.\n New York: McGraw-Hill Professional, 2000."} {"_id": "q_5339", "text": "r'''This function handles the retrieval of a chemical's critical\n pressure. Lookup is based on CASRNs. Will automatically select a data\n source to use if no Method is provided; returns None if the data is not\n available.\n\n Preferred sources are 'IUPAC' for organic chemicals, and 'MATTHEWS' for \n inorganic chemicals. 
Function has data for approximately 1000 chemicals.\n\n Examples\n --------\n >>> Pc(CASRN='64-17-5')\n 6137000.0\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Pc : float\n Critical pressure, [Pa]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Pc with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'IUPAC', 'MATTHEWS', \n 'CRC', 'PSRK', 'PD', 'YAWS', and 'SURF'. All valid values are also held \n in the list `Pc_methods`.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Pc for the desired chemical, and will return methods instead of Pc\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods,\n useful for performance reasons and ignoring inaccurate methods\n\n Notes\n -----\n A total of seven sources are available for this function. They are:\n\n * 'IUPAC', a series of critically evaluated\n experimental data for organic compounds in [1]_, [2]_, [3]_, [4]_,\n [5]_, [6]_, [7]_, [8]_, [9]_, [10]_, [11]_, and [12]_.\n * 'MATTHEWS', a series of critically\n evaluated data for inorganic compounds in [13]_.\n * 'CRC', a compilation of critically\n evaluated data by the TRC as published in [14]_.\n * 'PSRK', a compilation of experimental and\n estimated data published in [15]_.\n * 'PD', an older compilation of\n data published in [16]_.\n * 'YAWS', a large compilation of data from a\n variety of sources; no data points are sourced in the work of [17]_.\n * 'SURF', an estimation method using a\n simple quadratic method for estimating Pc from Tc and Vc. This is\n ignored and not returned as a method by default.\n\n References\n ----------\n .. [1] Ambrose, Douglas, and Colin L. Young. \"Vapor-Liquid Critical\n Properties of Elements and Compounds. 1. 
An Introductory Survey.\"\n Journal of Chemical & Engineering Data 41, no. 1 (January 1, 1996):\n 154-154. doi:10.1021/je950378q.\n .. [2] Ambrose, Douglas, and Constantine Tsonopoulos. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 2. Normal Alkanes.\"\n Journal of Chemical & Engineering Data 40, no. 3 (May 1, 1995): 531-46.\n doi:10.1021/je00019a001.\n .. [3] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 3. Aromatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 40, no. 3\n (May 1, 1995): 547-58. doi:10.1021/je00019a002.\n .. [4] Gude, Michael, and Amyn S. Teja. \"Vapor-Liquid Critical Properties\n of Elements and Compounds. 4. Aliphatic Alkanols.\" Journal of Chemical\n & Engineering Data 40, no. 5 (September 1, 1995): 1025-36.\n doi:10.1021/je00021a001.\n .. [5] Daubert, Thomas E. \"Vapor-Liquid Critical Properties of Elements\n and Compounds. 5. Branched Alkanes and Cycloalkanes.\" Journal of\n Chemical & Engineering Data 41, no. 3 (January 1, 1996): 365-72.\n doi:10.1021/je9501548.\n .. [6] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 6. Unsaturated Aliphatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 41, no. 4\n (January 1, 1996): 645-56. doi:10.1021/je9501999.\n .. [7] Kudchadker, Arvind P., Douglas Ambrose, and Constantine Tsonopoulos.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 7. Oxygen\n Compounds Other Than Alkanols and Cycloalkanols.\" Journal of Chemical &\n Engineering Data 46, no. 3 (May 1, 2001): 457-79. doi:10.1021/je0001680.\n .. [8] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 8. Organic Sulfur,\n Silicon, and Tin Compounds (C + H + S, Si, and Sn).\" Journal of Chemical\n & Engineering Data 46, no. 3 (May 1, 2001): 480-85.\n doi:10.1021/je000210r.\n .. [9] Marsh, Kenneth N., Colin L. 
Young, David W. Morton, Douglas Ambrose,\n and Constantine Tsonopoulos. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 9. Organic Compounds Containing Nitrogen.\"\n Journal of Chemical & Engineering Data 51, no. 2 (March 1, 2006):\n 305-14. doi:10.1021/je050221q.\n .. [10] Marsh, Kenneth N., Alan Abramson, Douglas Ambrose, David W. Morton,\n Eugene Nikitin, Constantine Tsonopoulos, and Colin L. Young.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 10. Organic\n Compounds Containing Halogens.\" Journal of Chemical & Engineering Data\n 52, no. 5 (September 1, 2007): 1509-38. doi:10.1021/je700336g.\n .. [11] Ambrose, Douglas, Constantine Tsonopoulos, and Eugene D. Nikitin.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 11. Organic\n Compounds Containing B + O; Halogens + N, + O, + O + S, + S, + Si;\n N + O; and O + S, + Si.\" Journal of Chemical & Engineering Data 54,\n no. 3 (March 12, 2009): 669-89. doi:10.1021/je800580z.\n .. [12] Ambrose, Douglas, Constantine Tsonopoulos, Eugene D. Nikitin, David\n W. Morton, and Kenneth N. Marsh. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 12. Review of Recent Data for Hydrocarbons and\n Non-Hydrocarbons.\" Journal of Chemical & Engineering Data, October 5,\n 2015, 151005081500002. doi:10.1021/acs.jced.5b00571.\n .. [13] Mathews, Joseph F. \"Critical Constants of Inorganic Substances.\"\n Chemical Reviews 72, no. 1 (February 1, 1972): 71-100.\n doi:10.1021/cr60275a004.\n .. [14] Haynes, W.M., Thomas J. Bruno, and David R. Lide. CRC Handbook of\n Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014.\n .. [15] Horstmann, Sven, Anna Jab\u0142oniec, J\u00f6rg Krafczyk, Kai Fischer, and\n J\u00fcrgen Gmehling. \"PSRK Group Contribution Equation of State:\n Comprehensive Revision and Extension IV, Including Critical Constants\n and \u0391-Function Parameters for 1000 Components.\" Fluid Phase Equilibria\n 227, no. 2 (January 25, 2005): 157-64. 
doi:10.1016/j.fluid.2004.11.002.\n .. [16] Passut, Charles A., and Ronald P. Danner. \"Acentric Factor. A\n Valuable Correlating Parameter for the Properties of Hydrocarbons.\"\n Industrial & Engineering Chemistry Process Design and Development 12,\n no. 3 (July 1, 1973): 365\u201368. doi:10.1021/i260047a026.\n .. [17] Yaws, Carl L. Thermophysical Properties of Chemicals and\n Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional\n Publishing, 2014."} {"_id": "q_5340", "text": "r'''This function handles the retrieval of a chemical's critical\n volume. Lookup is based on CASRNs. Will automatically select a data\n source to use if no Method is provided; returns None if the data is not\n available.\n\n Preferred sources are 'IUPAC' for organic chemicals, and 'MATTHEWS' for \n inorganic chemicals. Function has data for approximately 1000 chemicals.\n\n Examples\n --------\n >>> Vc(CASRN='64-17-5')\n 0.000168\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Vc : float\n Critical volume, [m^3/mol]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Vc with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'IUPAC', 'MATTHEWS', \n 'CRC', 'PSRK', 'YAWS', and 'SURF'. All valid values are also held \n in the list `Vc_methods`.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Vc for the desired chemical, and will return methods instead of Vc\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods,\n useful for performance reasons and ignoring inaccurate methods\n\n Notes\n -----\n A total of six sources are available for this function. 
They are:\n\n * 'IUPAC', a series of critically evaluated\n experimental data for organic compounds in [1]_, [2]_, [3]_, [4]_,\n [5]_, [6]_, [7]_, [8]_, [9]_, [10]_, [11]_, and [12]_.\n * 'MATTHEWS', a series of critically\n evaluated data for inorganic compounds in [13]_.\n * 'CRC', a compilation of critically\n evaluated data by the TRC as published in [14]_.\n * 'PSRK', a compilation of experimental and\n estimated data published in [15]_.\n * 'YAWS', a large compilation of data from a\n variety of sources; no data points are sourced in the work of [16]_.\n * 'SURF', an estimation method using a\n simple quadratic method for estimating Vc from Tc and Pc. This is\n ignored and not returned as a method by default.\n\n References\n ----------\n .. [1] Ambrose, Douglas, and Colin L. Young. \"Vapor-Liquid Critical\n Properties of Elements and Compounds. 1. An Introductory Survey.\"\n Journal of Chemical & Engineering Data 41, no. 1 (January 1, 1996):\n 154-154. doi:10.1021/je950378q.\n .. [2] Ambrose, Douglas, and Constantine Tsonopoulos. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 2. Normal Alkanes.\"\n Journal of Chemical & Engineering Data 40, no. 3 (May 1, 1995): 531-46.\n doi:10.1021/je00019a001.\n .. [3] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 3. Aromatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 40, no. 3\n (May 1, 1995): 547-58. doi:10.1021/je00019a002.\n .. [4] Gude, Michael, and Amyn S. Teja. \"Vapor-Liquid Critical Properties\n of Elements and Compounds. 4. Aliphatic Alkanols.\" Journal of Chemical\n & Engineering Data 40, no. 5 (September 1, 1995): 1025-36.\n doi:10.1021/je00021a001.\n .. [5] Daubert, Thomas E. \"Vapor-Liquid Critical Properties of Elements\n and Compounds. 5. Branched Alkanes and Cycloalkanes.\" Journal of\n Chemical & Engineering Data 41, no. 3 (January 1, 1996): 365-72.\n doi:10.1021/je9501548.\n .. 
[6] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 6. Unsaturated Aliphatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 41, no. 4\n (January 1, 1996): 645-56. doi:10.1021/je9501999.\n .. [7] Kudchadker, Arvind P., Douglas Ambrose, and Constantine Tsonopoulos.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 7. Oxygen\n Compounds Other Than Alkanols and Cycloalkanols.\" Journal of Chemical &\n Engineering Data 46, no. 3 (May 1, 2001): 457-79. doi:10.1021/je0001680.\n .. [8] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 8. Organic Sulfur,\n Silicon, and Tin Compounds (C + H + S, Si, and Sn).\" Journal of Chemical\n & Engineering Data 46, no. 3 (May 1, 2001): 480-85.\n doi:10.1021/je000210r.\n .. [9] Marsh, Kenneth N., Colin L. Young, David W. Morton, Douglas Ambrose,\n and Constantine Tsonopoulos. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 9. Organic Compounds Containing Nitrogen.\"\n Journal of Chemical & Engineering Data 51, no. 2 (March 1, 2006):\n 305-14. doi:10.1021/je050221q.\n .. [10] Marsh, Kenneth N., Alan Abramson, Douglas Ambrose, David W. Morton,\n Eugene Nikitin, Constantine Tsonopoulos, and Colin L. Young.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 10. Organic\n Compounds Containing Halogens.\" Journal of Chemical & Engineering Data\n 52, no. 5 (September 1, 2007): 1509-38. doi:10.1021/je700336g.\n .. [11] Ambrose, Douglas, Constantine Tsonopoulos, and Eugene D. Nikitin.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 11. Organic\n Compounds Containing B + O; Halogens + N, + O, + O + S, + S, + Si;\n N + O; and O + S, + Si.\" Journal of Chemical & Engineering Data 54,\n no. 3 (March 12, 2009): 669-89. doi:10.1021/je800580z.\n .. [12] Ambrose, Douglas, Constantine Tsonopoulos, Eugene D. Nikitin, David\n W. Morton, and Kenneth N. Marsh. 
\"Vapor-Liquid Critical Properties of\n Elements and Compounds. 12. Review of Recent Data for Hydrocarbons and\n Non-Hydrocarbons.\" Journal of Chemical & Engineering Data, October 5,\n 2015, 151005081500002. doi:10.1021/acs.jced.5b00571.\n .. [13] Mathews, Joseph F. \"Critical Constants of Inorganic Substances.\"\n Chemical Reviews 72, no. 1 (February 1, 1972): 71-100.\n doi:10.1021/cr60275a004.\n .. [14] Haynes, W.M., Thomas J. Bruno, and David R. Lide. CRC Handbook of\n Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014.\n .. [15] Horstmann, Sven, Anna Jab\u0142oniec, J\u00f6rg Krafczyk, Kai Fischer, and\n J\u00fcrgen Gmehling. \"PSRK Group Contribution Equation of State:\n Comprehensive Revision and Extension IV, Including Critical Constants\n and \u0391-Function Parameters for 1000 Components.\" Fluid Phase Equilibria\n 227, no. 2 (January 25, 2005): 157-64. doi:10.1016/j.fluid.2004.11.002.\n .. [16] Yaws, Carl L. Thermophysical Properties of Chemicals and\n Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional\n Publishing, 2014."} {"_id": "q_5341", "text": "r'''This function handles the retrieval of a chemical's critical\n compressibility. Lookup is based on CASRNs. Will automatically select a\n data source to use if no Method is provided; returns None if the data is\n not available.\n\n Preferred sources are 'IUPAC' for organic chemicals, and 'MATTHEWS' for \n inorganic chemicals. Function has data for approximately 1000 chemicals.\n\n Examples\n --------\n >>> Zc(CASRN='64-17-5')\n 0.24100000000000002\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Zc : float\n Critical compressibility, [-]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Zc with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'IUPAC', 'MATTHEWS', \n 'CRC', 'PSRK', 'YAWS', and 'COMBINED'. 
All valid values are also held \n in `Zc_methods`.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Zc for the desired chemical, and will return methods instead of Zc\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods,\n useful for performance reasons and ignoring inaccurate methods\n\n Notes\n -----\n A total of five sources are available for this function. They are:\n\n * 'IUPAC', a series of critically evaluated\n experimental data for organic compounds in [1]_, [2]_, [3]_, [4]_,\n [5]_, [6]_, [7]_, [8]_, [9]_, [10]_, [11]_, and [12]_.\n * 'MATTHEWS', a series of critically\n evaluated data for inorganic compounds in [13]_.\n * 'CRC', a compilation of critically\n evaluated data by the TRC as published in [14]_.\n * 'PSRK', a compilation of experimental and\n estimated data published in [15]_.\n * 'YAWS', a large compilation of data from a\n variety of sources; no data points are sourced in the work of [16]_.\n\n References\n ----------\n .. [1] Ambrose, Douglas, and Colin L. Young. \"Vapor-Liquid Critical\n Properties of Elements and Compounds. 1. An Introductory Survey.\"\n Journal of Chemical & Engineering Data 41, no. 1 (January 1, 1996):\n 154-154. doi:10.1021/je950378q.\n .. [2] Ambrose, Douglas, and Constantine Tsonopoulos. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 2. Normal Alkanes.\"\n Journal of Chemical & Engineering Data 40, no. 3 (May 1, 1995): 531-46.\n doi:10.1021/je00019a001.\n .. [3] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 3. Aromatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 40, no. 3\n (May 1, 1995): 547-58. doi:10.1021/je00019a002.\n .. [4] Gude, Michael, and Amyn S. Teja. \"Vapor-Liquid Critical Properties\n of Elements and Compounds. 4. Aliphatic Alkanols.\" Journal of Chemical\n & Engineering Data 40, no. 
5 (September 1, 1995): 1025-36.\n doi:10.1021/je00021a001.\n .. [5] Daubert, Thomas E. \"Vapor-Liquid Critical Properties of Elements\n and Compounds. 5. Branched Alkanes and Cycloalkanes.\" Journal of\n Chemical & Engineering Data 41, no. 3 (January 1, 1996): 365-72.\n doi:10.1021/je9501548.\n .. [6] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 6. Unsaturated Aliphatic\n Hydrocarbons.\" Journal of Chemical & Engineering Data 41, no. 4\n (January 1, 1996): 645-56. doi:10.1021/je9501999.\n .. [7] Kudchadker, Arvind P., Douglas Ambrose, and Constantine Tsonopoulos.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 7. Oxygen\n Compounds Other Than Alkanols and Cycloalkanols.\" Journal of Chemical &\n Engineering Data 46, no. 3 (May 1, 2001): 457-79. doi:10.1021/je0001680.\n .. [8] Tsonopoulos, Constantine, and Douglas Ambrose. \"Vapor-Liquid\n Critical Properties of Elements and Compounds. 8. Organic Sulfur,\n Silicon, and Tin Compounds (C + H + S, Si, and Sn).\" Journal of Chemical\n & Engineering Data 46, no. 3 (May 1, 2001): 480-85.\n doi:10.1021/je000210r.\n .. [9] Marsh, Kenneth N., Colin L. Young, David W. Morton, Douglas Ambrose,\n and Constantine Tsonopoulos. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 9. Organic Compounds Containing Nitrogen.\"\n Journal of Chemical & Engineering Data 51, no. 2 (March 1, 2006):\n 305-14. doi:10.1021/je050221q.\n .. [10] Marsh, Kenneth N., Alan Abramson, Douglas Ambrose, David W. Morton,\n Eugene Nikitin, Constantine Tsonopoulos, and Colin L. Young.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 10. Organic\n Compounds Containing Halogens.\" Journal of Chemical & Engineering Data\n 52, no. 5 (September 1, 2007): 1509-38. doi:10.1021/je700336g.\n .. [11] Ambrose, Douglas, Constantine Tsonopoulos, and Eugene D. Nikitin.\n \"Vapor-Liquid Critical Properties of Elements and Compounds. 11. 
Organic\n Compounds Containing B + O; Halogens + N, + O, + O + S, + S, + Si;\n N + O; and O + S, + Si.\" Journal of Chemical & Engineering Data 54,\n no. 3 (March 12, 2009): 669-89. doi:10.1021/je800580z.\n .. [12] Ambrose, Douglas, Constantine Tsonopoulos, Eugene D. Nikitin, David\n W. Morton, and Kenneth N. Marsh. \"Vapor-Liquid Critical Properties of\n Elements and Compounds. 12. Review of Recent Data for Hydrocarbons and\n Non-Hydrocarbons.\" Journal of Chemical & Engineering Data, October 5,\n 2015, 151005081500002. doi:10.1021/acs.jced.5b00571.\n .. [13] Mathews, Joseph F. \"Critical Constants of Inorganic Substances.\"\n Chemical Reviews 72, no. 1 (February 1, 1972): 71-100.\n doi:10.1021/cr60275a004.\n .. [14] Haynes, W.M., Thomas J. Bruno, and David R. Lide. CRC Handbook of\n Chemistry and Physics, 95E. Boca Raton, FL: CRC press, 2014.\n .. [15] Horstmann, Sven, Anna Jab\u0142oniec, J\u00f6rg Krafczyk, Kai Fischer, and\n J\u00fcrgen Gmehling. \"PSRK Group Contribution Equation of State:\n Comprehensive Revision and Extension IV, Including Critical Constants\n and \u0391-Function Parameters for 1000 Components.\" Fluid Phase Equilibria\n 227, no. 2 (January 25, 2005): 157-64. doi:10.1016/j.fluid.2004.11.002.\n .. [16] Yaws, Carl L. Thermophysical Properties of Chemicals and\n Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional\n Publishing, 2014."} {"_id": "q_5342", "text": "r'''Function for calculating a critical property of a substance from its\n other two critical properties. 
Calls functions Ihmels, Meissner, and\n Grigoras, each of which uses a general 'Critical surface' type of equation.\n Limited accuracy is expected due to very limited theoretical backing.\n\n Parameters\n ----------\n Tc : float\n Critical temperature of fluid (optional) [K]\n Pc : float\n Critical pressure of fluid (optional) [Pa]\n Vc : float\n Critical volume of fluid (optional) [m^3/mol]\n AvailableMethods : bool\n Request available methods for given parameters\n Method : string\n Request that the calculation use the specified method\n\n Returns\n -------\n Tc, Pc or Vc : float\n Critical property of fluid [K], [Pa], or [m^3/mol]\n\n Notes\n -----\n\n Examples\n --------\n Decamethyltetrasiloxane [141-62-8]\n\n >>> critical_surface(Tc=599.4, Pc=1.19E6, Method='IHMELS')\n 0.0010927333333333334"} {"_id": "q_5343", "text": "r'''Function for calculating a critical property of a substance from its\n other two critical properties, but retrieving the actual other critical\n values for convenient calculation.\n Calls functions Ihmels, Meissner, and\n Grigoras, each of which uses a general 'Critical surface' type of equation.\n Limited accuracy is expected due to very limited theoretical backing.\n\n Parameters\n ----------\n CASRN : string\n The CAS number of the desired chemical\n T : bool\n Estimate critical temperature\n P : bool\n Estimate critical pressure\n V : bool\n Estimate critical volume\n\n Returns\n -------\n Tc, Pc or Vc : float\n Critical property of fluid [K], [Pa], or [m^3/mol]\n\n Notes\n -----\n Avoids recursion only by eliminating the None and critical surface options\n for calculating each critical property. 
So long as it never calls itself.\n Note that when used by Tc, Pc or Vc, this function results in said function\n calling the other two functions (once to determine available methods, and\n again with a method specified).\n\n Examples\n --------\n >>> # Decamethyltetrasiloxane [141-62-8]\n >>> third_property('141-62-8', V=True)\n 0.0010920041152263375\n\n >>> # Succinic acid 110-15-6\n >>> third_property('110-15-6', P=True)\n 6095016.233766234"} {"_id": "q_5344", "text": "Checks if a CAS number is valid. Returns False if the parser cannot \n parse the given string.\n\n Parameters\n ----------\n CASRN : string\n A three-piece, dash-separated set of numbers\n\n Returns\n -------\n result : bool\n True if the CASRN is valid; False is also returned if parsing fails.\n\n Notes\n -----\n The check method is that of the Chemical Abstracts Service. However, no lookup\n to their service is performed; therefore, this function cannot detect\n false positives.\n\n Function also does not support additional separators, apart from '-'.\n \n CAS numbers up to the series 1 XXX XXX-XX-X are now being issued.\n \n A long can hold CAS numbers up to 2 147 483-64-7\n\n Examples\n --------\n >>> checkCAS('7732-18-5')\n True\n >>> checkCAS('77332-18-5')\n False"} {"_id": "q_5345", "text": "Charge of the species as an integer. Computed as a property as most\n species do not have a charge and so storing it would be a waste of \n memory."} {"_id": "q_5346", "text": "Loads a file with newline-separated integers representing which \n chemical should be kept in memory; ones not included are ignored."} {"_id": "q_5347", "text": "r'''This function handles the retrieval or calculation of a chemical's\n Stockmayer parameter. 
Values are available from one source with lookup\n based on CASRNs, or can be estimated from 7 CSP methods.\n Will automatically select a data source to use if no Method is provided;\n returns None if the data is not available.\n\n Preferred sources are 'Magalh\u00e3es, Lito, Da Silva, and Silva (2013)' for\n common chemicals which had values listed in that source, and the CSP method\n `Tee, Gotoh, and Stewart CSP with Tc, omega (1966)` for chemicals which\n don't.\n\n Examples\n --------\n >>> Stockmayer(CASRN='64-17-5')\n 1291.41\n\n Parameters\n ----------\n Tm : float, optional\n Melting temperature of fluid [K]\n Tb : float, optional\n Boiling temperature of fluid [K]\n Tc : float, optional\n Critical temperature, [K]\n Zc : float, optional\n Critical compressibility, [-]\n omega : float, optional\n Acentric factor of compound, [-]\n CASRN : string, optional\n CASRN [-]\n\n Returns\n -------\n epsilon_k : float\n Lennard-Jones depth of potential-energy minimum over k, [K]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain epsilon with the given\n inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n Stockmayer_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n epsilon for the desired chemical, and will return methods instead of\n epsilon\n\n Notes\n -----\n These values are somewhat rough, as they attempt to pigeonhole a chemical\n into L-J behavior.\n\n The tabulated data is from [2]_, for 322 chemicals.\n\n References\n ----------\n .. [1] Bird, R. Byron, Warren E. Stewart, and Edwin N. Lightfoot.\n Transport Phenomena, Revised 2nd Edition. New York:\n John Wiley & Sons, Inc., 2006\n .. [2] Magalh\u00e3es, Ana L., Patr\u00edcia F. Lito, Francisco A. Da Silva, and\n Carlos M. Silva. 
\"Simple and Accurate Correlations for Diffusion\n Coefficients of Solutes in Liquids and Supercritical Fluids over Wide\n Ranges of Temperature and Density.\" The Journal of Supercritical Fluids\n 76 (April 2013): 94-114. doi:10.1016/j.supflu.2013.02.002."} {"_id": "q_5348", "text": "r'''This function handles the retrieval or calculation a chemical's\n L-J molecular diameter. Values are available from one source with lookup\n based on CASRNs, or can be estimated from 9 CSP methods.\n Will automatically select a data source to use if no Method is provided;\n returns None if the data is not available.\n\n Prefered sources are 'Magalh\u00e3es, Lito, Da Silva, and Silva (2013)' for\n common chemicals which had valies listed in that source, and the CSP method\n `Tee, Gotoh, and Stewart CSP with Tc, Pc, omega (1966)` for chemicals which\n don't.\n\n Examples\n --------\n >>> molecular_diameter(CASRN='64-17-5')\n 4.23738\n\n Parameters\n ----------\n Tc : float, optional\n Critical temperature, [K]\n Pc : float, optional\n Critical pressure, [Pa]\n Vc : float, optional\n Critical volume, [m^3/mol]\n Zc : float, optional\n Critical compressibility, [-]\n omega : float, optional\n Acentric factor of compound, [-]\n Vm : float, optional\n Molar volume of liquid at the melting point of the fluid [K]\n Vb : float, optional\n Molar volume of liquid at the boiling point of the fluid [K]\n CASRN : string, optional\n CASRN [-]\n\n Returns\n -------\n sigma : float\n Lennard-Jones molecular diameter, [Angstrom]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain epsilon with the given\n inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n molecular_diameter_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n sigma for the desired chemical, and will return methods instead of\n 
sigma\n\n Notes\n -----\n These values are somewhat rough, as they attempt to pigeonhole a chemical\n into L-J behavior.\n\n The tabulated data is from [2]_, for 322 chemicals.\n\n References\n ----------\n .. [1] Bird, R. Byron, Warren E. Stewart, and Edwin N. Lightfoot.\n Transport Phenomena, Revised 2nd Edition. New York:\n John Wiley & Sons, Inc., 2006\n .. [2] Magalh\u00e3es, Ana L., Patr\u00edcia F. Lito, Francisco A. Da Silva, and\n Carlos M. Silva. \"Simple and Accurate Correlations for Diffusion\n Coefficients of Solutes in Liquids and Supercritical Fluids over Wide\n Ranges of Temperature and Density.\" The Journal of Supercritical Fluids\n 76 (April 2013): 94-114. doi:10.1016/j.supflu.2013.02.002."} {"_id": "q_5349", "text": "r'''This function handles the retrieval of a chemical's acentric factor,\n `omega`, or its calculation from correlations or directly through the\n definition of acentric factor if possible. Requires a known boiling point,\n critical temperature and pressure for use of the correlations. Requires\n accurate vapor pressure data for direct calculation.\n\n Will automatically select a method to use if no Method is provided;\n returns None if the data is not available and cannot be calculated.\n\n .. math::\n \\omega \\equiv -\\log_{10}\\left[\\lim_{T/T_c=0.7}(P^{sat}/P_c)\\right]-1.0\n\n Examples\n --------\n >>> omega(CASRN='64-17-5')\n 0.635\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n omega : float\n Acentric factor of compound\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain omega with the given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Accepted methods are 'PSRK', 'PD', 'YAWS', \n 'LK', and 'DEFINITION'. 
All valid values are also held in the list\n omega_methods.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n omega for the desired chemical, and will return methods instead of\n omega\n IgnoreMethods : list, optional\n A list of methods to ignore in obtaining the full list of methods,\n useful for performance reasons and for ignoring inaccurate methods\n\n Notes\n -----\n A total of five sources are available for this function. They are:\n\n * 'PSRK', a compilation of experimental and estimated data published \n in the Appendix of [2]_, the fourth revision of the PSRK model.\n * 'PD', an older compilation of\n data published in (Passut & Danner, 1973) [3]_.\n * 'YAWS', a large compilation of data from a\n variety of sources; no data points are sourced in the work of [4]_.\n * 'LK', an estimation method for hydrocarbons.\n * 'DEFINITION', based on the definition of omega as\n presented in [1]_, using vapor pressure data.\n\n References\n ----------\n .. [1] Pitzer, K. S., D. Z. Lippmann, R. F. Curl, C. M. Huggins, and\n D. E. Petersen: The Volumetric and Thermodynamic Properties of Fluids.\n II. Compressibility Factor, Vapor Pressure and Entropy of Vaporization.\n J. Am. Chem. Soc., 77: 3433 (1955).\n .. [2] Horstmann, Sven, Anna Jab\u0142oniec, J\u00f6rg Krafczyk, Kai Fischer, and\n J\u00fcrgen Gmehling. \"PSRK Group Contribution Equation of State:\n Comprehensive Revision and Extension IV, Including Critical Constants\n and \u0391-Function Parameters for 1000 Components.\" Fluid Phase Equilibria\n 227, no. 2 (January 25, 2005): 157-64. doi:10.1016/j.fluid.2004.11.002.\n .. [3] Passut, Charles A., and Ronald P. Danner. \"Acentric Factor. A\n Valuable Correlating Parameter for the Properties of Hydrocarbons.\"\n Industrial & Engineering Chemistry Process Design and Development 12,\n no. 3 (July 1, 1973): 365-68. doi:10.1021/i260047a026.\n .. [4] Yaws, Carl L. 
Thermophysical Properties of Chemicals and\n Hydrocarbons, Second Edition. Amsterdam Boston: Gulf Professional\n Publishing, 2014."} {"_id": "q_5350", "text": "r'''This function handles the calculation of a chemical's Stiel Polar\n factor, directly through the definition of Stiel-polar factor if possible.\n Requires Tc, Pc, acentric factor, and a vapor pressure datum at Tr=0.6.\n\n Will automatically select a method to use if no Method is provided;\n returns None if the data is not available and cannot be calculated.\n\n .. math::\n x = \\log P_r|_{T_r=0.6} + 1.70 \\omega + 1.552\n\n Parameters\n ----------\n Tc : float\n Critical temperature of fluid [K]\n Pc : float\n Critical pressure of fluid [Pa]\n omega : float\n Acentric factor of the fluid [-]\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n factor : float\n Stiel polar factor of compound\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Stiel polar factor with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n The method name to use. Only 'DEFINITION' is accepted so far.\n All valid values are also held in the list Stiel_polar_methods.\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Stiel-polar factor for the desired chemical, and will return methods\n instead of stiel-polar factor\n\n Notes\n -----\n Only one source is available for this function. It is:\n\n * 'DEFINITION', based on the definition of\n Stiel Polar Factor presented in [1]_, using vapor pressure data.\n\n A few points have also been published in [2]_, which may be used for\n comparison. Currently this is only used for a surface tension correlation.\n\n Examples\n --------\n >>> StielPolar(647.3, 22048321.0, 0.344, CASRN='7732-18-5')\n 0.024581140348734376\n\n References\n ----------\n .. [1] Halm, Roland L., and Leonard I. Stiel. 
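The 'DEFINITION' method in the `omega` docstring above is a direct application of the quoted Pitzer equation. A minimal sketch under the assumption that the reduced vapor pressure at Tr = 0.7 is already known (the name `acentric_factor` is illustrative, not the library's `omega` function):

```python
from math import log10

def acentric_factor(Psat_at_Tr07, Pc):
    # omega = -log10(Psat/Pc at Tr = 0.7) - 1.0  (Pitzer's definition)
    return -log10(Psat_at_Tr07 / Pc) - 1.0
```

By construction, a fluid whose reduced vapor pressure at Tr = 0.7 is exactly 0.1 has omega = 0, the normalization Pitzer chose for simple fluids such as argon.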
\"A Fourth Parameter for the\n Vapor Pressure and Entropy of Vaporization of Polar Fluids.\" AIChE\n Journal 13, no. 2 (1967): 351-355. doi:10.1002/aic.690130228.\n .. [2] D, Kukoljac Milo\u0161, and Grozdani\u0107 Du\u0161an K. \"New Values of the\n Polarity Factor.\" Journal of the Serbian Chemical Society 65, no. 12\n (January 1, 2000). http://www.shd.org.rs/JSCS/Vol65/No12-Pdf/JSCS12-07.pdf"} {"_id": "q_5351", "text": "r'''Round a number to the nearest whole number. If the number is exactly\n between two numbers, round to the even whole number. Used by\n `viscosity_index`.\n\n Parameters\n ----------\n i : float\n Number, [-]\n\n Returns\n -------\n i : int\n Rounded number, [-]\n\n Notes\n -----\n Should never run with inputs from a practical function, as numbers on\n computers aren't normally exactly between two numbers.\n\n Examples\n --------\n >>> _round_whole_even(116.5)\n 116"} {"_id": "q_5352", "text": "r'''Method to calculate low-pressure liquid viscosity at temperature\n `T` with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate viscosity, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n mu : float\n Viscosity of the liquid at T and a low pressure, [Pa*s]"} {"_id": "q_5353", "text": "r'''Method to calculate pressure-dependent liquid viscosity at\n temperature `T` and pressure `P` with a given method.\n\n This method has no exception handling; see `TP_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate viscosity, [K]\n P : float\n Pressure at which to calculate viscosity, [Pa]\n method : str\n Name of the method to use\n\n Returns\n -------\n mu : float\n Viscosity of the liquid at T and P, [Pa*s]"} {"_id": "q_5354", "text": "r'''Method to calculate viscosity of a liquid mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight 
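The `StielPolar` 'DEFINITION' method above likewise follows directly from the quoted equation. A sketch assuming the vapor pressure at Tr = 0.6 is supplied by the caller (the name `stiel_polar_factor` is illustrative):

```python
from math import log10

def stiel_polar_factor(Psat_at_Tr06, Pc, omega):
    # x = log10(Pr at Tr = 0.6) + 1.70*omega + 1.552
    return log10(Psat_at_Tr06 / Pc) + 1.70 * omega + 1.552
```

A nonpolar fluid that follows the simple-fluid correlation exactly has x = 0; polar fluids deviate from it.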
fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n mu : float\n Viscosity of the liquid mixture, [Pa*s]"} {"_id": "q_5355", "text": "r'''Method to calculate viscosity of a gas mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n mu : float\n Viscosity of gas mixture, [Pa*s]"} {"_id": "q_5356", "text": "This function handles the retrieval of Time-Weighted Average limits on worker\n exposure to dangerous chemicals.\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface.\n\n >>> TWA('98-00-0')\n (10.0, 'ppm')\n >>> TWA('1303-00-0')\n (5.0742430905659505e-05, 'ppm')\n >>> TWA('7782-42-5', AvailableMethods=True)\n ['Ontario Limits', 'None']"} {"_id": "q_5357", "text": "This function handles the retrieval of Ceiling limits on worker\n exposure to dangerous chemicals.\n\n This API is considered experimental, and is expected to be removed in a\n future release in favor of a more complete object-oriented interface.\n\n >>> Ceiling('75-07-0')\n (25.0, 'ppm')\n 
>>> Ceiling('1395-21-7')\n (6e-05, 'mg/m^3')\n >>> Ceiling('7572-29-4', AvailableMethods=True)\n ['Ontario Limits', 'None']"} {"_id": "q_5358", "text": "r'''Looks up if a chemical is listed as a carcinogen or not according to\n either a specifc method or with all methods.\n\n Returns either the status as a string for a specified method, or the\n status of the chemical in all available data sources, in the format\n {source: status}.\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n status : str or dict\n Carcinogen status information [-]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain carcinogen status with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n Carcinogen_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n if a chemical is listed as carcinogenic, and will return methods\n instead of the status\n\n Notes\n -----\n Supported methods are:\n\n * **IARC**: International Agency for Research on Cancer, [1]_. As\n extracted with a last update of February 22, 2016. Has listing\n information of 843 chemicals with CAS numbers. Chemicals without\n CAS numbers not included here. If two listings for the same CAS\n were available, that closest to the CAS number was used. If two\n listings were available published at different times, the latest\n value was used. All else equal, the most pessimistic value was used.\n * **NTP**: National Toxicology Program, [2]_. Has data on 226\n chemicals.\n\n Examples\n --------\n >>> Carcinogen('61-82-5')\n {'National Toxicology Program 13th Report on Carcinogens': 'Reasonably Anticipated', 'International Agency for Research on Cancer': 'Not classifiable as to its carcinogenicity to humans (3)'}\n\n References\n ----------\n .. [1] International Agency for Research on Cancer. 
Agents Classified by\n the IARC Monographs, Volumes 1-115. Lyon, France: IARC; 2016. Available\n from: http://monographs.iarc.fr/ENG/Classification/\n .. [2] NTP (National Toxicology Program). 2014. Report on Carcinogens,\n Thirteenth Edition. Research Triangle Park, NC: U.S. Department of\n Health and Human Services, Public Health Service.\n http://ntp.niehs.nih.gov/pubhealth/roc/roc13/"} {"_id": "q_5359", "text": "r'''This function handles the retrieval or calculation of a chemical's\n autoignition temperature. Lookup is based on CASRNs. No predictive methods\n are currently implemented. Will automatically select a data source to use\n if no Method is provided; returns None if the data is not available.\n\n Preferred source is 'IEC 60079-20-1 (2010)' [1]_, with the secondary source\n 'NFPA 497 (2008)' [2]_ having very similar data.\n\n Examples\n --------\n >>> Tautoignition(CASRN='71-43-2')\n 771.15\n\n Parameters\n ----------\n CASRN : string\n CASRN [-]\n\n Returns\n -------\n Tautoignition : float\n Autoignition point of the chemical, [K]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain Tautoignition with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n Tautoignition_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n Tautoignition for the desired chemical, and will return methods\n instead of Tautoignition\n\n Notes\n -----\n\n References\n ----------\n .. [1] IEC. \u201cIEC 60079-20-1:2010 Explosive atmospheres - Part 20-1:\n Material characteristics for gas and vapour classification - Test\n methods and data.\u201d https://webstore.iec.ch/publication/635. See also\n https://law.resource.org/pub/in/bis/S05/is.iec.60079.20.1.2010.pdf\n .. [2] National Fire Protection Association. 
NFPA 497: Recommended\n Practice for the Classification of Flammable Liquids, Gases, or Vapors\n and of Hazardous. NFPA, 2008."} {"_id": "q_5360", "text": "r'''This function handles the retrieval or calculation of a chemical's\n Lower Flammability Limit. Lookup is based on CASRNs. Two predictive methods\n are currently implemented. Will automatically select a data source to use\n if no Method is provided; returns None if the data is not available.\n\n Prefered source is 'IEC 60079-20-1 (2010)' [1]_, with the secondary source\n 'NFPA 497 (2008)' [2]_ having very similar data. If the heat of combustion\n is provided, the estimation method `Suzuki_LFL` can be used. If the atoms\n of the molecule are available, the method `Crowl_Louvar_LFL` can be used.\n\n Examples\n --------\n >>> LFL(CASRN='71-43-2')\n 0.012\n\n Parameters\n ----------\n Hc : float, optional\n Heat of combustion of gas [J/mol]\n atoms : dict, optional\n Dictionary of atoms and atom counts\n CASRN : string, optional\n CASRN [-]\n\n Returns\n -------\n LFL : float\n Lower flammability limit of the gas in an atmosphere at STP, [mole fraction]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain LFL with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n LFL_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the Lower Flammability Limit for the desired chemical, and will return\n methods instead of Lower Flammability Limit.\n\n Notes\n -----\n\n References\n ----------\n .. [1] IEC. \u201cIEC 60079-20-1:2010 Explosive atmospheres - Part 20-1:\n Material characteristics for gas and vapour classification - Test\n methods and data.\u201d https://webstore.iec.ch/publication/635. See also\n https://law.resource.org/pub/in/bis/S05/is.iec.60079.20.1.2010.pdf\n .. 
[2] National Fire Protection Association. NFPA 497: Recommended\n Practice for the Classification of Flammable Liquids, Gases, or Vapors\n and of Hazardous. NFPA, 2008."} {"_id": "q_5361", "text": "r'''This function handles the retrieval or calculation of a chemical's\n Upper Flammability Limit. Lookup is based on CASRNs. Two predictive methods\n are currently implemented. Will automatically select a data source to use\n if no Method is provided; returns None if the data is not available.\n\n Prefered source is 'IEC 60079-20-1 (2010)' [1]_, with the secondary source\n 'NFPA 497 (2008)' [2]_ having very similar data. If the heat of combustion\n is provided, the estimation method `Suzuki_UFL` can be used. If the atoms\n of the molecule are available, the method `Crowl_Louvar_UFL` can be used.\n\n Examples\n --------\n >>> UFL(CASRN='71-43-2')\n 0.086\n\n Parameters\n ----------\n Hc : float, optional\n Heat of combustion of gas [J/mol]\n atoms : dict, optional\n Dictionary of atoms and atom counts\n CASRN : string, optional\n CASRN [-]\n\n Returns\n -------\n UFL : float\n Upper flammability limit of the gas in an atmosphere at STP, [mole fraction]\n methods : list, only returned if AvailableMethods == True\n List of methods which can be used to obtain UFL with the\n given inputs\n\n Other Parameters\n ----------------\n Method : string, optional\n A string for the method name to use, as defined by constants in\n UFL_methods\n AvailableMethods : bool, optional\n If True, function will determine which methods can be used to obtain\n the Upper Flammability Limit for the desired chemical, and will return\n methods instead of Upper Flammability Limit.\n\n Notes\n -----\n\n References\n ----------\n .. [1] IEC. \u201cIEC 60079-20-1:2010 Explosive atmospheres - Part 20-1:\n Material characteristics for gas and vapour classification - Test\n methods and data.\u201d https://webstore.iec.ch/publication/635. 
See also\n https://law.resource.org/pub/in/bis/S05/is.iec.60079.20.1.2010.pdf\n .. [2] National Fire Protection Association. NFPA 497: Recommended\n Practice for the Classification of Flammable Liquids, Gases, or Vapors\n and of Hazardous. NFPA, 2008."} {"_id": "q_5362", "text": "r'''Interface for drawing a 2D image of all the molecules in the\n mixture. Requires an HTML5 browser, and the libraries RDKit and\n IPython. An exception is raised if either of these libraries is\n absent.\n\n Parameters\n ----------\n Hs : bool\n Whether or not to show hydrogen\n\n Examples\n --------\n Mixture(['natural gas']).draw_2d()"} {"_id": "q_5363", "text": "r'''Calculate a real fluid's Joule Thomson coefficient. The required \n derivative should be calculated with an equation of state, and `Cp` is the\n real fluid versions. This can either be calculated with `dV_dT` directly, \n or with `beta` if it is already known.\n\n .. math::\n \\mu_{JT} = \\left(\\frac{\\partial T}{\\partial P}\\right)_H = \\frac{1}{C_p}\n \\left[T \\left(\\frac{\\partial V}{\\partial T}\\right)_P - V\\right]\n = \\frac{V}{C_p}\\left(\\beta T-1\\right)\n \n Parameters\n ----------\n T : float\n Temperature of fluid, [K]\n V : float\n Molar volume of fluid, [m^3/mol]\n Cp : float\n Real fluid heat capacity at constant pressure, [J/mol/K]\n dV_dT : float, optional\n Derivative of `V` with respect to `T`, [m^3/mol/K]\n beta : float, optional\n Isobaric coefficient of a thermal expansion, [1/K]\n\n Returns\n -------\n mu_JT : float\n Joule-Thomson coefficient [K/Pa]\n \n Examples\n --------\n Example from [2]_:\n \n >>> Joule_Thomson(T=390, V=0.00229754, Cp=153.235, dV_dT=1.226396e-05)\n 1.621956080529905e-05\n\n References\n ----------\n .. [1] Walas, Stanley M. Phase Equilibria in Chemical Engineering. \n Butterworth-Heinemann, 1985.\n .. [2] Pratt, R. M. \"Thermodynamic Properties Involving Derivatives: Using \n the Peng-Robinson Equation of State.\" Chemical Engineering Education 35,\n no. 
2 (March 1, 2001): 112-115."} {"_id": "q_5364", "text": "r'''Converts a list of mole fractions to mass fractions. Requires molecular\n weights for all species.\n\n .. math::\n w_i = \\frac{z_i MW_i}{MW_{avg}}\n\n MW_{avg} = \\sum_i z_i MW_i\n\n Parameters\n ----------\n zs : iterable\n Mole fractions [-]\n MWs : iterable\n Molecular weights [g/mol]\n\n Returns\n -------\n ws : iterable\n Mass fractions [-]\n\n Notes\n -----\n Does not check that the sums add to one. Does not check that inputs are of\n the same length.\n\n Examples\n --------\n >>> zs_to_ws([0.5, 0.5], [10, 20])\n [0.3333333333333333, 0.6666666666666666]"} {"_id": "q_5365", "text": "r'''Converts a list of mole fractions to volume fractions. Requires molar\n volumes for all species.\n\n .. math::\n \\text{Vf}_i = \\frac{z_i V_{m,i}}{\\sum_i z_i V_{m,i}}\n\n Parameters\n ----------\n zs : iterable\n Mole fractions [-]\n VMs : iterable\n Molar volumes of species [m^3/mol]\n\n Returns\n -------\n Vfs : list\n Molar volume fractions [-]\n\n Notes\n -----\n Does not check that the sums add to one. Does not check that inputs are of\n the same length.\n\n Molar volumes are specified in terms of pure components only. Function\n works with any phase.\n\n Examples\n --------\n Acetone and benzene example\n\n >>> zs_to_Vfs([0.637, 0.363], [8.0234e-05, 9.543e-05])\n [0.5960229712956298, 0.4039770287043703]"} {"_id": "q_5366", "text": "r'''Checks inputs for suitability of use by a mixing rule which requires\n all inputs to be of the same length and non-None. 
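The `Joule_Thomson` docstring above gives two equivalent forms of the coefficient. A minimal standalone sketch of both branches (not the library function itself):

```python
def joule_thomson(T, V, Cp, dV_dT=None, beta=None):
    # mu_JT = (1/Cp)*(T*dV_dT - V) when dV/dT is known,
    # or equivalently (V/Cp)*(beta*T - 1) from the isobaric
    # thermal-expansion coefficient beta = (1/V)*(dV/dT).
    if dV_dT is not None:
        return (T * dV_dT - V) / Cp
    return V / Cp * (beta * T - 1.0)
```

With the docstring's example inputs (T=390 K, V=0.00229754 m^3/mol, Cp=153.235 J/mol/K, dV_dT=1.226396e-05 m^3/mol/K), both branches reproduce the quoted value of about 1.62e-05 K/Pa.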
A number of variations\n were attempted for this function; this was found to be the quickest.\n\n Parameters\n ----------\n all_inputs : array-like of array-like\n list of all the lists of inputs, [-]\n length : int, optional\n Length of the desired inputs, [-]\n\n Returns\n -------\n False/True : bool\n Returns True only if all inputs are the same length (or length `length`)\n and none of the inputs contain None [-]\n\n Notes\n -----\n Does not check for nan values.\n\n Examples\n --------\n >>> none_and_length_check(([1, 1], [1, 1], [1, 30], [10,0]), length=2)\n True"} {"_id": "q_5367", "text": "r'''Simple function calculates a property based on weighted averages of\n logarithmic properties.\n\n .. math::\n y = \\sum_i \\text{frac}_i \\cdot \\log(\\text{prop}_i)\n\n Parameters\n ----------\n fracs : array-like\n Fractions of a mixture\n props: array-like\n Properties\n\n Returns\n -------\n prop : value\n Calculated property\n\n Notes\n -----\n Does not work on negative values.\n Returns None if any fractions or properties are missing or are not of the\n same length.\n\n Examples\n --------\n >>> mixing_logarithmic([0.1, 0.9], [0.01, 0.02])\n 0.01866065983073615"} {"_id": "q_5368", "text": "r'''Determines which phase's property should be set as a default, given\n the phase a chemical is, and the property values of various phases. For the\n case of liquid-gas phase, returns None. 
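Note that the doctest value in `mixing_logarithmic` above (0.018660...) corresponds to the exponentiated weighted sum of logs, i.e. a weighted geometric mean, even though the displayed formula omits the exp. A sketch matching the doctest (a re-implementation consistent with the docstring, not the library's code):

```python
from math import exp, log

def mixing_logarithmic(fracs, props):
    # Weighted geometric mean: exp(sum_i frac_i * ln(prop_i)).
    # Raises ValueError on non-positive properties, consistent with
    # the docstring's note that it does not work on negative values.
    return exp(sum(f * log(p) for f, p in zip(fracs, props)))
```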
If the property is not available\n for the current phase, or if the current phase is not known, returns None.\n\n Parameters\n ----------\n phase : str\n One of {'s', 'l', 'g', 'two-phase'}\n s : float\n Solid-phase property\n l : float\n Liquid-phase property\n g : float\n Gas-phase property\n V_over_F : float\n Vapor phase fraction\n\n Returns\n -------\n prop : float\n The selected/calculated property for the relevant phase\n\n Notes\n -----\n Could calculate mole-fraction weighted properties for the two phase regime.\n Could also implement equilibria with solid phases.\n\n Examples\n --------\n >>> phase_select_property(phase='g', l=1560.14, g=3312.)\n 3312.0"} {"_id": "q_5369", "text": "r'''Method to obtain a sorted list of methods which are valid at `T`\n according to `test_method_validity`. Considers either only user methods\n if forced is True, or all methods. User methods are first tested\n according to their listed order, and unless forced is True, then all\n methods are tested and sorted by their order in `ranked_methods`.\n\n Parameters\n ----------\n T : float\n Temperature at which to test methods, [K]\n\n Returns\n -------\n sorted_valid_methods : list\n Sorted lists of methods valid at T according to\n `test_method_validity`"} {"_id": "q_5370", "text": "r'''Method to solve for the temperature at which a property is at a\n specified value. `T_dependent_property` is used to calculate the value\n of the property as a function of temperature; if `reset_method` is True,\n the best method is used at each temperature as the solver seeks a\n solution. This slows the solution moderately.\n\n Checks the given property value with `test_property_validity` first\n and raises an exception if it is not valid. 
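The phase-dispatch logic described in `phase_select_property` above can be sketched as a simple lookup that returns None for the two-phase region and unknown phases (a re-implementation consistent with the docstring, not the library's code):

```python
def phase_select_property(phase, s=None, l=None, g=None, V_over_F=None):
    # Return the property value for the chemical's current phase; the
    # two-phase region and unrecognized phases deliberately return None,
    # as does a phase whose property was not supplied.
    if phase in ('s', 'l', 'g'):
        return {'s': s, 'l': l, 'g': g}[phase]
    return None  # 'two-phase' or unknown phase
```

This mirrors the doctest: for phase='g' with l=1560.14 and g=3312.0, the gas-phase value is returned.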
Requires that Tmin and\n Tmax have been set to know what range to search within.\n\n Search is performed with the brenth solver from SciPy.\n\n Parameters\n ----------\n goal : float\n Property value desired, [`units`]\n reset_method : bool\n Whether or not to reset the method as the solver searches\n\n Returns\n -------\n T : float\n Temperature at which the property is the specified value [K]"} {"_id": "q_5371", "text": "r'''Method to obtain a derivative of a property with respect to \n temperature, of a given order. Methods found valid by \n `select_valid_methods` are attempted until a method succeeds. If no \n methods are valid and succeed, None is returned.\n\n Calls `calculate_derivative` internally to perform the actual\n calculation.\n \n .. math::\n \\text{derivative} = \\frac{d (\\text{property})}{d T}\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the derivative, [K]\n order : int\n Order of the derivative, >= 1\n\n Returns\n -------\n derivative : float\n Calculated derivative property, [`units/K^order`]"} {"_id": "q_5372", "text": "r'''Method to calculate the integral of a property with respect to\n temperature, using a specified method. Uses SciPy's `quad` function\n to perform the integral, with no options.\n \n This method can be overwritten by subclasses who may prefer to add\n analytical methods for some or all methods as this is much faster.\n\n If the calculation does not succeed, returns the actual error\n encountered.\n\n Parameters\n ----------\n T1 : float\n Lower limit of integration, [K]\n T2 : float\n Upper limit of integration, [K]\n method : str\n Method for which to find the integral\n\n Returns\n -------\n integral : float\n Calculated integral of the property over the given range, \n [`units*K`]"} {"_id": "q_5373", "text": "r'''Method to calculate the integral of a property with respect to\n temperature, using a specified method. 
Methods found valid by \n `select_valid_methods` are attempted until a method succeeds. If no \n methods are valid and succeed, None is returned.\n \n Calls `calculate_integral` internally to perform the actual\n calculation.\n\n .. math::\n \\text{integral} = \\int_{T_1}^{T_2} \\text{property} \\; dT\n\n Parameters\n ----------\n T1 : float\n Lower limit of integration, [K]\n T2 : float\n Upper limit of integration, [K]\n method : str\n Method for which to find the integral\n\n Returns\n -------\n integral : float\n Calculated integral of the property over the given range, \n [`units*K`]"} {"_id": "q_5374", "text": "r'''Method to calculate the integral of a property over temperature\n with respect to temperature, using a specified method. Uses SciPy's \n `quad` function to perform the integral, with no options.\n \n This method can be overwritten by subclasses who may perfer to add\n analytical methods for some or all methods as this is much faster.\n\n If the calculation does not succeed, returns the actual error\n encountered.\n\n Parameters\n ----------\n T1 : float\n Lower limit of integration, [K]\n T2 : float\n Upper limit of integration, [K]\n method : str\n Method for which to find the integral\n\n Returns\n -------\n integral : float\n Calculated integral of the property over the given range, \n [`units`]"} {"_id": "q_5375", "text": "r'''Method to load all data, and set all_methods based on the available\n data and properties. Demo function for testing only; must be\n implemented according to the methods available for each individual\n method."} {"_id": "q_5376", "text": "r'''Method to calculate a property with a specified method, with no\n validity checking or error handling. Demo function for testing only;\n must be implemented according to the methods available for each\n individual method. 
Include the interpolation call here.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n method : str\n Method name to use\n\n Returns\n -------\n prop : float\n Calculated property, [`units`]"} {"_id": "q_5377", "text": "r'''Method to obtain a sorted list methods which are valid at `T`\n according to `test_method_validity`. Considers either only user methods\n if forced is True, or all methods. User methods are first tested\n according to their listed order, and unless forced is True, then all\n methods are tested and sorted by their order in `ranked_methods`.\n\n Parameters\n ----------\n T : float\n Temperature at which to test methods, [K]\n P : float\n Pressure at which to test methods, [Pa]\n\n Returns\n -------\n sorted_valid_methods_P : list\n Sorted lists of methods valid at T and P according to\n `test_method_validity`"} {"_id": "q_5378", "text": "r'''Method to calculate the property with sanity checking and without\n specifying a specific method. `select_valid_methods_P` is used to obtain\n a sorted list of methods to try. Methods are then tried in order until\n one succeeds. The methods are allowed to fail, and their results are\n checked with `test_property_validity`. On success, the used method\n is stored in the variable `method_P`.\n\n If `method_P` is set, this method is first checked for validity with\n `test_method_validity_P` for the specified temperature, and if it is\n valid, it is then used to calculate the property. The result is checked\n for validity, and returned if it is valid. 
If either of the checks fail,\n the function retrieves a full list of valid methods with\n `select_valid_methods_P` and attempts them as described above.\n\n If no methods are found which succeed, returns None.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n\n Returns\n -------\n prop : float\n Calculated property, [`units`]"} {"_id": "q_5379", "text": "r'''Method to calculate a derivative of a temperature and pressure\n dependent property with respect to temperature at constant pressure,\n of a given order. Methods found valid by `select_valid_methods_P` are \n attempted until a method succeeds. If no methods are valid and succeed,\n None is returned.\n\n Calls `calculate_derivative_T` internally to perform the actual\n calculation.\n \n .. math::\n \\text{derivative} = \\frac{d (\\text{property})}{d T}|_{P}\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the derivative, [K]\n P : float\n Pressure at which to calculate the derivative, [Pa]\n order : int\n Order of the derivative, >= 1\n\n Returns\n -------\n d_prop_d_T_at_P : float\n Calculated derivative property, [`units/K^order`]"} {"_id": "q_5380", "text": "r'''Method to calculate a derivative of a temperature and pressure\n dependent property with respect to pressure at constant temperature,\n of a given order. Methods found valid by `select_valid_methods_P` are \n attempted until a method succeeds. If no methods are valid and succeed,\n None is returned.\n\n Calls `calculate_derivative_P` internally to perform the actual\n calculation.\n \n .. 
math::\n \\text{derivative} = \\frac{d (\\text{property})}{d P}|_{T}\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the derivative, [K]\n P : float\n Pressure at which to calculate the derivative, [Pa]\n order : int\n Order of the derivative, >= 1\n\n Returns\n -------\n d_prop_d_P_at_T : float\n Calculated derivative property, [`units/Pa^order`]"} {"_id": "q_5381", "text": "r'''Method to calculate a derivative of a mixture property with respect\n to temperature at constant pressure and composition,\n of a given order. Methods found valid by `select_valid_methods` are \n attempted until a method succeeds. If no methods are valid and succeed,\n None is returned.\n\n Calls `calculate_derivative_T` internally to perform the actual\n calculation.\n \n .. math::\n \\text{derivative} = \\frac{d (\\text{property})}{d T}|_{P, z}\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the derivative, [K]\n P : float\n Pressure at which to calculate the derivative, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n order : int\n Order of the derivative, >= 1\n\n Returns\n -------\n d_prop_d_T_at_P : float\n Calculated derivative property, [`units/K^order`]"} {"_id": "q_5382", "text": "r'''Method to calculate a derivative of a mixture property with respect\n to pressure at constant temperature and composition,\n of a given order. Methods found valid by `select_valid_methods` are \n attempted until a method succeeds. If no methods are valid and succeed,\n None is returned.\n\n Calls `calculate_derivative_P` internally to perform the actual\n calculation.\n \n .. 
math::\n \\text{derivative} = \\frac{d (\\text{property})}{d P}|_{T, z}\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the derivative, [K]\n P : float\n Pressure at which to calculate the derivative, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n order : int\n Order of the derivative, >= 1\n\n Returns\n -------\n d_prop_d_P_at_T : float\n Calculated derivative property, [`units/Pa^order`]"} {"_id": "q_5383", "text": "r'''Generic method to calculate `T` from a specified `P` and `V`.\n Provides SciPy's `newton` solver, and iterates to solve the general\n equation for `P`, recalculating `a_alpha` as a function of temperature\n using `a_alpha_and_derivatives` each iteration.\n\n Parameters\n ----------\n P : float\n Pressure, [Pa]\n V : float\n Molar volume, [m^3/mol]\n quick : bool, optional\n Unimplemented, although it may be possible to derive explicit \n expressions as done for many pure-component EOS\n\n Returns\n -------\n T : float\n Temperature, [K]"} {"_id": "q_5384", "text": "r'''Sets `a`, `kappa`, and `Tc` for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."} {"_id": "q_5385", "text": "r'''Sets `a`, `m`, and `Tc` for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."} {"_id": "q_5386", "text": "r'''Sets `a`, `kappa0`, `kappa1`, and `Tc` for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. 
Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."} {"_id": "q_5387", "text": "r'''Sets `a`, `kappa`, `kappa0`, `kappa1`, `kappa2`, `kappa3` and `Tc`\n for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."} {"_id": "q_5388", "text": "r'''Sets `a`, `omega`, and `Tc` for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."} {"_id": "q_5389", "text": "r'''Sets `a`, `S1`, `S2` and `Tc` for a specific component before the \n pure-species EOS's `a_alpha_and_derivatives` method is called. Both are \n called by `GCEOSMIX.a_alpha_and_derivatives` for every component."} {"_id": "q_5390", "text": "r'''Estimates the thermal conductivity of paraffin liquid hydrocarbons.\n Fits their data well, and is useful as only MW is required.\n X is the molecular weight, and Y the temperature.\n\n .. math::\n K = a + bY + cY^2 + dY^3\n\n a = A_1 + B_1 X + C_1 X^2 + D_1 X^3\n\n b = A_2 + B_2 X + C_2 X^2 + D_2 X^3\n\n c = A_3 + B_3 X + C_3 X^2 + D_3 X^3\n\n d = A_4 + B_4 X + C_4 X^2 + D_4 X^3\n\n Parameters\n ----------\n T : float\n Temperature of the fluid [K]\n M : float\n Molecular weight of the fluid [g/mol]\n\n Returns\n -------\n kl : float\n Estimated liquid thermal conductivity [W/m/K]\n\n Notes\n -----\n The accuracy of this equation has not been reviewed.\n\n Examples\n --------\n Data point from [1]_.\n\n >>> Bahadori_liquid(273.15, 170)\n 0.14274278108272603\n\n References\n ----------\n .. [1] Bahadori, Alireza, and Saeid Mokhatab. \"Estimating Thermal\n Conductivity of Hydrocarbons.\" Chemical Engineering 115, no. 
13\n (December 2008): 52-54"} {"_id": "q_5391", "text": "r'''Estimates the thermal conductivity of hydrocarbon gases at low P.\n Fits their data well, and is useful as only MW is required.\n Y is the molecular weight, and X the temperature.\n\n .. math::\n K = a + bY + cY^2 + dY^3\n\n a = A_1 + B_1 X + C_1 X^2 + D_1 X^3\n\n b = A_2 + B_2 X + C_2 X^2 + D_2 X^3\n\n c = A_3 + B_3 X + C_3 X^2 + D_3 X^3\n\n d = A_4 + B_4 X + C_4 X^2 + D_4 X^3\n\n Parameters\n ----------\n T : float\n Temperature of the gas [K]\n MW : float\n Molecular weight of the gas [g/mol]\n\n Returns\n -------\n kg : float\n Estimated gas thermal conductivity [W/m/K]\n\n Notes\n -----\n The accuracy of this equation has not been reviewed.\n\n Examples\n --------\n >>> Bahadori_gas(40+273.15, 20) # Point from article\n 0.031968165337873326\n\n References\n ----------\n .. [1] Bahadori, Alireza, and Saeid Mokhatab. \"Estimating Thermal\n Conductivity of Hydrocarbons.\" Chemical Engineering 115, no. 13\n (December 2008): 52-54"} {"_id": "q_5392", "text": "r'''Method to calculate low-pressure liquid thermal conductivity at\n temperature `T` with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature of the liquid, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n kl : float\n Thermal conductivity of the liquid at T and a low pressure, [W/m/K]"} {"_id": "q_5393", "text": "r'''Method to calculate pressure-dependent liquid thermal conductivity\n at temperature `T` and pressure `P` with a given method.\n\n This method has no exception handling; see `TP_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate liquid thermal conductivity, [K]\n P : float\n Pressure at which to calculate liquid thermal conductivity, [Pa]\n method : str\n Name of the method to use\n\n Returns\n -------\n kl : float\n Thermal conductivity of the liquid at T and 
P, [W/m/K]"} {"_id": "q_5394", "text": "r'''Method to calculate thermal conductivity of a liquid mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n k : float\n Thermal conductivity of the liquid mixture, [W/m/K]"} {"_id": "q_5395", "text": "r'''Method to calculate low-pressure gas thermal conductivity at\n temperature `T` with a given method.\n\n This method has no exception handling; see `T_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature of the gas, [K]\n method : str\n Name of the method to use\n\n Returns\n -------\n kg : float\n Thermal conductivity of the gas at T and a low pressure, [W/m/K]"} {"_id": "q_5396", "text": "r'''Method to calculate pressure-dependent gas thermal conductivity\n at temperature `T` and pressure `P` with a given method.\n\n This method has no exception handling; see `TP_dependent_property`\n for that.\n\n Parameters\n ----------\n T : float\n Temperature at which to calculate gas thermal conductivity, [K]\n P : float\n Pressure at which to calculate gas thermal conductivity, [Pa]\n method : str\n Name of the method to use\n\n Returns\n -------\n kg : float\n Thermal conductivity of the gas at T and P, [W/m/K]"} {"_id": "q_5397", "text": "r'''Method to calculate thermal conductivity of a gas mixture at \n temperature `T`, pressure `P`, mole fractions `zs` and weight fractions\n `ws` with a given method.\n\n This method has no exception handling; see `mixture_property`\n for that.\n\n Parameters\n 
----------\n T : float\n Temperature at which to calculate the property, [K]\n P : float\n Pressure at which to calculate the property, [Pa]\n zs : list[float]\n Mole fractions of all species in the mixture, [-]\n ws : list[float]\n Weight fractions of all species in the mixture, [-]\n method : str\n Name of the method to use\n\n Returns\n -------\n kg : float\n Thermal conductivity of gas mixture, [W/m/K]"} {"_id": "q_5398", "text": "r'''Basic formula parser to determine the charge from a formula - given\n that the charge is already specified as one element of the formula.\n\n Performs no sanity checking that elements are actually elements.\n \n Parameters\n ----------\n formula : str\n Formula string, very simple formats only, ending in one of '+x',\n '-x', n*'+', or n*'-' or any of them surrounded by brackets but always\n at the end of a formula.\n\n Returns\n -------\n charge : int\n Charge of the molecule, [faraday]\n\n Notes\n -----\n\n Examples\n --------\n >>> charge_from_formula('Br3-')\n -1\n >>> charge_from_formula('Br3(-)')\n -1"} {"_id": "q_5399", "text": "Convolve 2d gaussian."} {"_id": "q_5400", "text": "Generate a gaussian kernel."} {"_id": "q_5401", "text": "Convert PIL image to numpy grayscale array and numpy alpha array.\n\n Args:\n img (PIL.Image): PIL Image object.\n\n Returns:\n (gray, alpha): both numpy arrays."} {"_id": "q_5402", "text": "Compute the SSIM value from the reference image to the target image.\n\n Args:\n target (str or PIL.Image): Input image to compare the reference image\n to. This may be a PIL Image object or, to save time, an SSIMImage\n object (e.g. the img member of another SSIM object).\n\n Returns:\n Computed SSIM float value."} {"_id": "q_5403", "text": "Computes SSIM.\n\n Args:\n im1: First PIL Image object to compare.\n im2: Second PIL Image object to compare.\n\n Returns:\n SSIM float value."} {"_id": "q_5404", "text": "Switch to a new code version on all cluster nodes. 
You\n should ensure that cluster nodes are updated, otherwise they\n won't be able to apply commands.\n\n :param newVersion: new code version\n :type newVersion: int\n :param callback: will be called on success or failure\n :type callback: function(`FAIL_REASON <#pysyncobj.FAIL_REASON>`_, None)"} {"_id": "q_5405", "text": "Dumps different debug info about the cluster to a dict and returns it"} {"_id": "q_5406", "text": "Find the node to which a connection belongs.\n\n :param conn: connection object\n :type conn: TcpConnection\n :returns corresponding node or None if the node cannot be found\n :rtype Node or None"} {"_id": "q_5407", "text": "Callback for connections initiated by the other side\n\n :param conn: connection object\n :type conn: TcpConnection"} {"_id": "q_5408", "text": "Callback for initial messages on incoming connections. Handles encryption, utility messages, and association of the connection with a Node.\n Once this initial setup is done, the relevant connected callback is executed, and further messages are deferred to the onMessageReceived callback.\n\n :param conn: connection object\n :type conn: TcpConnection\n :param message: received message\n :type message: any"} {"_id": "q_5409", "text": "Check whether this node should initiate a connection to another node\n\n :param node: the other node\n :type node: Node"} {"_id": "q_5410", "text": "Connect to a node if necessary.\n\n :param node: node to connect to\n :type node: Node"} {"_id": "q_5411", "text": "Callback for receiving a message on a new outgoing connection. Used only if encryption is enabled to exchange the random keys.\n Once the key exchange is done, this triggers the onNodeConnected callback, and further messages are deferred to the onMessageReceived callback.\n\n :param conn: connection object\n :type conn: TcpConnection\n :param message: received message\n :type message: any"} {"_id": "q_5412", "text": "Callback for when a connection is terminated or considered dead. 
Initiates a reconnect if necessary.\n\n :param conn: connection object\n :type conn: TcpConnection"} {"_id": "q_5413", "text": "Send a message to a node. Returns False if the connection appears to be dead either before or after actually trying to send the message.\n\n :param node: target node\n :type node: Node\n :param message: message\n :type message: any\n :returns success\n :rtype bool"} {"_id": "q_5414", "text": "Destroy this transport"} {"_id": "q_5415", "text": "Put an item into the queue.\n True - if item placed in queue.\n False - if queue is full and item can not be placed."} {"_id": "q_5416", "text": "Put an item into the queue. Items should be comparable, eg. tuples.\n True - if item placed in queue.\n False - if queue is full and item can not be placed."} {"_id": "q_5417", "text": "Extract the smallest item from queue.\n Return default if queue is empty."} {"_id": "q_5418", "text": "Attempt to acquire lock.\n\n :param lockID: unique lock identifier.\n :type lockID: str\n :param sync: True - to wait until lock is acquired or failed to acquire.\n :type sync: bool\n :param callback: if sync is False - callback will be called with operation result.\n :type callback: func(opResult, error)\n :param timeout: max operation time (default - unlimited)\n :type timeout: float\n :return True if acquired, False - somebody else already acquired lock"} {"_id": "q_5419", "text": "Check if lock is acquired by ourselves.\n\n :param lockID: unique lock identifier.\n :type lockID: str\n :return True if lock is acquired by ourselves."} {"_id": "q_5420", "text": "Decorator which wraps checks and returns an error response on failure."} {"_id": "q_5421", "text": "Decorator which ensures that one of the WATCHMAN_TOKENS is provided if set.\n\n WATCHMAN_TOKEN_NAME can also be set if the token GET parameter must be\n customized."} {"_id": "q_5422", "text": "Establish a connection to the chat server.\n\n Returns when an error has occurred, or :func:`disconnect` has been\n 
called."} {"_id": "q_5423", "text": "Return ``request_header`` for use when constructing requests.\n\n Returns:\n Populated request header."} {"_id": "q_5424", "text": "Set this client as active.\n\n While a client is active, no other clients will raise notifications.\n Call this method whenever there is an indication the user is\n interacting with this client. This method may be called very\n frequently, and it will only make a request when necessary."} {"_id": "q_5425", "text": "Parse the image upload response to obtain status.\n\n Args:\n res: http_utils.FetchResponse instance, the upload response\n\n Returns:\n dict, sessionStatus of the response\n\n Raises:\n hangups.NetworkError: If the upload request failed."} {"_id": "q_5426", "text": "Parse channel array and call the appropriate events."} {"_id": "q_5427", "text": "Add services to the channel.\n\n The services we add to the channel determine what kind of data we will\n receive on it.\n\n The \"babel\" service includes what we need for Hangouts. If this fails\n for some reason, hangups will never receive any events. The\n \"babel_presence_last_seen\" service is also required to receive presence\n notifications.\n\n This needs to be re-called whenever we open a new channel (when there's\n a new SID and client_id)."} {"_id": "q_5428", "text": "Send a Protocol Buffer formatted chat API request.\n\n Args:\n endpoint (str): The chat API endpoint to use.\n request_pb: The request body as a Protocol Buffer message.\n response_pb: The response body as a Protocol Buffer message.\n\n Raises:\n NetworkError: If the request fails."} {"_id": "q_5429", "text": "Send a generic authenticated POST request.\n\n Args:\n url (str): URL of request.\n content_type (str): Request content type.\n response_type (str): The desired response format. Valid options\n are: 'json' (JSON), 'protojson' (pblite), and 'proto' (binary\n Protocol Buffer). 
'proto' requires manually setting an extra\n header 'X-Goog-Encode-Response-If-Executable: base64'.\n data (str): Request body data.\n\n Returns:\n FetchResponse: Response containing HTTP code, cookies, and body.\n\n Raises:\n NetworkError: If the request fails."} {"_id": "q_5430", "text": "Invite users to join an existing group conversation."} {"_id": "q_5431", "text": "Create a new conversation."} {"_id": "q_5432", "text": "Return conversation info and recent events."} {"_id": "q_5433", "text": "Return one or more user entities.\n\n Searching by phone number only finds entities when their phone number\n is in your contacts (and not always even then), and can't be used to\n find Google Voice contacts."} {"_id": "q_5434", "text": "Return info about the current user."} {"_id": "q_5435", "text": "Return presence status for a list of users."} {"_id": "q_5436", "text": "Remove a participant from a group conversation."} {"_id": "q_5437", "text": "Rename a conversation.\n\n Both group and one-to-one conversations may be renamed, but the\n official Hangouts clients have mixed support for one-to-one\n conversations with custom names."} {"_id": "q_5438", "text": "Enable or disable message history in a conversation."} {"_id": "q_5439", "text": "Set the notification level of a conversation."} {"_id": "q_5440", "text": "Set focus to a conversation."} {"_id": "q_5441", "text": "Set whether group link sharing is enabled for a conversation."} {"_id": "q_5442", "text": "Set the typing status of a conversation."} {"_id": "q_5443", "text": "Return info on recent conversations and their events."} {"_id": "q_5444", "text": "Convert a microsecond timestamp to a UTC datetime instance."} {"_id": "q_5445", "text": "Convert UserID to hangouts_pb2.ParticipantId."} {"_id": "q_5446", "text": "Return WatermarkNotification from hangouts_pb2.WatermarkNotification."} {"_id": "q_5447", "text": "Return authorization headers for API request."} {"_id": "q_5448", "text": "Make an HTTP request.\n\n 
Automatically uses configured HTTP proxy, and adds Google authorization\n header and cookies.\n\n Failures will be retried MAX_RETRIES times before raising NetworkError.\n\n Args:\n method (str): Request method.\n url (str): Request URL.\n params (dict): (optional) Request query string parameters.\n headers (dict): (optional) Request headers.\n data (str): (optional) Request body data.\n\n Returns:\n FetchResponse: Response data.\n\n Raises:\n NetworkError: If the request fails."} {"_id": "q_5449", "text": "Search for entities by phone number, email, or gaia_id."} {"_id": "q_5450", "text": "Return EntityLookupSpec from phone number, email address, or gaia ID."} {"_id": "q_5451", "text": "Return a readable name for a conversation.\n\n If the conversation has a custom name, use the custom name. Otherwise, for\n one-to-one conversations, the name is the full name of the other user. For\n group conversations, the name is a comma-separated list of first names. If\n the group conversation is empty, the name is \"Empty Conversation\".\n\n If truncate is true, only show up to two names in a group conversation.\n\n If show_unread is True, if there are unread chat messages, show the number\n of unread chat messages in parentheses after the conversation name."} {"_id": "q_5452", "text": "Add foreground and background colours to a color scheme"} {"_id": "q_5453", "text": "Sync all conversations by making paginated requests.\n\n Conversations are ordered by ascending sort timestamp.\n\n Args:\n client (Client): Connected client.\n\n Raises:\n NetworkError: If the requests fail.\n\n Returns:\n tuple of list of ``ConversationState`` messages and sync timestamp"} {"_id": "q_5454", "text": "Loaded events which are unread sorted oldest to newest.\n\n Some Hangouts clients don't update the read timestamp for certain event\n types, such as membership changes, so this may return more unread\n events than these clients will show. 
There's also a delay between\n sending a message and the user's own message being considered read.\n\n (list of :class:`.ConversationEvent`)."} {"_id": "q_5455", "text": "Handle a watermark notification."} {"_id": "q_5456", "text": "Update the internal state of the conversation.\n\n This method is used by :class:`.ConversationList` to maintain this\n instance.\n\n Args:\n conversation: ``Conversation`` message."} {"_id": "q_5457", "text": "Wrap hangouts_pb2.Event in ConversationEvent subclass."} {"_id": "q_5458", "text": "Add an event to the conversation.\n\n This method is used by :class:`.ConversationList` to maintain this\n instance.\n\n Args:\n event_: ``Event`` message.\n\n Returns:\n :class:`.ConversationEvent` representing the event."} {"_id": "q_5459", "text": "Send a message to this conversation.\n\n A per-conversation lock is acquired to ensure that messages are sent in\n the correct order when this method is called multiple times\n asynchronously.\n\n Args:\n segments: List of :class:`.ChatMessageSegment` objects to include\n in the message.\n image_file: (optional) File-like object containing an image to be\n attached to the message.\n image_id: (optional) ID of a Picasa photo to be attached to the\n message. 
If you specify both ``image_file`` and ``image_id``\n together, ``image_file`` takes precedence and ``image_id`` will\n be ignored.\n image_user_id: (optional) Picasa user ID, required only if\n ``image_id`` refers to an image from a different Picasa user,\n such as Google's sticker user.\n\n Raises:\n .NetworkError: If the message cannot be sent."} {"_id": "q_5460", "text": "Leave this conversation.\n\n Raises:\n .NetworkError: If conversation cannot be left."} {"_id": "q_5461", "text": "Set the notification level of this conversation.\n\n Args:\n level: ``NOTIFICATION_LEVEL_QUIET`` to disable notifications, or\n ``NOTIFICATION_LEVEL_RING`` to enable them.\n\n Raises:\n .NetworkError: If the request fails."} {"_id": "q_5462", "text": "Update the timestamp of the latest event which has been read.\n\n This method will avoid making an API request if it will have no effect.\n\n Args:\n read_timestamp (datetime.datetime): (optional) Timestamp to set.\n Defaults to the timestamp of the newest event.\n\n Raises:\n .NetworkError: If the timestamp cannot be updated."} {"_id": "q_5463", "text": "Get all the conversations.\n\n Args:\n include_archived (bool): (optional) Whether to include archived\n conversations. 
Defaults to ``False``.\n\n Returns:\n List of all :class:`.Conversation` objects."} {"_id": "q_5464", "text": "Leave a conversation.\n\n Args:\n conv_id (str): ID of conversation to leave."} {"_id": "q_5465", "text": "Add new conversation from hangouts_pb2.Conversation"} {"_id": "q_5466", "text": "Get a cached conversation or fetch a missing conversation.\n\n Args:\n conv_id: string, conversation identifier\n\n Raises:\n NetworkError: If the request to fetch the conversation fails.\n\n Returns:\n :class:`.Conversation` with matching ID."} {"_id": "q_5467", "text": "Receive a hangouts_pb2.Event and fan out to Conversations.\n\n Args:\n event_: hangouts_pb2.Event instance"} {"_id": "q_5468", "text": "Receive Conversation delta and create or update the conversation.\n\n Args:\n conversation: hangouts_pb2.Conversation instance\n\n Raises:\n NetworkError: A request to fetch the complete conversation failed."} {"_id": "q_5469", "text": "Receive SetTypingNotification and update the conversation.\n\n Args:\n set_typing_notification: hangouts_pb2.SetTypingNotification\n instance"} {"_id": "q_5470", "text": "Receive WatermarkNotification and update the conversation.\n\n Args:\n watermark_notification: hangouts_pb2.WatermarkNotification instance"} {"_id": "q_5471", "text": "Sync conversation state and events that could have been missed."} {"_id": "q_5472", "text": "Construct user from ``Entity`` message.\n\n Args:\n entity: ``Entity`` message.\n self_user_id (~hangups.user.UserID or None): The ID of the current\n user. 
If ``None``, assume ``entity`` is the current user.\n\n Returns:\n :class:`~hangups.user.User` object."} {"_id": "q_5473", "text": "Get a user by its ID.\n\n Args:\n user_id (~hangups.user.UserID): The ID of the user.\n\n Raises:\n KeyError: If no such user is known.\n\n Returns:\n :class:`~hangups.user.User` with the given ID."} {"_id": "q_5474", "text": "Add or upgrade User from ConversationParticipantData."} {"_id": "q_5475", "text": "Add an observer to this event.\n\n Args:\n callback: A function or coroutine callback to call when the event\n is fired.\n\n Raises:\n ValueError: If the callback has already been added."} {"_id": "q_5476", "text": "Remove an observer from this event.\n\n Args:\n callback: A function or coroutine callback to remove from this\n event.\n\n Raises:\n ValueError: If the callback is not an observer of this event."} {"_id": "q_5477", "text": "Fire this event, calling all observers with the same arguments."} {"_id": "q_5478", "text": "Run a hangups example coroutine.\n\n Args:\n example_coroutine (coroutine): Coroutine to run with a connected\n hangups client and arguments namespace as arguments.\n extra_args (str): Any extra command line arguments required by the\n example."} {"_id": "q_5479", "text": "Return ArgumentParser with any extra arguments."} {"_id": "q_5480", "text": "Print column headers and rows as a reStructuredText table.\n\n Args:\n col_tuple: Tuple of column name strings.\n row_tuples: List of tuples containing row data."} {"_id": "q_5481", "text": "Generate doc for an enum.\n\n Args:\n enum_descriptor: descriptor_pb2.EnumDescriptorProto instance for enum\n to generate docs for.\n locations: Dictionary of location paths tuples to\n descriptor_pb2.SourceCodeInfo.Location instances.\n path: Path tuple to the enum definition.\n name_prefix: Optional prefix for this enum's name."} {"_id": "q_5482", "text": "Create a directory if it does not exist."} {"_id": "q_5483", "text": "Show the overlay menu."} {"_id": "q_5484", "text": 
"Handle connecting for the first time."} {"_id": "q_5485", "text": "Open conversation tab for new messages & pass events to notifier."} {"_id": "q_5486", "text": "Put a coroutine in the queue to be executed."} {"_id": "q_5487", "text": "Consume coroutines from the queue by executing them."} {"_id": "q_5488", "text": "Rename conversation and call callback."} {"_id": "q_5489", "text": "Re-order the conversations when an event occurs."} {"_id": "q_5490", "text": "Make users stop typing when they send a message."} {"_id": "q_5491", "text": "Update status text."} {"_id": "q_5492", "text": "Return MessageWidget representing a ConversationEvent.\n\n Returns None if the ConversationEvent does not have a widget\n representation."} {"_id": "q_5493", "text": "Handle updating and scrolling when a new event is added.\n\n Automatically scroll down to show the new text if the bottom is\n showing. This allows the user to scroll up to read previous messages\n while new messages are arriving."} {"_id": "q_5494", "text": "Load more events for this conversation."} {"_id": "q_5495", "text": "Return the menu widget associated with this widget."} {"_id": "q_5496", "text": "Update this conversation's tab title."} {"_id": "q_5497", "text": "Update tab display."} {"_id": "q_5498", "text": "Add or modify a tab.\n\n If widget is not a tab, it will be added. If switch is True, switch to\n this tab. 
If title is given, set the tab's title."} {"_id": "q_5499", "text": "Use the access token to get session cookies.\n\n Raises GoogleAuthError if session cookies could not be loaded.\n\n Returns dict of cookies."} {"_id": "q_5500", "text": "Populate and submit a form on the current page.\n\n Raises GoogleAuthError if form can not be submitted."} {"_id": "q_5501", "text": "Parse response format for request for new channel SID.\n\n Example format (after parsing JS):\n [ [0,[\"c\",\"SID_HERE\",\"\",8]],\n [1,[{\"gsid\":\"GSESSIONID_HERE\"}]]]\n\n Returns (SID, gsessionid) tuple."} {"_id": "q_5502", "text": "Listen for messages on the backwards channel.\n\n This method only returns when the connection has been closed due to an\n error."} {"_id": "q_5503", "text": "Open a long-polling request and receive arrays.\n\n This method uses keep-alive to make re-opening the request faster, but\n the remote server will set the \"Connection: close\" header once an hour.\n\n Raises hangups.NetworkError or ChannelSessionError."} {"_id": "q_5504", "text": "Parse push data and trigger events."} {"_id": "q_5505", "text": "Decode optional or required field."} {"_id": "q_5506", "text": "Decode repeated field."} {"_id": "q_5507", "text": "Decode pblite to Protocol Buffer message.\n\n This method is permissive of decoding errors and will log them as warnings\n and continue decoding where possible.\n\n The first element of the outer pblite list must often be ignored using the\n ignore_first_item parameter because it contains an abbreviation of the name\n of the protobuf message (eg. 
cscmrp for ClientSendChatMessageResponseP)\n that's not part of the protobuf.\n\n Args:\n message: protocol buffer message instance to decode into.\n pblite: list representing a pblite-serialized message.\n ignore_first_item: If True, ignore the item at index 0 in the pblite\n list, making the item at index 1 correspond to field 1 in the\n message."} {"_id": "q_5508", "text": "Sets the Elasticsearch hosts to use\n\n Args:\n hosts (str): A single hostname or URL, or list of hostnames or URLs\n use_ssl (bool): Use an HTTPS connection to the server\n ssl_cert_path (str): Path to the certificate chain"} {"_id": "q_5509", "text": "Updates index mappings\n\n Args:\n aggregate_indexes (list): A list of aggregate index names\n forensic_indexes (list): A list of forensic index names"} {"_id": "q_5510", "text": "Saves aggregate DMARC reports to Kafka\n\n Args:\n aggregate_reports (list): A list of aggregate report dictionaries\n to save to Kafka\n aggregate_topic (str): The name of the Kafka topic"} {"_id": "q_5511", "text": "Extracts xml from a zip or gzip file at the given path, file-like object,\n or bytes.\n\n Args:\n input_: A path to a file, a file like object, or bytes\n\n Returns:\n str: The extracted XML"} {"_id": "q_5512", "text": "Parses a file at the given path, a file-like object, 
or bytes as a\n aggregate DMARC report\n\n Args:\n _input: A path to a file, a file like object, or bytes\n nameservers (list): A list of one or more nameservers to use\n (Cloudflare's public DNS resolvers by default)\n dns_timeout (float): Sets the DNS timeout in seconds\n parallel (bool): Parallel processing\n\n Returns:\n OrderedDict: The parsed DMARC aggregate report"} {"_id": "q_5513", "text": "Converts one or more parsed forensic reports to flat CSV format, including\n headers\n\n Args:\n reports: A parsed forensic report or list of parsed forensic reports\n\n Returns:\n str: Parsed forensic report data in flat CSV format, including headers"} {"_id": "q_5514", "text": "Parses a DMARC aggregate or forensic file at the given path, a\n file-like object. or bytes\n\n Args:\n input_: A path to a file, a file like object, or bytes\n nameservers (list): A list of one or more nameservers to use\n (Cloudflare's public DNS resolvers by default)\n dns_timeout (float): Sets the DNS timeout in seconds\n strip_attachment_payloads (bool): Remove attachment payloads from\n forensic report results\n parallel (bool): Parallel processing\n\n Returns:\n OrderedDict: The parsed DMARC report"} {"_id": "q_5515", "text": "Returns a list of an IMAP server's capabilities\n\n Args:\n server (imapclient.IMAPClient): An instance of imapclient.IMAPClient\n\n Returns (list): A list of capabilities"} {"_id": "q_5516", "text": "Emails parsing results as a zip file\n\n Args:\n results (OrderedDict): Parsing results\n host: Mail server hostname or IP address\n mail_from: The value of the message from header\n mail_to : A list of addresses to mail to\n port (int): Port to use\n ssl (bool): Require a SSL connection from the start\n user: An optional username\n password: An optional password\n subject: Overrides the default message subject\n attachment_filename: Override the default attachment filename\n message: Override the default plain text body\n ssl_context: SSL context options"} {"_id": 
"q_5517", "text": "Saves aggregate DMARC reports to Splunk\n\n Args:\n aggregate_reports: A list of aggregate report dictionaries\n to save in Splunk"} {"_id": "q_5518", "text": "Decodes a base64 string, with padding being optional\n\n Args:\n data: A base64 encoded string\n\n Returns:\n bytes: The decoded bytes"} {"_id": "q_5519", "text": "Gets the base domain name for the given domain\n\n .. note::\n Results are based on a list of public domain suffixes at\n https://publicsuffix.org/list/public_suffix_list.dat.\n\n Args:\n domain (str): A domain or subdomain\n use_fresh_psl (bool): Download a fresh Public Suffix List\n\n Returns:\n str: The base domain of the given domain"} {"_id": "q_5520", "text": "Resolves an IP address to a hostname using a reverse DNS query\n\n Args:\n ip_address (str): The IP address to resolve\n cache (ExpiringDict): Cache storage\n nameservers (list): A list of one or more nameservers to use\n (Cloudflare's public DNS resolvers by default)\n timeout (float): Sets the DNS query timeout in seconds\n\n Returns:\n str: The reverse DNS hostname (if any)"} {"_id": "q_5521", "text": "Converts a human-readable timestamp into a Python ``DateTime`` object\n\n Args:\n human_timestamp (str): A timestamp string\n to_utc (bool): Convert the timestamp to UTC\n\n Returns:\n DateTime: The converted timestamp"} {"_id": "q_5522", "text": "Returns reverse DNS and country information for the given IP address\n\n Args:\n ip_address (str): The IP address to check\n cache (ExpiringDict): Cache storage\n nameservers (list): A list of one or more nameservers to use\n (Cloudflare's public DNS resolvers by default)\n timeout (float): Sets the DNS timeout in seconds\n parallel (bool): parallel processing\n\n Returns:\n OrderedDict: ``ip_address``, ``reverse_dns``"} {"_id": "q_5523", "text": "Uses the ``msgconvert`` Perl utility to convert an Outlook MS file to\n standard RFC 822 format\n\n Args:\n msg_bytes (bytes): the content of the .msg file\n\n Returns:\n An RFC 
822 string"} {"_id": "q_5524", "text": "Separated this function for multiprocessing"} {"_id": "q_5525", "text": "Sends a PUB command to the server on the specified subject.\n\n ->> PUB hello 5\n ->> MSG_PAYLOAD: world\n <<- MSG hello 2 5"} {"_id": "q_5526", "text": "Publishes a message tagging it with a reply subscription\n which can be used by those receiving the message to respond.\n\n ->> PUB hello _INBOX.2007314fe0fcb2cdc2a2914c1 5\n ->> MSG_PAYLOAD: world\n <<- MSG hello 2 _INBOX.2007314fe0fcb2cdc2a2914c1 5"} {"_id": "q_5527", "text": "Sets the subscription to use a task per message to be processed.\n\n .. deprecated:: 7.0\n Will be removed in 9.0."} {"_id": "q_5528", "text": "Sends a ping to the server expecting a pong back ensuring\n what we have written so far has made it to the server and\n also enabling measuring of roundtrip time.\n In case a pong is not returned within the allowed timeout,\n then it will raise ErrTimeout."} {"_id": "q_5529", "text": "Looks up in the server pool for an available server\n and attempts to connect."} {"_id": "q_5530", "text": "Process errors which occurred while reading or parsing\n the protocol. 
If allow_reconnect is enabled it will\n try to switch the server to which it is currently connected,\n otherwise it will disconnect."} {"_id": "q_5531", "text": "Process PONG sent by server."} {"_id": "q_5532", "text": "Coroutine which continuously tries to consume pending commands\n and then flushes them to the socket."} {"_id": "q_5533", "text": "Coroutine which gathers bytes sent by the server\n and feeds them to the protocol parser.\n In case of error while reading, it will stop running\n and its task has to be rescheduled."} {"_id": "q_5534", "text": "Generates a timezone aware datetime if the 'USE_TZ' setting is enabled\n\n :param value: The datetime value\n :return: A locale aware datetime"} {"_id": "q_5535", "text": "Load feature data from a 2D ndarray on disk."} {"_id": "q_5536", "text": "Load feature image data from image files.\n\n Args:\n images: A list of image filenames.\n names: An optional list of strings to use as the feature names. Must\n be in the same order as the images."} {"_id": "q_5537", "text": "Decode images using Pearson's r.\n\n Computes the correlation between each input image and each feature\n image across voxels.\n\n Args:\n imgs_to_decode: An ndarray of images to decode, with voxels in rows\n and images in columns.\n\n Returns:\n An n_features x n_images 2D array, with each cell representing the\n pearson correlation between the i'th feature and the j'th image\n across all voxels."} {"_id": "q_5538", "text": "Decoding using the dot product."} {"_id": "q_5539", "text": "Set up data for a classification task given a set of masks\n\n Given a set of masks, this function retrieves studies associated with\n each mask at the specified threshold, optionally removes overlap and\n filters by studies and features, and returns studies by feature matrix\n (X) and class labels (y)\n\n Args:\n dataset: a Neurosynth dataset\n masks: a list of paths to Nifti masks\n threshold: percentage of voxels active within the mask for study\n to be included\n 
remove_overlap: A boolean indicating if studies that\n appear in more than one mask should be excluded\n studies: An optional list of study names used to constrain the set\n used in classification. If None, will use all studies in the\n dataset.\n features: An optional list of feature names used to constrain the\n set used in classification. If None, will use all features in\n the dataset.\n regularize: Optional boolean indicating if X should be regularized\n\n Returns:\n A tuple (X, y) of np arrays.\n X is a feature by studies matrix and y is a vector of class labels"} {"_id": "q_5540", "text": "Returns a list with the order that features requested appear in\n dataset"} {"_id": "q_5541", "text": "Sets the class_weight of the classifier to match y"} {"_id": "q_5542", "text": "Given a dataset, fits either features or voxels to y"} {"_id": "q_5543", "text": "Aggregates over all voxels within each ROI in the input image.\n\n Takes a Dataset and a Nifti image that defines distinct regions, and\n returns a numpy matrix of ROIs x mappables, where the value at each\n ROI is the proportion of active voxels in that ROI. Each distinct ROI\n must have a unique value in the image; non-contiguous voxels with the\n same value will be assigned to the same ROI.\n\n Args:\n dataset: Either a Dataset instance from which image data are\n extracted, or a Numpy array containing image data to use. If\n the latter, the array contains voxels in rows and\n features/studies in columns. The number of voxels must be equal\n to the length of the vectorized image mask in the regions\n image.\n regions: An image defining the boundaries of the regions to use.\n Can be one of:\n 1) A string name of the NIFTI or Analyze-format image\n 2) A NiBabel SpatialImage\n 3) A list of NiBabel images\n 4) A 1D numpy array of the same length as the mask vector in\n the Dataset's current Masker.\n masker: Optional masker used to load image if regions is not a\n numpy array. 
Must be passed if dataset is a numpy array.\n threshold: An optional float in the range of 0 - 1 or integer. If\n passed, the array will be binarized, with ROI values above the\n threshold assigned to True and values below the threshold\n assigned to False. (E.g., if threshold = 0.05, only ROIs in\n which more than 5% of voxels are active will be considered\n active.) If threshold is integer, studies will only be\n considered active if they activate more than that number of\n voxels in the ROI.\n remove_zero: An optional boolean; when True, assume that voxels\n with value of 0 should not be considered as a separate ROI, and\n will be ignored.\n\n Returns:\n A 2D numpy array with ROIs in rows and mappables in columns."} {"_id": "q_5544", "text": "Return top forty words from each topic in trained topic model."} {"_id": "q_5545", "text": "Correlates row vector x with each row vector in 2D array y."} {"_id": "q_5546", "text": "Determine FDR threshold given a p value array and desired false\n discovery rate q."} {"_id": "q_5547", "text": "Create and store a new ImageTable instance based on the current\n Dataset. Will generally be called privately, but may be useful as a\n convenience method in cases where the user wants to re-generate the\n table with a new smoothing kernel of different radius.\n\n Args:\n r (int): An optional integer indicating the radius of the smoothing\n kernel. By default, this is None, which will keep whatever\n value is currently set in the Dataset instance."} {"_id": "q_5548", "text": "Construct a new FeatureTable from file.\n\n Args:\n features: Feature data to add. Can be:\n (a) A text file containing the feature data, where each row is\n a study in the database, with features in columns. 
The first\n column must contain the IDs of the studies to match up with the\n image data.\n (b) A pandas DataFrame, where studies are in rows, features are\n in columns, and the index provides the study IDs.\n append (bool): If True, adds new features to existing ones\n incrementally. If False, replaces old features.\n merge, duplicates, min_studies, threshold: Additional arguments\n passed to FeatureTable.add_features()."} {"_id": "q_5549", "text": "Load a pickled Dataset instance from file."} {"_id": "q_5550", "text": "Given a list of features, returns features in order that they\n appear in database.\n\n Args:\n features (list): A list or 1D numpy array of named features to\n return.\n\n Returns:\n A list of features in order they appear in database."} {"_id": "q_5551", "text": "Returns a list of all studies in the table that meet the desired\n feature-based criteria.\n\n Will most commonly be used to retrieve studies that use one or more\n features with some minimum frequency; e.g.,:\n\n get_ids(['fear', 'anxiety'], threshold=0.001)\n\n Args:\n features (lists): a list of feature names to search on.\n threshold (float): optional float indicating threshold features\n must pass to be included.\n func (Callable): any numpy function to use for thresholding\n (default: sum). The function will be applied to the list of\n features and the result compared to the threshold. This can be\n used to change the meaning of the query in powerful ways. E.g,:\n max: any of the features have to pass threshold\n (i.e., max > thresh)\n min: all features must each individually pass threshold\n (i.e., min > thresh)\n sum: the summed weight of all features must pass threshold\n (i.e., sum > thresh)\n get_weights (bool): if True, returns a dict with ids => weights.\n\n Returns:\n When get_weights is false (default), returns a list of study\n names. 
When true, returns a dict, with study names as keys\n and feature weights as values."} {"_id": "q_5552", "text": "Returns all features that match any of the elements in the input\n list.\n\n Args:\n search (str, list): A string or list of strings defining the query.\n\n Returns:\n A list of matching feature names."} {"_id": "q_5553", "text": "Convert FeatureTable to SciPy CSR matrix."} {"_id": "q_5554", "text": "Deprecation warning decorator. Takes optional deprecation message,\n otherwise will use a generic warning."} {"_id": "q_5555", "text": "Convert coordinates from one space to another using provided\n transformation matrix."} {"_id": "q_5556", "text": "Convert an N x 3 array of XYZ coordinates to matrix indices."} {"_id": "q_5557", "text": "Perform an ADC read with the provided mux, gain, data_rate, and mode\n values and with the comparator enabled as specified. Returns the signed\n integer result of the read."} {"_id": "q_5558", "text": "Read a single ADC channel and return the ADC value as a signed integer\n result. Channel must be a value within 0-3."} {"_id": "q_5559", "text": "Expand the given address into one or more normalized strings.\n\n Required\n --------\n @param address: the address as either Unicode or a UTF-8 encoded string\n\n Options\n -------\n @param languages: a tuple or list of ISO language code strings (e.g. \"en\", \"fr\", \"de\", etc.)\n to use in expansion. If None is passed, use language classifier\n to detect language automatically.\n @param address_components: an integer (bit-set) of address component expansions\n to use e.g. ADDRESS_NAME | ADDRESS_STREET would use\n only expansions which apply to venue names or streets.\n @param latin_ascii: use the Latin to ASCII transliterator, which normalizes e.g. 
\u00e6 => ae\n @param transliterate: use any available transliterators for non-Latin scripts, e.g.\n for the Greek phrase \u03b4\u03b9\u03b1\u03c6\u03bf\u03c1\u03b5\u03c4\u03b9\u03ba\u03bf\u03cd\u03c2 becomes diaphoretiko\u00fas\u0331\n @param strip_accents: strip accented characters e.g. \u00e9 => e, \u00e7 => c. This loses some\n information in various languages, but in general we want\n @param decompose: perform Unicode normalization (NFD form)\n @param lowercase: UTF-8 lowercase the string\n @param trim_string: trim spaces on either side of the string\n @param replace_word_hyphens: add version of the string replacing hyphens with space\n @param delete_word_hyphens: add version of the string with hyphens deleted\n @param replace_numeric_hyphens: add version of the string with numeric hyphens replaced \n e.g. 12345-6789 => 12345 6789\n @param delete_numeric_hyphens: add version of the string with numeric hyphens removed\n e.g. 12345-6789 => 123456789\n @param split_alpha_from_numeric: split tokens like CR17 into CR 17, helps with expansion\n of certain types of highway abbreviations\n @param delete_final_periods: remove final periods on abbreviations e.g. St. => St\n @param delete_acronym_periods: remove periods in acronyms e.g. U.S.A. => USA\n @param drop_english_possessives: normalize possessives e.g. Mark's => Marks\n @param delete_apostrophes: delete apostrophes e.g. O'Malley => OMalley\n @param expand_numex: converts numeric expressions e.g. Twenty sixth => 26th,\n using either the supplied languages or the result of\n automated language classification.\n @param roman_numerals: normalize Roman numerals e.g. IX => 9. 
Since these can be\n ambiguous (especially I and V), turning this on simply\n adds another version of the string if any potential\n Roman numerals are found."} {"_id": "q_5560", "text": "Normalizes a string, tokenizes, and normalizes each token\n with string and token-level options.\n\n This version only uses libpostal's deterministic normalizations\n i.e. methods with a single output. The string tree version will\n return multiple normalized strings, each with tokens.\n\n Usage:\n normalized_tokens(u'St.-Barth\u00e9lemy')"} {"_id": "q_5561", "text": "Parse address into components.\n\n @param address: the address as either Unicode or a UTF-8 encoded string\n @param language (optional): language code\n @param country (optional): country code"} {"_id": "q_5562", "text": "Hash the given address into normalized strings that can be used to group similar\n addresses together for more detailed pairwise comparison. This can be thought of\n as the blocking function in record linkage or locality-sensitive hashing in the\n document near-duplicate detection.\n\n Required\n --------\n @param labels: array of component labels as either Unicode or UTF-8 encoded strings\n e.g. [\"house_number\", \"road\", \"postcode\"]\n @param values: array of component values as either Unicode or UTF-8 encoded strings\n e.g. [\"123\", \"Broadway\", \"11216\"]. Note len(values) must be equal to\n len(labels).\n\n Options\n -------\n @param languages: a tuple or list of ISO language code strings (e.g. \"en\", \"fr\", \"de\", etc.)\n to use in expansion. 
If None is passed, use language classifier\n to detect language automatically.\n @param with_name: use name in the hashes\n @param with_address: use house_number & street in the hashes\n @param with_unit: use secondary unit as part of the hashes\n @param with_city_or_equivalent: use the city, city_district, suburb, or island name as one of\n the geo qualifiers\n @param with_small_containing_boundaries: use small containing boundaries (currently state_district)\n as one of the geo qualifiers\n @param with_postal_code: use postal code as one of the geo qualifiers\n @param with_latlon: use geohash + neighbors as one of the geo qualifiers\n @param latitude: latitude (Y coordinate)\n @param longitude: longitude (X coordinate)\n @param geohash_precision: geohash tile size (default = 6)\n @param name_and_address_keys: include keys with name + address + geo\n @param name_only_keys: include keys with name + geo\n @param address_only_keys: include keys with address + geo"} {"_id": "q_5563", "text": "Removes all dusty containers with 'Exited' in their status"} {"_id": "q_5564", "text": "Removes all dangling images as well as all images referenced in a dusty spec; forceful removal is not used"} {"_id": "q_5565", "text": "Write the given config to disk as a Dusty sub-config\n in the Nginx includes directory. Then, either start nginx\n or tell it to reload its config to pick up what we've\n just written."} {"_id": "q_5566", "text": "We require the list of all remote repo paths to be passed in\n to this because otherwise we would need to import the spec assembler\n in this module, which would give us circular imports."} {"_id": "q_5567", "text": "Daemon-side command to ensure we're running the latest\n versions of any managed repos, including the\n specs repo, before we do anything else in the up flow."} {"_id": "q_5568", "text": "This command will use the compilers to get compose specs\n and will pass those specs to the systems that need them. 
Those\n systems will in turn launch the services needed to make the\n local environment go."} {"_id": "q_5569", "text": "Restart any containers associated with Dusty, or associated with\n the provided app_or_service_names."} {"_id": "q_5570", "text": "Return a dictionary containing the Compose spec required to run\n Dusty's nginx container used for host forwarding."} {"_id": "q_5571", "text": "Given the assembled specs and app_name, this function will return all apps and services specified in\n 'conditional_links' if they are specified in 'apps' or 'services' in assembled_specs. That means that\n some other part of the system has declared them as necessary, so they should be linked to this app"} {"_id": "q_5572", "text": "This function returns a dictionary of the docker-compose.yml specifications for one app"} {"_id": "q_5573", "text": "This function returns a dictionary of the docker_compose specifications\n for one service. Currently, this is just the Dusty service spec with\n an additional volume mount to support Dusty's cp functionality."} {"_id": "q_5574", "text": "Returns a list of formatted port mappings for an app"} {"_id": "q_5575", "text": "This returns formatted volume specifications for a docker-compose app. 
We mount the app\n as well as any libs it needs so that local code is used in our container, instead of whatever\n code was in the docker image.\n\n Additionally, we create a volume for the /cp directory used by Dusty to facilitate\n easy file transfers using `dusty cp`."} {"_id": "q_5576", "text": "Expands specs.libs.depends.libs to include any indirectly required libs"} {"_id": "q_5577", "text": "Returns all libs that are referenced in specs.apps.depends.libs"} {"_id": "q_5578", "text": "Returns all services that are referenced in specs.apps.depends.services,\n or in specs.bundles.services"} {"_id": "q_5579", "text": "This function adds an assets key to the specs, which is filled in with a dictionary\n of all assets defined by apps and libs in the specs"} {"_id": "q_5580", "text": "This function takes an app or library name and will return the corresponding repo\n for that app or library"} {"_id": "q_5581", "text": "Given the spec of an app or library, returns all repos that are guaranteed\n to live in the same container"} {"_id": "q_5582", "text": "Given the name of an app or library, returns all repos that are guaranteed\n to live in the same container"} {"_id": "q_5583", "text": "Return a string of all host rules required to match\n the given spec. This string is wrapped in the Dusty hosts\n header and footer so it can be easily removed later."} {"_id": "q_5584", "text": "Moves the temporary binary to the location of the binary that's currently being run.\n Preserves owner, group, and permissions of original binary"} {"_id": "q_5585", "text": "Context manager for setting up a TaskQueue. Upon leaving the\n context manager, all tasks that were enqueued will be executed\n in parallel subject to `pool_size` concurrency constraints."} {"_id": "q_5586", "text": "This will output the nginx stream config string for specific port spec"} {"_id": "q_5587", "text": "Starting with Yosemite, launchd was rearchitected and now only one\n launchd process runs for all users. 
This allows us to much more easily\n impersonate a user through launchd and extract the environment\n variables from their running processes."} {"_id": "q_5588", "text": "Will check the mac_username config value; if it is present, will load that user's\n SSH_AUTH_SOCK environment variable to the current environment. This allows git clones\n to behave the same for the daemon as they do for the user"} {"_id": "q_5589", "text": "Recursively delete a path upon exiting this context\n manager. Supports targets that are files or directories."} {"_id": "q_5590", "text": "Copy a path from the local filesystem to a path inside a Dusty\n container. The files on the local filesystem must be accessible\n by the user specified in mac_username."} {"_id": "q_5591", "text": "Given a dictionary containing the expanded dusty DAG specs this function will\n return a dictionary containing the port mappings needed by downstream methods. Currently\n this includes docker_compose, virtualbox, nginx and hosts_file."} {"_id": "q_5592", "text": "Returns the Docker registry host associated with\n a given image name."} {"_id": "q_5593", "text": "Reads the local Docker client config for the current user\n and returns all registries to which the user may be logged in.\n This is intended to be run client-side, not by the daemon."} {"_id": "q_5594", "text": "Puts the client logger into streaming mode, which sends\n unbuffered input through to the socket one character at a time.\n We also disable propagation so the root logger does not\n receive many one-byte emissions. This context handler\n was originally created for streaming Compose up's\n terminal output through to the client and should only be\n used for similarly complex circumstances."} {"_id": "q_5595", "text": "This is used to compile the command that will be run when the docker container starts\n up. 
This command has to install any libs that the app uses, run the `always` command, and\n run the `once` command if the container is being launched for the first time"} {"_id": "q_5596", "text": "Raise the open file handles permitted by the Dusty daemon process\n and its child processes. The number we choose here needs to be within\n the OS X default kernel hard limit, which is 10240."} {"_id": "q_5597", "text": "Start the daemon's HTTP server on a separate thread.\n This server is only used for servicing container status\n requests from Dusty's custom 502 page."} {"_id": "q_5598", "text": "Ripped off and slightly modified based on docker-py's\n kwargs_from_env utility function."} {"_id": "q_5599", "text": "Get a list of containers associated with the list\n of services. If no services are provided, attempts to\n return all containers associated with Dusty."} {"_id": "q_5600", "text": "This function is used with `dusty up`. It will check all active repos to see if\n they are exported. If any are missing, it will replace current dusty exports with\n exports that are needed for currently active repos, and restart\n the nfs server"} {"_id": "q_5601", "text": "Our exports file will be invalid if this folder doesn't exist, and the NFS server\n will not run correctly."} {"_id": "q_5602", "text": "Given an existing consumer ID, return any new lines from the\n log since the last time the consumer was consumed."} {"_id": "q_5603", "text": "This returns a list of formatted volume specs for an app. 
These include mounts declared in the app's spec\n and mounts declared in all lib specs the app depends on"} {"_id": "q_5604", "text": "Returns a list of the formatted volume specs for a lib"} {"_id": "q_5605", "text": "Returns a list of the formatted volume mounts for all libs that an app uses"} {"_id": "q_5606", "text": "Initialize the Dusty VM if it does not already exist."} {"_id": "q_5607", "text": "Start the Dusty VM if it is not already running."} {"_id": "q_5608", "text": "Using VBoxManage is 0.5 seconds or so faster than Machine."} {"_id": "q_5609", "text": "Something in the VM chain, either VirtualBox or Machine, helpfully\n sets up localhost-to-VM forwarding on port 22. We can inspect this\n rule to determine the port on localhost which gets forwarded to\n 22 in the VM."} {"_id": "q_5610", "text": "Returns the MAC address assigned to the host-only adapter,\n using output from VBoxManage. Returned MAC address has no colons\n and is lower-cased."} {"_id": "q_5611", "text": "Given the rather-complex output from an 'ip addr show' command\n on the VM, parse the output to determine the IP address\n assigned to the interface with the given MAC."} {"_id": "q_5612", "text": "Determine the host-only IP of the Dusty VM through Virtualbox and SSH\n directly, bypassing Docker Machine. We do this because Docker Machine is\n much slower, taking about 600ms total. 
We are basically doing the same\n flow Docker Machine does in its own code."} {"_id": "q_5613", "text": "Converts a python dict to a namedtuple, saving memory."} {"_id": "q_5614", "text": "By default, return latest EOD Composite Price for a stock ticker.\n On average, each feed contains 3 data sources.\n\n Supported tickers + Available Day Ranges are here:\n https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip\n\n Args:\n ticker (string): Unique identifier for stock ticker\n startDate (string): Start of ticker range in YYYY-MM-DD format\n endDate (string): End of ticker range in YYYY-MM-DD format\n fmt (string): 'csv' or 'json'\n frequency (string): Resample frequency"} {"_id": "q_5615", "text": "Return a pandas.DataFrame of historical prices for one or more ticker symbols.\n\n By default, return latest EOD Composite Price for a list of stock tickers.\n On average, each feed contains 3 data sources.\n\n Supported tickers + Available Day Ranges are here:\n https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip\n or from the TiingoClient.list_tickers() method.\n\n Args:\n tickers (string/list): One or more unique identifiers for a stock ticker.\n startDate (string): Start of ticker range in YYYY-MM-DD format.\n endDate (string): End of ticker range in YYYY-MM-DD format.\n metric_name (string): Optional parameter specifying metric to be returned for each\n ticker. In the event of a single ticker, this is optional and if not specified\n all of the available data will be returned. 
In the event of a list of tickers,\n this parameter is required.\n frequency (string): Resample frequency (defaults to daily)."} {"_id": "q_5616", "text": "Make a local copy of the sqlite cookie database and return the new filename.\n This is necessary in case this database is still being written to while the user browses\n to avoid sqlite locking errors."} {"_id": "q_5617", "text": "Try to load cookies from all supported browsers and return combined cookiejar\n Optionally pass in a domain name to only load cookies from the specified domain"} {"_id": "q_5618", "text": "Decrypt encoded cookies"} {"_id": "q_5619", "text": "Get the application bearer token from client_id and client_secret."} {"_id": "q_5620", "text": "Make a request to the Spotify API with the current bearer credentials.\n\n Parameters\n ----------\n route : Union[tuple[str, str], Route]\n A tuple of the method and url or a :class:`Route` object.\n kwargs : Any\n keyword arguments to pass into :class:`aiohttp.ClientSession.request`"} {"_id": "q_5621", "text": "Get an album's tracks by an ID.\n\n Parameters\n ----------\n spotify_id : str\n The spotify_id to search by.\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The offset of which Spotify should start yielding from.\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code."} {"_id": "q_5622", "text": "Get a Spotify artist by their ID.\n\n Parameters\n ----------\n spotify_id : str\n The spotify_id to search by."} {"_id": "q_5623", "text": "Get an artist's tracks by their ID.\n\n Parameters\n ----------\n spotify_id : str\n The spotify_id to search by.\n include_groups : INCLUDE_GROUPS_TP\n INCLUDE_GROUPS\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. 
Maximum: 50.\n offset : Optional[int]\n The offset of which Spotify should start yielding from.\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code."} {"_id": "q_5624", "text": "Get an artist's top tracks per country with their ID.\n\n Parameters\n ----------\n spotify_id : str\n The spotify_id to search by.\n country : COUNTRY_TP\n COUNTRY"} {"_id": "q_5625", "text": "Get related artists for an artist by their ID.\n\n Parameters\n ----------\n spotify_id : str\n The spotify_id to search by."} {"_id": "q_5626", "text": "Get a single category used to tag items in Spotify.\n\n Parameters\n ----------\n category_id : str\n The Spotify category ID for the category.\n country : COUNTRY_TP\n COUNTRY\n locale : LOCALE_TP\n LOCALE"} {"_id": "q_5627", "text": "Get a list of Spotify playlists tagged with a particular category.\n\n Parameters\n ----------\n category_id : str\n The Spotify category ID for the category.\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The index of the first item to return. Default: 0\n country : COUNTRY_TP\n COUNTRY"} {"_id": "q_5628", "text": "Get a list of categories used to tag items in Spotify.\n\n Parameters\n ----------\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The index of the first item to return. Default: 0\n country : COUNTRY_TP\n COUNTRY\n locale : LOCALE_TP\n LOCALE"} {"_id": "q_5629", "text": "Get a list of Spotify featured playlists.\n\n Parameters\n ----------\n locale : LOCALE_TP\n LOCALE\n country : COUNTRY_TP\n COUNTRY\n timestamp : TIMESTAMP_TP\n TIMESTAMP\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The index of the first item to return. 
Default: 0"} {"_id": "q_5630", "text": "Get a list of new album releases featured in Spotify.\n\n Parameters\n ----------\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n offset : Optional[int]\n The index of the first item to return. Default: 0\n country : COUNTRY_TP\n COUNTRY"} {"_id": "q_5631", "text": "Get Recommendations Based on Seeds.\n\n Parameters\n ----------\n seed_artists : str\n A comma separated list of Spotify IDs for seed artists. Up to 5 seed values may be provided.\n seed_genres : str\n A comma separated list of any genres in the set of available genre seeds. Up to 5 seed values may be provided.\n seed_tracks : str\n A comma separated list of Spotify IDs for a seed track. Up to 5 seed values may be provided.\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code.\n max_* : Optional[Keyword arguments]\n For each tunable track attribute, a hard ceiling on the selected track attribute\u2019s value can be provided.\n min_* : Optional[Keyword arguments]\n For each tunable track attribute, a hard floor on the selected track attribute\u2019s value can be provided.\n target_* : Optional[Keyword arguments]\n For each of the tunable track attributes (below) a target value may be provided."} {"_id": "q_5632", "text": "Check to see if the current user is following one or more artists or other Spotify users.\n\n Parameters\n ----------\n ids : List[str]\n A comma-separated list of the artist or the user Spotify IDs to check.\n A maximum of 50 IDs can be sent in one request.\n type : Optional[str]\n The ID type: either \"artist\" or \"user\".\n Default: \"artist\""} {"_id": "q_5633", "text": "Get the albums of a Spotify artist.\n\n Parameters\n ----------\n limit : Optional[int]\n The maximum number of items to return. Default: 20. Minimum: 1. 
Maximum: 50.\n offset : Optional[int]\n The offset of which Spotify should start yielding from.\n include_groups : INCLUDE_GROUPS_TP\n INCLUDE_GROUPS\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code.\n\n Returns\n -------\n albums : List[Album]\n The albums of the artist."} {"_id": "q_5634", "text": "Get the total amount of tracks in the album.\n\n Parameters\n ----------\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code.\n\n Returns\n -------\n total : int\n The total amount of tracks."} {"_id": "q_5635", "text": "Get the user's currently playing track.\n\n Returns\n -------\n context, track : Tuple[Context, Track]\n A tuple of the context and track."} {"_id": "q_5636", "text": "Get information about the user's available devices.\n\n Returns\n -------\n devices : List[Device]\n The devices the user has available."} {"_id": "q_5637", "text": "Get tracks from the current user's recently played tracks.\n\n Returns\n -------\n playlist_history : List[Dict[str, Union[Track, Context, str]]]\n A list of playlist history objects.\n Each object is a dict with a timestamp, track and context field."} {"_id": "q_5638", "text": "Create a playlist for a Spotify user.\n\n Parameters\n ----------\n name : str\n The name of the playlist.\n public : Optional[bool]\n The public/private status of the playlist.\n `True` for public, `False` for private.\n collaborative : Optional[bool]\n If `True`, the playlist will become collaborative and other users will be able to modify the playlist.\n description : Optional[str]\n The playlist description\n\n Returns\n -------\n playlist : Playlist\n The playlist that was created."} {"_id": "q_5639", "text": "Get the album's tracks from Spotify.\n\n Parameters\n ----------\n limit : Optional[int]\n The limit on how many tracks to retrieve for this album (default is 20).\n offset : Optional[int]\n The offset from where the API should start in the tracks.\n \n Returns\n -------\n tracks : List[Track]\n The tracks of the 
album."} {"_id": "q_5640", "text": "Load all of the album's tracks; depending on how many the album has, this may be a long operation.\n\n Parameters\n ----------\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code. Provide this parameter if you want to apply Track Relinking.\n \n Returns\n -------\n tracks : List[Track]\n The tracks of the album."} {"_id": "q_5641", "text": "Retrieve an album with a Spotify ID.\n\n Parameters\n ----------\n spotify_id : str\n The ID to search for.\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code\n\n Returns\n -------\n album : Album\n The album from the ID"} {"_id": "q_5642", "text": "Retrieve a track with a Spotify ID.\n\n Parameters\n ----------\n spotify_id : str\n The ID to search for.\n\n Returns\n -------\n track : Track\n The track from the ID"} {"_id": "q_5643", "text": "Retrieve multiple albums with a list of Spotify IDs.\n\n Parameters\n ----------\n ids : List[str]\n The IDs to look for\n market : Optional[str]\n An ISO 3166-1 alpha-2 country code\n\n Returns\n -------\n albums : List[Album]\n The albums from the IDs"} {"_id": "q_5644", "text": "Retrieve multiple artists with a list of Spotify IDs.\n\n Parameters\n ----------\n ids : List[str]\n The IDs to look for\n\n Returns\n -------\n artists : List[Artist]\n The artists from the IDs"} {"_id": "q_5645", "text": "Decorator to assert an object has an attribute when run."} {"_id": "q_5646", "text": "Construct an OAuth2 object from a `spotify.Client`."} {"_id": "q_5647", "text": "Construct an OAuth2 URL instead of an OAuth2 object."} {"_id": "q_5648", "text": "Attributes used when constructing url parameters."} {"_id": "q_5649", "text": "URL parameters used."} {"_id": "q_5650", "text": "Execute the logic behind the meaning of ExpirationDate + return the matched status.\n\n :return:\n The status of the tested domain.\n Can be one of the official statuses.\n :rtype: str"} {"_id": "q_5651", "text": "Read the code and update all links."} {"_id": "q_5652", 
"text": "Check if the current version is greater than the older one."} {"_id": "q_5653", "text": "Check if the current branch is `dev`."} {"_id": "q_5654", "text": "Check if we have to put the previous version into the deprecated list."} {"_id": "q_5655", "text": "Back up the current execution state."} {"_id": "q_5656", "text": "Restore data from the given path."} {"_id": "q_5657", "text": "Check if we have to ignore the given line.\n\n :param line: The line from the file.\n :type line: str"} {"_id": "q_5658", "text": "Handle the data from the options.\n\n :param options: The list of options from the rule.\n :type options: list\n\n :return: The list of domains to return globally.\n :rtype: list"} {"_id": "q_5659", "text": "Format the extracted adblock line before passing it to the system.\n\n :param to_format: The extracted line from the file.\n :type to_format: str\n\n :param result: A list of the result of this method.\n :type result: list\n\n :return: The list of domains or IPs to test.\n :rtype: list"} {"_id": "q_5660", "text": "Return the HTTP code status.\n\n :return: The matched and formatted status code.\n :rtype: str|int|None"} {"_id": "q_5661", "text": "Check if the given domain is a subdomain.\n\n :param domain: The domain we are checking.\n :type domain: str\n\n :return: The subdomain state.\n :rtype: bool\n\n .. warning::\n If an empty or a non-string :code:`domain` is given, we return :code:`None`."} {"_id": "q_5662", "text": "Check the syntax of the given IPv4.\n\n :param ip: The IPv4 to check the syntax for.\n :type ip: str\n\n :return: The syntax validity.\n :rtype: bool\n\n .. 
warning::\n If an empty or a non-string :code:`ip` is given, we return :code:`None`."} {"_id": "q_5663", "text": "Check if the given information is a URL.\n If it is the case, it downloads the file and updates the location of the file to test.\n\n :param passed: The url passed to the system.\n :type passed: str\n\n :return: The state of the check.\n :rtype: bool"} {"_id": "q_5664", "text": "Manage the loading of the url system."} {"_id": "q_5665", "text": "Decide whether or not we print the header."} {"_id": "q_5666", "text": "Manage the database, autosave and autocontinue systems for the case that we are reading\n a file.\n\n :param current: The currently tested element.\n :type current: str\n\n :param last: The last element of the list.\n :type last: str\n\n :param status: The status of the currently tested element.\n :type status: str"} {"_id": "q_5667", "text": "Manage the case that we want to test only a domain.\n\n :param domain: The domain or IP to test.\n :type domain: str\n\n :param last_domain:\n The last domain to test if we are testing a file.\n :type last_domain: str\n\n :param return_status: Tell us if we need to return the status.\n :type return_status: bool"} {"_id": "q_5668", "text": "Manage the case that we want to test only a given url.\n\n :param url_to_test: The url to test.\n :type url_to_test: str\n\n :param last_url:\n The last url of the file we are testing\n (if it exists)\n :type last_url: str"} {"_id": "q_5669", "text": "Format the extracted domain before passing it to the system.\n\n :param extracted_domain: The extracted domain.\n :type extracted_domain: str\n\n :return: The formatted domain or IP to test.\n :rtype: str\n\n .. 
note::\n Understand by formatting the fact that we get rid\n of all the noise around the domain we want to test."} {"_id": "q_5670", "text": "Extract all non-commented lines from the file we are testing.\n\n :return: The elements to test.\n :rtype: list"} {"_id": "q_5671", "text": "Manage the case where we need to test each domain of a given file path.\n\n .. note::\n 1 domain per line."} {"_id": "q_5672", "text": "Manage the case that we have to test a file.\n\n .. note::\n 1 URL per line."} {"_id": "q_5673", "text": "Switch PyFunceble.CONFIGURATION variables to their opposite.\n\n :param variable:\n The variable name to switch.\n The variable should be an index of our configuration system.\n If we want to switch a bool variable, we should parse\n it here.\n :type variable: str|bool\n\n :param custom:\n Let us know if we have to switch the parsed variable instead\n of our configuration index.\n :type custom: bool\n\n :return:\n The opposite of the configuration index or the given variable.\n :rtype: bool\n\n :raises:\n :code:`Exception`\n When the configuration is not valid. In other words,\n if the PyFunceble.CONFIGURATION[variable_name] is not a bool."} {"_id": "q_5674", "text": "Get the status while testing for an IP or domain.\n\n .. 
note::\n We consider that the domain or IP we are currently testing\n is in :code:`PyFunceble.INTERN[\"to_test\"]`."} {"_id": "q_5675", "text": "Handle the backend of the given status."} {"_id": "q_5676", "text": "Get the structure we are going to work with.\n\n :return: The structure we have to work with.\n :rtype: dict"} {"_id": "q_5677", "text": "Creates the given directory if it does not exist.\n\n :param directory: The directory to create.\n :type directory: str\n\n :param loop: Tell us if we are in the creation loop or not.\n :type loop: bool"} {"_id": "q_5678", "text": "Delete the directories which are not registered in our structure."} {"_id": "q_5679", "text": "Set the paths to the configuration files.\n\n :param path_to_config: The possible path to the config to load.\n :type path_to_config: str\n\n :return:\n The path to the config to read (0), the path to the default\n configuration to read as fallback (1).\n :rtype: tuple"} {"_id": "q_5680", "text": "Download `public-suffix.json` if not present."} {"_id": "q_5681", "text": "Download the latest version of `dir_structure_production.json`."} {"_id": "q_5682", "text": "Execute the logic behind the merging."} {"_id": "q_5683", "text": "Convert the versions to a shorter one.\n\n :param version: The version to split.\n :type version: str\n\n :param return_non_digits:\n Activate the return of the non-digit parts of the split\n version.\n :type return_non_digits: bool\n\n :return: The split version name/numbers.\n :rtype: list"} {"_id": "q_5684", "text": "Compare the given versions.\n\n :param local: The local version converted by split_versions().\n :type local: list\n\n :param upstream: The upstream version converted by split_versions().\n :type upstream: list\n\n :return:\n - True: local < upstream\n - None: local == upstream\n - False: local > upstream\n :rtype: bool|None"} {"_id": "q_5685", "text": "Let us know if we are currently in the cloned version of\n PyFunceble which implicitly means that we 
are in development mode."} {"_id": "q_5686", "text": "Handle and check that some configuration index exists."} {"_id": "q_5687", "text": "Generate a unified file. Understand by that that we use a unified table\n instead of a separate table for each status, which could result in a\n misunderstanding."} {"_id": "q_5688", "text": "Generate a file according to the domain status."} {"_id": "q_5689", "text": "Check if we are allowed to produce a file based on the given\n information.\n\n :return:\n The state of the production.\n True: We do not produce a file.\n False: We do produce a file.\n :rtype: bool"} {"_id": "q_5690", "text": "Implement the standard and alphabetical sorting.\n\n :param element: The element we are currently reading.\n :type element: str\n\n :return: The formatted element.\n :rtype: str"} {"_id": "q_5691", "text": "The idea behind this method is to sort a list of domains hierarchically.\n\n :param element: The element we are currently reading.\n :type element: str\n\n :return: The formatted element.\n :rtype: str\n\n .. note::\n For a domain like :code:`aaa.bbb.ccc.tdl`.\n\n A normal sorting is done in the following order:\n 1. :code:`aaa`\n 2. :code:`bbb`\n 3. :code:`ccc`\n 4. :code:`tdl`\n\n This method allows the sorting to be done in the following order:\n 1. :code:`tdl`\n 2. :code:`ccc`\n 3. :code:`bbb`\n 4. 
:code:`aaa`"} {"_id": "q_5692", "text": "Initiate the IANA database if that is not already the case."} {"_id": "q_5693", "text": "Extract the extension from the given block.\n Plus get its referer."} {"_id": "q_5694", "text": "Update the content of the `iana-domains-db` file."} {"_id": "q_5695", "text": "Retrieve the mining information."} {"_id": "q_5696", "text": "Back up the mined information."} {"_id": "q_5697", "text": "Remove the currently tested element from the mining\n data."} {"_id": "q_5698", "text": "Provide the list of mined domains or URLs so they can be added to the list\n queue.\n\n :return: The list of mined domains or URLs.\n :rtype: list"} {"_id": "q_5699", "text": "Get and return the content of the given log file.\n\n :param file: The file we have to get the content from.\n :type file: str\n\n :return: The content of the given file.\n :rtype: dict"} {"_id": "q_5700", "text": "Write the content into the given file.\n\n :param content: The dict to write.\n :type content: dict\n\n :param file: The file to write.\n :type file: str"} {"_id": "q_5701", "text": "Logs the case that the referer was not found.\n\n :param extension: The extension of the domain we are testing.\n :type extension: str"} {"_id": "q_5702", "text": "Construct the header of the table according to the template.\n\n :param data_to_print:\n The list of data to print into the header of the table.\n :type data_to_print: list\n\n :param header_separator:\n The separator to use between the table header and our data.\n :type header_separator: str\n\n :param colomn_separator: The separator to use between each column.\n :type colomn_separator: str\n\n :return: The data to print in a list format.\n :rtype: list"} {"_id": "q_5703", "text": "Management and creation of templates of header.\n Please consider as \"header\" the title of each column.\n\n :param do_not_print:\n Tell us if we have to print the header or not.\n :type do_not_print: bool"} {"_id": "q_5704", "text": "Construct the table of data according to the given 
size.\n\n :param size: The maximal length of each string in the table.\n :type size: list\n\n :return:\n A dict with all information about the data and the maximal size\n at which to print it.\n :rtype: OrderedDict\n\n :raises:\n :code:`Exception`\n If the data and the size do not have the same length."} {"_id": "q_5705", "text": "Get the size of each column from the header.\n\n :param header:\n The header template we have to get the size from.\n :type header: dict\n\n :return: The maximal size of each data element to print.\n :rtype: list"} {"_id": "q_5706", "text": "Management and input of data to the table.\n\n :raises:\n :code:`Exception`\n When self.data_to_print is not a list."} {"_id": "q_5707", "text": "Save the current time to the file.\n\n :param last:\n Tell us if we are at the very end of the file testing.\n :type last: bool"} {"_id": "q_5708", "text": "Set the database files to delete."} {"_id": "q_5709", "text": "Delete almost all discovered files.\n\n :param clean_all:\n Tell the subsystem if we have to clean everything instead\n of almost everything.\n :type clean_all: bool"} {"_id": "q_5710", "text": "Get the hash of the given data.\n\n :param algo: The algorithm to use.\n :type algo: str"} {"_id": "q_5711", "text": "Return the hash of the given file."} {"_id": "q_5712", "text": "Remove a given key from a given dictionary.\n\n :param key_to_remove: The key(s) to delete.\n :type key_to_remove: list|str\n\n :return: The dict without the given key(s).\n :rtype: dict|None"} {"_id": "q_5713", "text": "Rename the given keys from the given dictionary.\n\n :param key_to_rename:\n The key(s) to rename.\n Expected format: :code:`{old:new}`\n :type key_to_rename: dict\n\n :param strict:\n Tell us if we have to rename the exact index or\n the index which looks like the given key(s)\n\n :return: The well formatted dict.\n :rtype: dict|None"} {"_id": "q_5714", "text": "Merge the content of to_merge into the given main dictionary.\n\n :param to_merge: The 
dictionary to merge.\n :type to_merge: dict\n\n :param strict:\n Tell us if we have to strictly merge lists.\n\n :code:`True`: We follow index\n :code:`False`: We follow element (content)\n :type strict: bool\n\n :return: The merged dict.\n :rtype: dict"} {"_id": "q_5715", "text": "Save a dictionary into a JSON file.\n\n :param destination:\n A path to a file where we're going to\n write the converted dict into a JSON format.\n :type destination: str"} {"_id": "q_5716", "text": "Fix the path of the given path.\n\n :param splited_path: A list to convert to the right path.\n :type splited_path: list\n\n :return: The fixed path.\n :rtype: str"} {"_id": "q_5717", "text": "Read a given file path and return its content.\n\n :return: The content of the given file path.\n :rtype: str"} {"_id": "q_5718", "text": "Return a well formatted list. Basically, it sorts a list and removes duplicates.\n\n :return: A sorted list without duplicates.\n :rtype: list"} {"_id": "q_5719", "text": "Return a list of strings which don't match the\n given regex."} {"_id": "q_5720", "text": "Used to get an exploitable result of re.search.\n\n :return: The data of the match status.\n :rtype: mixed"} {"_id": "q_5721", "text": "Used to replace a matched string with another.\n\n :return: The data after replacement.\n :rtype: str"} {"_id": "q_5722", "text": "Print on screen and on file the percentages for each status."} {"_id": "q_5723", "text": "Check if the given URL is valid.\n\n :param url: The url to validate.\n :type url: str\n\n :param return_base:\n Allow us the return of the url base (if URL formatted correctly).\n :type return_base: bool\n\n :param return_formatted:\n Allow us to get the URL converted to IDNA if the conversion\n is activated.\n :type return_formatted: bool\n\n\n :return: The validity of the URL or its base.\n :rtype: bool|str"} {"_id": "q_5724", "text": "Check if the given domain is a subdomain.\n\n :param domain: The domain to validate.\n :type domain: str\n\n :return: 
The validity of the subdomain.\n :rtype: bool"} {"_id": "q_5725", "text": "Execute the logic behind the Syntax handling.\n\n :return: The syntax status.\n :rtype: str"} {"_id": "q_5726", "text": "Return the current content of the inactive-db.json file."} {"_id": "q_5727", "text": "Save the current database into the inactive-db.json file."} {"_id": "q_5728", "text": "Get the timestamp where we are going to save our current list.\n\n :return: The timestamp to append with the currently tested element.\n :rtype: int|str"} {"_id": "q_5729", "text": "Get the content of the database.\n\n :return: The content of the database.\n :rtype: list"} {"_id": "q_5730", "text": "Check if the currently tested element is in the database."} {"_id": "q_5731", "text": "Back up the database into its file."} {"_id": "q_5732", "text": "Check if the current time is older than the one in the database."} {"_id": "q_5733", "text": "Implementation of UNIX whois.\n\n :param whois_server: The WHOIS server to use to get the record.\n :type whois_server: str\n\n :param domain: The domain to get the whois record from.\n :type domain: str\n\n :param timeout: The timeout to apply to the request.\n :type timeout: int\n\n :return: The whois record from the given whois server, if it exists.\n :rtype: str|None"} {"_id": "q_5734", "text": "Execute the logic behind the URL handling.\n\n :return: The status of the URL.\n :rtype: str"} {"_id": "q_5735", "text": "Return the referer, aka the WHOIS server, of the current domain extension."} {"_id": "q_5736", "text": "docstring for _randone"} {"_id": "q_5737", "text": "Wrapper for Zotero._cleanup"} {"_id": "q_5738", "text": "Add a retrieved template to the cache for 304 checking;\n accepts a dict and key name, adds the retrieval time, and adds both\n to self.templates as a new dict using the specified key"} {"_id": "q_5739", "text": "Remove keys we added for internal use"} {"_id": "q_5740", "text": "Return the contents of My Publications"} {"_id": "q_5741", "text": 
"Return the total number of items in the specified collection"} {"_id": "q_5742", "text": "Return the total number of items for the specified tag"} {"_id": "q_5743", "text": "General method for returning total counts"} {"_id": "q_5744", "text": "Retrieve info about the permissions associated with the\n key associated with the given Zotero instance"} {"_id": "q_5745", "text": "Get the last modified version"} {"_id": "q_5746", "text": "Retrieve all collections and subcollections. Works for top-level collections\n or for a specific collection. Works at all collection depths."} {"_id": "q_5747", "text": "Get subcollections for a specific collection"} {"_id": "q_5748", "text": "Retrieve all items in the library for a particular query.\n This method will override the 'limit' parameter if it's been set"} {"_id": "q_5749", "text": "Return a list of dicts which are dumped CSL JSON"} {"_id": "q_5750", "text": "Return a list of strings formatted as HTML citation entries"} {"_id": "q_5751", "text": "Get a template for a new item"} {"_id": "q_5752", "text": "Create attachments\n accepts a list of one or more attachment template dicts\n and an optional parent Item ID. 
If this is specified,\n attachments are created under this ID"} {"_id": "q_5753", "text": "Delete one or more saved searches by passing a list of one or more\n unique search keys"} {"_id": "q_5754", "text": "Add one or more tags to a retrieved item,\n then update it on the server\n Accepts a dict, and one or more tags to add to it\n Returns the updated item from the server"} {"_id": "q_5755", "text": "Update an existing item\n Accepts one argument, a dict containing Item data"} {"_id": "q_5756", "text": "Update existing items\n Accepts one argument, a list of dicts containing Item data"} {"_id": "q_5757", "text": "Validate saved search conditions, raising an error if any contain invalid operators"} {"_id": "q_5758", "text": "Split a multiline string into a list, excluding blank lines."} {"_id": "q_5759", "text": "Split a string with comma or space-separated elements into a list."} {"_id": "q_5760", "text": "Evaluate environment markers."} {"_id": "q_5761", "text": "Get configuration value."} {"_id": "q_5762", "text": "Set configuration value."} {"_id": "q_5763", "text": "Compatibility helper to use setup.cfg in setup.py."} {"_id": "q_5764", "text": "Get LanguageTool version."} {"_id": "q_5765", "text": "Get supported languages."} {"_id": "q_5766", "text": "Set LanguageTool directory."} {"_id": "q_5767", "text": "Match text against enabled rules."} {"_id": "q_5768", "text": "Return newest compatible version.\n\n >>> version = get_newest_possible_languagetool_version()\n >>> version in [JAVA_6_COMPATIBLE_VERSION,\n ... JAVA_7_COMPATIBLE_VERSION,\n ... 
LATEST_VERSION]\n True"} {"_id": "q_5769", "text": "Get common directory in a zip file if any."} {"_id": "q_5770", "text": "Make a Qt async slot run on asyncio loop."} {"_id": "q_5771", "text": "Class decorator to add a logger to a class."} {"_id": "q_5772", "text": "Selector has delivered us an event."} {"_id": "q_5773", "text": "Add more ASN.1 MIB source repositories.\n\n MibCompiler.compile will invoke each of configured source objects\n in order of their addition asking each to fetch MIB module specified\n by name.\n\n Args:\n sources: reader object(s)\n\n Returns:\n reference to itself (can be used for call chaining)"} {"_id": "q_5774", "text": "Add more transformed MIBs repositories to borrow MIBs from.\n\n Whenever MibCompiler.compile encounters MIB module which neither of\n the *searchers* can find or fetched ASN.1 MIB module can not be\n parsed (due to syntax errors), these *borrowers* objects will be\n invoked in order of their addition asking each if already transformed\n MIB can be fetched (borrowed).\n\n Args:\n borrowers: borrower object(s)\n\n Returns:\n reference to itself (can be used for call chaining)"} {"_id": "q_5775", "text": "Get current object.\n This is useful if you want the real\n object behind the proxy at a time for performance reasons or because\n you want to pass the object into a different context."} {"_id": "q_5776", "text": "r\"\"\"Kullback information criterion\n\n .. math:: KIC(k) = log(\\rho_k) + 3 \\frac{k+1}{N}\n\n :validation: double checked versus octave."} {"_id": "q_5777", "text": "r\"\"\"approximate corrected Kullback information\n\n .. math:: AKICc(k) = log(rho_k) + \\frac{p}{N*(N-k)} + (3-\\frac{k+2}{N})*\\frac{k+1}{N-k-2}"} {"_id": "q_5778", "text": "r\"\"\"Final prediction error criterion\n\n .. math:: FPE(k) = \\frac{N + k + 1}{N - k - 1} \\rho_k\n\n :validation: double checked versus octave."} {"_id": "q_5779", "text": "r\"\"\"Minimum Description Length\n\n .. 
math:: MDL(k) = N log \\rho_k + p \\log N\n\n :validation: results"} {"_id": "q_5780", "text": "Generate the Main examples gallery reStructuredText\n\n Start the sphinx-gallery configuration and recursively scan the examples\n directories in order to populate the examples gallery"} {"_id": "q_5781", "text": "Setup sphinx-gallery sphinx extension"} {"_id": "q_5782", "text": "r\"\"\"Correlation function\n\n This function should give the same results as :func:`xcorr` but it\n returns the positive lags only. Moreover the algorithm does not use\n FFT as compared to other algorithms.\n\n :param array x: first data array of length N\n :param array y: second data array of length N. If not specified, computes the\n autocorrelation.\n :param int maxlags: compute cross correlation between [0:maxlags]\n when maxlags is not specified, the range of lags is [0:N-1].\n :param str norm: normalisation in ['biased', 'unbiased', None, 'coeff']\n\n * *biased* correlation=raw/N,\n * *unbiased* correlation=raw/(N-`|lag|`)\n * *coeff* correlation=raw/(rms(x).rms(y))/N\n * None correlation=raw\n\n :return:\n * a numpy.array correlation sequence, r[1,N]\n * a float for the zero-lag correlation, r[0]\n\n The *unbiased* correlation has the form:\n\n .. math::\n\n \\hat{r}_{xx} = \\frac{1}{N-m}T \\sum_{n=0}^{N-m-1} x[n+m]x^*[n] T\n\n The *biased* correlation differs by the front factor only:\n\n .. math::\n\n \\check{r}_{xx} = \\frac{1}{N}T \\sum_{n=0}^{N-m-1} x[n+m]x^*[n] T\n\n with :math:`0\\leq m\\leq N-1`.\n\n .. doctest::\n\n >>> from spectrum import CORRELATION\n >>> x = [1,2,3,4,5]\n >>> res = CORRELATION(x,x, maxlags=0, norm='biased')\n >>> res[0]\n 11.0\n\n .. note:: this function should be replaced by :func:`xcorr`.\n\n .. seealso:: :func:`xcorr`"} {"_id": "q_5783", "text": "Cross-correlation using numpy.correlate\n\n Estimates the cross-correlation (and autocorrelation) sequence of a random\n process of length N. 
By default, there is no normalisation and the output\n sequence of the cross-correlation has a length 2*N+1.\n\n :param array x: first data array of length N\n :param array y: second data array of length N. If not specified, computes the\n autocorrelation.\n :param int maxlags: compute cross correlation between [-maxlags:maxlags]\n when maxlags is not specified, the range of lags is [-N+1:N-1].\n :param str option: normalisation in ['biased', 'unbiased', None, 'coeff']\n\n The true cross-correlation sequence is\n\n .. math:: r_{xy}[m] = E(x[n+m].y^*[n]) = E(x[n].y^*[n-m])\n\n However, in practice, only a finite segment of one realization of the\n infinite-length random process is available.\n\n The correlation is estimated using numpy.correlate(x,y,'full').\n Normalisation is handled by this function using the following cases:\n\n * 'biased': Biased estimate of the cross-correlation function\n * 'unbiased': Unbiased estimate of the cross-correlation function\n * 'coeff': Normalizes the sequence so the autocorrelations at zero\n lag is 1.0.\n\n :return:\n * a numpy.array containing the cross-correlation sequence (length 2*N-1)\n * lags vector\n\n .. note:: If x and y are not the same length, the shorter vector is\n zero-padded to the length of the longer vector.\n\n .. rubric:: Examples\n\n .. doctest::\n\n >>> from spectrum import xcorr\n >>> x = [1,2,3,4,5]\n >>> c, l = xcorr(x,x, maxlags=0, norm='biased')\n >>> c\n array([ 11.])\n\n .. seealso:: :func:`CORRELATION`."} {"_id": "q_5784", "text": "Finds the minimum eigenvalue of a Hermitian Toeplitz matrix\n\n The classical power method is used together with a fast Toeplitz\n equation solution routine. 
The eigenvector is normalized to unit length.\n\n :param T0: Scalar corresponding to real matrix element t(0)\n :param T: Array of M complex matrix elements t(1),...,t(M) C from the left column of the Toeplitz matrix\n :param TOL: Real scalar tolerance; routine exits when [ EVAL(k) - EVAL(k-1) ]/EVAL(k-1) < TOL , where the index k denotes the iteration number.\n\n :return:\n * EVAL - Real scalar denoting the minimum eigenvalue of matrix\n * EVEC - Array of M complex eigenvector elements associated\n\n\n .. note::\n * External array T must be dimensioned >= M\n * array EVEC must be >= M+1\n * Internal array E must be dimensioned >= M+1 . \n\n * **dependencies**\n * :meth:`spectrum.toeplitz.HERMTOEP`"} {"_id": "q_5785", "text": "r\"\"\"Generate the Morlet waveform\n\n\n The Morlet waveform is defined as follows:\n\n .. math:: w[x] = \\cos{5x} \\exp^{-x^2/2}\n\n :param lb: lower bound\n :param ub: upper bound\n :param int n: waveform data samples\n\n\n .. plot::\n :include-source:\n :width: 80%\n\n from spectrum import morlet\n from pylab import plot\n plot(morlet(0,10,100))"} {"_id": "q_5786", "text": "convert reflection coefficients to prediction filter polynomial\n\n :param k: reflection coefficients"} {"_id": "q_5787", "text": "Convert reflection coefficients to log area ratios.\n\n :param k: reflection coefficients\n :return: inverse sine parameters\n\n The log area ratio is defined by G = log((1+k)/(1-k)) , where the K\n parameter is the reflection coefficient.\n\n .. seealso:: :func:`lar2rc`, :func:`rc2poly`, :func:`rc2ac`, :func:`rc2ic`.\n\n :References:\n [1] J. Makhoul, \"Linear Prediction: A Tutorial Review,\" Proc. IEEE, Vol.63, No.4, pp.561-580, Apr 1975."} {"_id": "q_5788", "text": "Convert log area ratios to reflection coefficients.\n\n :param g: log area ratios\n :returns: the reflection coefficients\n\n .. seealso: :func:`rc2lar`, :func:`poly2rc`, :func:`ac2rc`, :func:`is2rc`.\n\n :References:\n [1] J. 
Makhoul, \"Linear Prediction: A Tutorial Review,\" Proc. IEEE, Vol.63, No.4, pp.561-580, Apr 1975."} {"_id": "q_5789", "text": "Convert line spectral frequencies to prediction filter coefficients\n\n Returns a vector a containing the prediction filter coefficients from a vector lsf of line spectral frequencies.\n\n .. doctest::\n\n >>> from spectrum import lsf2poly\n >>> lsf = [0.7842 , 1.5605 , 1.8776 , 1.8984, 2.3593]\n >>> a = lsf2poly(lsf)\n\n # array([ 1.00000000e+00, 6.14837835e-01, 9.89884967e-01,\n # 9.31594056e-05, 3.13713832e-03, -8.12002261e-03 ])\n\n .. seealso:: poly2lsf, rc2poly, ac2poly, rc2is"} {"_id": "q_5790", "text": "Prediction polynomial to line spectral frequencies.\n\n Converts the prediction polynomial specified by A\n into the corresponding line spectral frequencies, LSF,\n after normalizing the prediction polynomial by A(1).\n\n .. doctest::\n\n >>> from spectrum import poly2lsf\n >>> a = [1.0000, 0.6149, 0.9899, 0.0000 ,0.0031, -0.0082]\n >>> lsf = poly2lsf(a)\n\n # lsf is approximately array([0.7842, 1.5605, 1.8776, 1.8984, 2.3593])\n\n ..
seealso:: lsf2poly, poly2rc, poly2ac, rc2is"} {"_id": "q_5791", "text": "Convert a one-sided PSD to a twosided PSD\n\n In order to keep the power in the twosided PSD the same\n as in the onesided version, the twosided values are half\n the input values (except for the zero-lag and N-lag\n values).\n\n ::\n\n >>> onesided_2_twosided([10, 4, 6, 8])\n array([ 10., 2., 3., 3., 2., 8.])"} {"_id": "q_5792", "text": "Convert a two-sided PSD to a one-sided PSD\n\n In order to keep the power in the onesided PSD the same\n as in the twosided version, the onesided values are twice\n the input values (except for the zero-lag value).\n\n ::\n\n >>> twosided_2_onesided([10, 2, 3, 3, 2, 8])\n array([ 10., 4., 6., 8.])"} {"_id": "q_5793", "text": "Convert a two-sided PSD to a center-dc PSD"} {"_id": "q_5794", "text": "Convert a center-dc PSD to a twosided PSD"} {"_id": "q_5795", "text": "A simple test example with two close frequencies"} {"_id": "q_5796", "text": "Plot the data set, using the sampling information to set the x-axis\n correctly."} {"_id": "q_5797", "text": "Returns the autocovariance of signal s at all lags.\n\n Adheres to the definition\n sxx[k] = E{S[n]S[n+k]} = cov{S[n],S[n+k]}\n where E{} is the expectation operator, and S is a zero mean process"} {"_id": "q_5798", "text": "Separate `filename` content between docstring and the rest\n\n Strongly inspired from ast.get_docstring.\n\n Returns\n -------\n docstring: str\n docstring of `filename`\n rest: str\n `filename` content without the docstring"} {"_id": "q_5799", "text": "Returns md5sum of file"} {"_id": "q_5800", "text": "Returns True if src_file has a different md5sum"} {"_id": "q_5801", "text": "Test existence of image file and no change in md5sum of\n example"} {"_id": "q_5802", "text": "Save all open matplotlib figures of the example code-block\n\n Parameters\n ----------\n image_path : str\n Path where plots are saved (format string which accepts figure number)\n fig_count : int\n
Previous figure number count. Figure numbering starts from this number\n\n Returns\n -------\n list of strings containing the full path to each figure"} {"_id": "q_5803", "text": "Save the thumbnail image"} {"_id": "q_5804", "text": "Executes the code block of the example file"} {"_id": "q_5805", "text": "This function solves Ax=B directly without taking care of the input\n matrix properties."} {"_id": "q_5806", "text": "Simple periodogram, but matrices accepted.\n\n :param x: an array or matrix of data samples.\n :param NFFT: length of the data before FFT is computed (zero padding)\n :param bool detrend: detrend the data before computing the FFT\n :param float sampling: sampling frequency of the input :attr:`data`.\n\n :param scale_by_freq:\n :param str window:\n\n :return: 2-sided PSD if complex data, 1-sided if real.\n\n If a matrix is provided (using numpy.matrix), then a periodogram\n is computed for each row. The returned matrix has the same shape as the input\n matrix.\n\n The mean of the input data is also removed from the data before computing\n the psd.\n\n .. plot::\n :width: 80%\n :include-source:\n\n from pylab import grid, semilogy\n from spectrum import data_cosine, speriodogram\n data = data_cosine(N=1024, A=0.1, sampling=1024, freq=200)\n semilogy(speriodogram(data, detrend=False, sampling=1024), marker='o')\n grid(True)\n\n\n .. plot::\n :width: 80%\n :include-source:\n\n import numpy\n from spectrum import speriodogram, data_cosine\n from pylab import figure, semilogy, imshow\n # create N data sets and make the frequency dependent on the time\n N = 100\n m = numpy.concatenate([data_cosine(N=1024, A=0.1, sampling=1024, freq=x) \n for x in range(1, N)]);\n m.resize(N, 1024)\n res = speriodogram(m)\n figure(1)\n semilogy(res)\n figure(2)\n imshow(res.transpose(), aspect='auto')\n\n ..
todo:: a proper spectrogram class/function that takes care of normalisation"} {"_id": "q_5807", "text": "r\"\"\"Simple periodogram wrapper of numpy.psd function.\n\n :param A: the input data\n :param int NFFT: total length of the final data sets (padded \n with zero if needed; default is 4096)\n :param str window:\n\n :Technical documentation:\n\n When we calculate the periodogram of a set of data we get an estimation\n of the spectral density. In fact, since we use a Fourier transform and\n truncated segments, the spectrum is the convolution of the data with a\n rectangular window whose Fourier transform is\n\n .. math::\n\n W(s)= \\frac{1}{N^2} \\left[ \\frac{\\sin(\\pi s)}{\\sin(\\pi s/N)} \\right]^2\n\n Thus oscillations and sidelobes appear around the main frequency. One aim of the tapering is to reduce these effects. We multiply the data by a window whose sidelobes are much smaller than the main lobe. A classical choice is the Hanning window, but other windows are available. However, we must take this energy into account and divide the spectrum by the energy of the taper used. Thus the periodogram becomes:\n\n .. math::\n\n D_k \\equiv \\sum_{j=0}^{N-1}c_jw_j \\; e^{2\\pi ijk/N} \\qquad k=0,...,N-1\n\n .. math::\n\n P(0)=P(f_0)=\\frac{1}{2\\pi W_{ss}}\\arrowvert{D_0}\\arrowvert^2\n\n .. math::\n\n P(f_k)=\\frac{1}{2\\pi W_{ss}} \\left[\\arrowvert{D_k}\\arrowvert^2+\\arrowvert{D_{N-k}}\\arrowvert^2\\right] \\qquad k=1,..., \\left( \\frac{N}{2}-1 \\right)\n\n .. math::\n\n P(f_c)=P(f_{N/2})= \\frac{1}{2\\pi W_{ss}} \\arrowvert{D_{N/2}}\\arrowvert^2\n\n with\n\n .. math::\n\n {W_{ss}} \\equiv N\\sum_{j=0}^{N-1}w_j^2\n\n\n ..
plot::\n :width: 80%\n :include-source:\n\n from spectrum import WelchPeriodogram, marple_data\n psd = WelchPeriodogram(marple_data, 256)"} {"_id": "q_5808", "text": "Return the centered frequency range as a generator.\n\n ::\n\n >>> print(list(Range(8).centerdc_gen()))\n [-0.5, -0.375, -0.25, -0.125, 0.0, 0.125, 0.25, 0.375]"} {"_id": "q_5809", "text": "Return the one-sided frequency range as a generator.\n\n If :attr:`N` is even, the length is N/2 + 1.\n If :attr:`N` is odd, the length is (N+1)/2.\n\n ::\n\n >>> print(list(Range(8).onesided()))\n [0.0, 0.125, 0.25, 0.375, 0.5]\n >>> print(list(Range(9).onesided()))\n [0.0, 0.1111, 0.2222, 0.3333, 0.4444]"} {"_id": "q_5810", "text": "r\"\"\"Return the power contained in the PSD\n\n if scale_by_freq is False, the power is:\n\n .. math:: P = N \\sum_{k=1}^{N} P_{xx}(k)\n\n else, it is\n\n .. math:: P = \\sum_{k=1}^{N} P_{xx}(k) \\frac{df}{2\\pi}\n\n .. todo:: check these equations"} {"_id": "q_5811", "text": "Returns a dictionary with the elements of a Jupyter notebook"} {"_id": "q_5812", "text": "Converts the RST text from the examples docstrings and comments\n into markdown text for the IPython notebooks"} {"_id": "q_5813", "text": "Saves the notebook to a file"} {"_id": "q_5814", "text": "Autoregressive and moving average estimators.\n\n This function provides an estimate of the autoregressive\n parameters, the moving average parameters, and the driving\n white noise variance of an ARMA(P,Q) process for a complex or real data sequence.\n\n The parameters are estimated using three steps:\n\n * Estimate the AR parameters from the original data based on a least\n squares modified Yule-Walker technique,\n * Produce a residual time sequence by filtering the original data\n with a filter based on the AR parameters,\n * Estimate the MA parameters from the residual time sequence.\n\n :param array X: Array of data samples (length N)\n :param int P: Desired number of AR parameters\n :param int Q: Desired number of MA parameters\n
:param int lag: Maximum lag to use for autocorrelation estimates\n\n :return:\n * A - Array of complex P AR parameter estimates\n * B - Array of complex Q MA parameter estimates\n * RHO - White noise variance estimate\n\n .. note::\n * lag must be >= Q (MA order)\n\n **dependencies**:\n * :meth:`spectrum.correlation.CORRELATION`\n * :meth:`spectrum.covar.arcovar`\n * :meth:`spectrum.arma.ma`\n\n .. plot::\n :width: 80%\n :include-source:\n\n from spectrum import arma_estimate, arma2psd, marple_data\n import pylab\n\n a,b, rho = arma_estimate(marple_data, 15, 15, 30)\n psd = arma2psd(A=a, B=b, rho=rho, sides='centerdc', norm=True)\n pylab.plot(10 * pylab.log10(psd))\n pylab.ylim([-50,0])\n\n :reference: [Marple]_"} {"_id": "q_5815", "text": "Moving average estimator.\n\n This program provides an estimate of the moving average parameters\n and driving noise variance for a data sequence based on a\n long AR model and a least squares fit.\n\n :param array X: The input data array\n :param int Q: Desired MA model order (must be >0 and >> code = '''\n ... from a.b import c\n ... import d as e\n ... print(c)\n ... e.HelloWorld().f.g\n ... '''\n >>> for name, o in sorted(identify_names(code).items()):\n ... print(name, o['name'], o['module'], o['module_short'])\n c c a.b a.b\n e.HelloWorld HelloWorld d d"} {"_id": "q_5843", "text": "Generates RST to place a thumbnail in a gallery"} {"_id": "q_5844", "text": "r\"\"\"Compute AR coefficients using Yule-Walker method\n\n :param X: Array of complex data values, X(1) to X(N)\n :param int order: Order of autoregressive process to be fitted (integer)\n :param str norm: Use a biased or unbiased correlation.\n :param bool allow_singularity:\n\n :return:\n * AR coefficients (complex)\n * variance of white noise (Real)\n * reflection coefficients for use in lattice filter\n\n .. 
rubric:: Description:\n\n The Yule-Walker method returns the polynomial A corresponding to the\n AR parametric signal model estimate of vector X using the Yule-Walker\n (autocorrelation) method. The autocorrelation may be computed using a\n **biased** or **unbiased** estimation. In practice, the biased estimate of\n the autocorrelation is used for the unknown true autocorrelation. Indeed,\n an unbiased estimate may result in a nonpositive-definite autocorrelation\n matrix.\n So, a biased estimate leads to a stable AR filter.\n The following matrix form represents the Yule-Walker equations. They are\n solved by means of the Levinson-Durbin recursion:\n\n .. math::\n\n \\left( \\begin{array}{cccc}\n r(1) & r(2)^* & \\dots & r(n)^*\\\\\n r(2) & r(1)^* & \\dots & r(n-1)^*\\\\\n \\dots & \\dots & \\dots & \\dots\\\\\n r(n) & \\dots & r(2) & r(1) \\end{array} \\right)\n \\left( \\begin{array}{cccc}\n a(2)\\\\\n a(3) \\\\\n \\dots \\\\\n a(n+1) \\end{array} \\right)\n =\n \\left( \\begin{array}{cccc}\n -r(2)\\\\\n -r(3) \\\\\n \\dots \\\\\n -r(n+1) \\end{array} \\right)\n\n The output consists of the AR coefficients, the estimated variance of the\n white noise process, and the reflection coefficients. These outputs can be\n used to estimate the optimal order by using :mod:`~spectrum.criteria`.\n\n .. rubric:: Examples:\n\n From a known AR process of order 4, we estimate its AR parameters using\n the aryule function.\n\n .. doctest::\n\n >>> from scipy.signal import lfilter\n >>> from spectrum import *\n >>> from numpy.random import randn\n >>> A =[1, -2.7607, 3.8106, -2.6535, 0.9238]\n >>> noise = randn(1, 1024)\n >>> y = lfilter([1], A, noise);\n >>> #filter a white noise input to create AR(4) process\n >>> [ar, var, reflec] = aryule(y[0], 4)\n >>> # ar should contain values similar to A\n\n The PSD estimate of the data samples is computed and plotted as follows:\n\n ..
plot::\n :width: 80%\n :include-source:\n\n from spectrum import *\n from pylab import *\n\n ar, P, k = aryule(marple_data, 15, norm='biased')\n psd = arma2psd(ar)\n plot(linspace(-0.5, 0.5, 4096), 10 * log10(psd/max(psd)))\n axis([-0.5, 0.5, -60, 0])\n\n .. note:: The outputs have been double checked against (1) octave outputs\n (octave has norm='biased' by default) and (2) Marple test code.\n\n .. seealso:: This function uses :func:`~spectrum.levinson.LEVINSON` and\n :func:`~spectrum.correlation.CORRELATION`. See the :mod:`~spectrum.criteria`\n module for criteria to automatically select the AR order.\n\n :References: [Marple]_"} {"_id": "q_5845", "text": "r\"\"\"Levinson-Durbin recursion.\n\n Find the coefficients of a length(r)-1 order autoregressive linear process\n\n :param r: autocorrelation sequence of length N + 1 (first element being the zero-lag autocorrelation)\n :param order: requested order of the autoregressive coefficients. default is N.\n :param allow_singularity: false by default. Other implementations may default to True (e.g., octave)\n\n :return:\n * the `N+1` autoregressive coefficients :math:`A=(1, a_1...a_N)`\n * the prediction errors\n * the `N` reflection coefficients\n\n This algorithm solves the set of complex linear simultaneous equations\n using the Levinson algorithm.\n\n .. math::\n\n \\bold{T}_M \\left( \\begin{array}{c} 1 \\\\ \\bold{a}_M \\end{array} \\right) =\n \\left( \\begin{array}{c} \\rho_M \\\\ \\bold{0}_M \\end{array} \\right)\n\n where :math:`\\bold{T}_M` is a Hermitian Toeplitz matrix with elements\n :math:`T_0, T_1, \\dots ,T_M`.\n\n .. note:: Solving these equations by Gaussian elimination would\n require :math:`M^3` operations whereas the Levinson algorithm\n requires :math:`M^2+M` additions and :math:`M^2+M` multiplications.\n\n This is equivalent to solving the following symmetric Toeplitz system of\n linear equations\n\n ..
math::\n\n \\left( \\begin{array}{cccc}\n r_1 & r_2^* & \\dots & r_{n}^*\\\\\n r_2 & r_1^* & \\dots & r_{n-1}^*\\\\\n \\dots & \\dots & \\dots & \\dots\\\\\n r_n & \\dots & r_2 & r_1 \\end{array} \\right)\n \\left( \\begin{array}{cccc}\n a_2\\\\\n a_3 \\\\\n \\dots \\\\\n a_{N+1} \\end{array} \\right)\n =\n \\left( \\begin{array}{cccc}\n -r_2\\\\\n -r_3 \\\\\n \\dots \\\\\n -r_{N+1} \\end{array} \\right)\n\n where :math:`r = (r_1 ... r_{N+1})` is the input autocorrelation vector, and\n :math:`r_i^*` denotes the complex conjugate of :math:`r_i`. The input r is typically\n a vector of autocorrelation coefficients where lag 0 is the first\n element :math:`r_1`.\n\n\n .. doctest::\n\n >>> import numpy; from spectrum import LEVINSON\n >>> T = numpy.array([3., -2+0.5j, .7-1j])\n >>> a, e, k = LEVINSON(T)"} {"_id": "q_5846", "text": "computes the autocorrelation coefficients, R based\n on the prediction polynomial A and the final prediction error Efinal,\n using the stepdown algorithm.\n\n Works for real or complex data\n\n :param a:\n :param efinal:\n\n :return:\n * R, the autocorrelation\n * U prediction coefficient\n * kr reflection coefficients\n * e errors\n\n A should be a minimum phase polynomial and A(1) is assumed to be unity.\n\n :returns: (P+1) by (P+1) upper triangular matrix, U,\n that holds the i'th order prediction polynomials\n Ai, i=1:P, where P is the order of the input\n polynomial, A.\n\n\n\n [ 1 a1(1)* a2(2)* ..... aP(P) * ]\n [ 0 1 a2(1)* ..... aP(P-1)* ]\n U = [ .................................]\n [ 0 0 0 ..... 1 ]\n\n from which the i'th order prediction polynomial can be extracted\n using Ai=U(i+1:-1:1,i+1)'. The first row of U contains the\n conjugates of the reflection coefficients, and the K's may be\n extracted using, K=conj(U(1,2:end)).\n\n .. 
todo:: remove the conjugate when data is real data, clean up the code\n test and doc."} {"_id": "q_5847", "text": "LEVUP One step forward Levinson recursion\n\n :param acur:\n :param knxt:\n :return:\n * anxt the P+1'th order prediction polynomial based on the P'th order prediction polynomial, acur, and the\n P+1'th order reflection coefficient, Knxt.\n * enxt the P+1'th order prediction error, based on the P'th order prediction error, ecur.\n\n\n :References: P. Stoica and R. Moses, Introduction to Spectral Analysis, Prentice Hall, N.J., 1997, Chapter 3."} {"_id": "q_5848", "text": "r\"\"\"Simple and fast implementation of the covariance AR estimate\n\n This code is 10 times faster than :func:`arcovar_marple` and, more importantly,\n only 10 lines of code, compared to about 200 lines for :func:`arcovar_marple`\n\n\n :param array X: Array of complex data samples\n :param int order: Order of linear prediction model\n\n :return:\n * a - Array of complex forward linear prediction coefficients\n * e - error\n\n The covariance method fits a Pth order autoregressive (AR) model to the\n input signal, which is assumed to be the output of\n an AR system driven by white noise. This method minimizes the forward\n prediction error in the least-squares sense. The output vector\n contains the normalized estimate of the AR system parameters\n\n The white noise input variance estimate is also returned.\n\n The estimated power spectral density of y(n) is then:\n\n .. math:: \\frac{e}{\\left| A(e^{jw}) \\right|^2} = \\frac{e}{\\left| 1+\\sum_{k=1}^{P} a(k)e^{-jwk}\\right|^2}\n\n Because the method characterizes the input data using an all-pole model,\n the correct choice of the model order p is important.\n\n ..
plot::\n :width: 80%\n :include-source:\n\n from spectrum import arcovar, marple_data, arma2psd\n from pylab import plot, log10, linspace, axis\n\n ar_values, error = arcovar(marple_data, 15)\n psd = arma2psd(ar_values, sides='centerdc')\n plot(linspace(-0.5, 0.5, len(psd)), 10*log10(psd/max(psd)))\n axis([-0.5, 0.5, -60, 0])\n\n .. seealso:: :class:`pcovar`\n\n :validation: the AR parameters are the same as those returned by\n a completely different function :func:`arcovar_marple`.\n\n :References: [Mathworks]_"} {"_id": "q_5849", "text": "Linear Predictor Coefficients.\n\n :param x:\n :param int N: default is length(X) - 1\n\n :Details:\n\n Finds the coefficients :math:`A=(1, a(2), \\dots a(N+1))`, of an Nth order\n forward linear predictor that predicts the current value of the\n real-valued time series x based on past samples:\n\n .. math:: \\hat{x}(n) = -a(2)*x(n-1) - a(3)*x(n-2) - ... - a(N+1)*x(n-N)\n\n such that the sum of the squares of the errors\n\n .. math:: err(n) = X(n) - Xp(n)\n\n is minimized. This function uses the Levinson-Durbin recursion to\n solve the normal equations that arise from the least-squares formulation.\n\n .. seealso:: :func:`levinson`, :func:`aryule`, :func:`prony`, :func:`stmcb`\n\n .. todo:: matrix case, references\n\n :Example:\n\n ::\n\n from scipy.signal import lfilter\n noise = randn(50000,1); % Normalized white Gaussian noise\n x = filter([1], [1 1/2 1/3 1/4], noise)\n x = x[45904:50000]\n x.reshape(4096, 1)\n x = x[0]\n\n Compute the predictor coefficients, estimated signal, prediction error, and autocorrelation sequence of the prediction error:\n\n\n 1.00000 + 0.00000i 0.51711 - 0.00000i 0.33908 - 0.00000i 0.24410 - 0.00000i\n\n ::\n\n a = lpc(x, 3)\n est_x = lfilter([0 -a(2:end)],1,x); % Estimated signal\n e = x - est_x; % Prediction error\n [acs,lags] = xcorr(e,'coeff'); % ACS of prediction error"} {"_id": "q_5850", "text": "Return Pascal matrix\n\n :param int n: size of the matrix\n\n ..
doctest::\n\n >>> from spectrum import pascal\n >>> pascal(6)\n array([[ 1., 1., 1., 1., 1., 1.],\n [ 1., 2., 3., 4., 5., 6.],\n [ 1., 3., 6., 10., 15., 21.],\n [ 1., 4., 10., 20., 35., 56.],\n [ 1., 5., 15., 35., 70., 126.],\n [ 1., 6., 21., 56., 126., 252.]])\n\n .. todo:: use the symmetric property to improve computational time if needed"} {"_id": "q_5851", "text": "SVD decomposition using numpy.linalg.svd\n\n :param A: an M by N matrix\n\n :return:\n * U, an M by M matrix\n * S, the N singular values\n * V, an N by N matrix\n\n See :func:`numpy.linalg.svd` for detailed documentation.\n\n Should return the same as the CSVD routine in [Marple]_.\n\n ::\n\n U, S, V = numpy.linalg.svd(A)\n U, S, V = csvd(A)"} {"_id": "q_5852", "text": "Yield paths to standard modules."} {"_id": "q_5853", "text": "Yield standard module names."} {"_id": "q_5854", "text": "Yield line numbers of unused imports."} {"_id": "q_5855", "text": "Yield line number and module name of unused imports."} {"_id": "q_5856", "text": "Yield line number of star import usage."} {"_id": "q_5857", "text": "Yield line number, undefined name, and its possible origin module."} {"_id": "q_5858", "text": "Yield line numbers of duplicate keys."} {"_id": "q_5859", "text": "Return dict mapping the key to list of messages."} {"_id": "q_5860", "text": "Return messages from pyflakes."} {"_id": "q_5861", "text": "Return package name in import statement."} {"_id": "q_5862", "text": "Return True if the import spans multiple lines."} {"_id": "q_5863", "text": "Parse and filter ``from something import a, b, c``.\n\n Return the line without unused import modules, or `pass` if all of the\n modules in the import are unused."} {"_id": "q_5864", "text": "Return dictionary that maps line number to message."} {"_id": "q_5865", "text": "Return True if value is a literal or a name."} {"_id": "q_5866", "text": "Yield line numbers of unneeded \"pass\" statements."} {"_id": "q_5867", "text": "Yield code with useless \"pass\" lines removed."} {"_id":
"q_5868", "text": "Return leading whitespace."} {"_id": "q_5869", "text": "Return line ending."} {"_id": "q_5870", "text": "Return code with all filtering run on it."} {"_id": "q_5871", "text": "Return a set of strings."} {"_id": "q_5872", "text": "Write the data encoding the ObtainLease response payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."} {"_id": "q_5873", "text": "Write the data encoding the Cancel request payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."} {"_id": "q_5874", "text": "Returns a Name object, populated with the given value and type"} {"_id": "q_5875", "text": "Read the data encoding the Digest object and decode it into its\n constituent parts.\n\n Args:\n istream (Stream): A data stream containing encoded object data,\n supporting a read method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5876", "text": "Write the data encoding the Digest object to a stream.\n\n Args:\n ostream (Stream): A data stream in which to encode object data,\n supporting a write method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. 
Optional,\n defaults to KMIP 1.0."} {"_id": "q_5877", "text": "Construct a Digest object from provided digest values.\n\n Args:\n hashing_algorithm (HashingAlgorithm): An enumeration representing\n the hash algorithm used to compute the digest. Optional,\n defaults to HashingAlgorithm.SHA_256.\n digest_value (byte string): The bytes of the digest hash. Optional,\n defaults to the empty byte string.\n key_format_type (KeyFormatType): An enumeration representing the\n format of the key corresponding to the digest. Optional,\n defaults to KeyFormatType.RAW.\n\n Returns:\n Digest: The newly created Digest.\n\n Example:\n >>> x = Digest.create(HashingAlgorithm.MD5, b'\\x00',\n ... KeyFormatType.RAW)\n >>> x.hashing_algorithm\n HashingAlgorithm(value=HashingAlgorithm.MD5)\n >>> x.digest_value\n DigestValue(value=bytearray(b'\\x00'))\n >>> x.key_format_type\n KeyFormatType(value=KeyFormatType.RAW)"} {"_id": "q_5878", "text": "Read the data encoding the ApplicationSpecificInformation object and\n decode it into its constituent parts.\n\n Args:\n istream (Stream): A data stream containing encoded object data,\n supporting a read method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5879", "text": "Write the data encoding the ApplicationSpecificInformation object to a\n stream.\n\n Args:\n ostream (Stream): A data stream in which to encode object data,\n supporting a write method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. 
Optional,\n defaults to KMIP 1.0."} {"_id": "q_5880", "text": "Construct an ApplicationSpecificInformation object from provided data\n and namespace values.\n\n Args:\n application_namespace (str): The name of the application namespace.\n application_data (str): Application data related to the namespace.\n\n Returns:\n ApplicationSpecificInformation: The newly created set of\n application information.\n\n Example:\n >>> x = ApplicationSpecificInformation.create('namespace', 'data')\n >>> x.application_namespace.value\n 'namespace'\n >>> x.application_data.value\n 'data'"} {"_id": "q_5881", "text": "Read the data encoding the DerivationParameters struct and decode it\n into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5882", "text": "Write the data encoding the DerivationParameters struct to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5883", "text": "Read the data encoding the Get request payload and decode it into its\n constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 1.0."} {"_id": "q_5884", "text": "Write the data encoding the Get request payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5885", "text": "Read the data encoding the Get response payload and decode it\n into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the object type, unique identifier, or\n secret attributes are missing from the encoded payload."} {"_id": "q_5886", "text": "Write the data encoding the Get response payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the object type, unique identifier, or\n secret attributes are missing from the payload struct."} {"_id": "q_5887", "text": "Read the data encoding the SignatureVerify request payload and decode\n it into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is missing from the\n encoded payload."} {"_id": "q_5888", "text": "Write the data encoding the SignatureVerify request payload to a\n stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."} {"_id": "q_5889", "text": "Process a KMIP request message.\n\n This routine is the main driver of the KmipEngine. It breaks apart and\n processes the request header, handles any message errors that may\n result, and then passes the set of request batch items on for\n processing. This routine is thread-safe, allowing multiple client\n connections to use the same KmipEngine.\n\n Args:\n request (RequestMessage): The request message containing the batch\n items to be processed.\n credential (string): Identifying information about the client\n obtained from the client certificate. 
Optional, defaults to\n None.\n\n Returns:\n ResponseMessage: The response containing all of the results from\n the request batch items."} {"_id": "q_5890", "text": "Build a simple ResponseMessage with a single error result.\n\n Args:\n version (ProtocolVersion): The protocol version the response\n should be addressed with.\n reason (ResultReason): An enumeration classifying the type of\n error occurred.\n message (str): A string providing additional information about\n the error.\n\n Returns:\n ResponseMessage: The simple ResponseMessage containing a\n single error result."} {"_id": "q_5891", "text": "Given a kmip.pie object and a dictionary of attributes, attempt to set\n the attribute values on the object."} {"_id": "q_5892", "text": "Set the attribute value on the kmip.pie managed object."} {"_id": "q_5893", "text": "Write the data encoding the Decrypt request payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."} {"_id": "q_5894", "text": "Create a secret object of the specified type with the given value.\n\n Args:\n secret_type (ObjectType): An ObjectType enumeration specifying the\n type of secret to create.\n value (dict): A dictionary containing secret data. Optional,\n defaults to None.\n\n Returns:\n secret: The newly constructed secret object.\n\n Raises:\n TypeError: If the provided secret type is unrecognized.\n\n Example:\n >>> factory.create(ObjectType.SYMMETRIC_KEY)\n SymmetricKey(...)"} {"_id": "q_5895", "text": "Load configuration settings from the file pointed to by path.\n\n This will overwrite all current setting values.\n\n Args:\n path (string): The path to the configuration file containing\n the settings to load. 
Required.\n Raises:\n ConfigurationError: Raised if the path does not point to an\n existing file or if a setting value is invalid."} {"_id": "q_5896", "text": "Returns the integer value of the usage mask bitmask. This value is\n stored in the database.\n\n Args:\n value(list): list of enums in the\n usage mask\n dialect(string): SQL dialect"} {"_id": "q_5897", "text": "Returns a new list of enums.CryptographicUsageMask Enums. This converts\n the integer value into the list of enums.\n\n Args:\n value(int): The integer value stored in the database that is used\n to create the list of enums.CryptographicUsageMask Enums.\n dialect(string): SQL dialect"} {"_id": "q_5898", "text": "Read the encoding of the LongInteger from the input stream.\n\n Args:\n istream (stream): A buffer containing the encoded bytes of a\n LongInteger. Usually a BytearrayStream object. Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidPrimitiveLength: if the long integer encoding read in has\n an invalid encoded length."} {"_id": "q_5899", "text": "Write the encoding of the LongInteger to the output stream.\n\n Args:\n ostream (stream): A buffer to contain the encoded bytes of a\n LongInteger. Usually a BytearrayStream object. Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5900", "text": "Verify that the value of the LongInteger is valid.\n\n Raises:\n TypeError: if the value is not of type int or long\n ValueError: if the value cannot be represented by a signed 64-bit\n integer"} {"_id": "q_5901", "text": "Read the encoding of the BigInteger from the input stream.\n\n Args:\n istream (stream): A buffer containing the encoded bytes of the\n value of a BigInteger. 
Usually a BytearrayStream object.\n Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidPrimitiveLength: if the big integer encoding read in has\n an invalid encoded length."} {"_id": "q_5902", "text": "Write the encoding of the BigInteger to the output stream.\n\n Args:\n ostream (Stream): A buffer to contain the encoded bytes of a\n BigInteger object. Usually a BytearrayStream object.\n Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5903", "text": "Verify that the value of the BigInteger is valid.\n\n Raises:\n TypeError: if the value is not of type int or long"} {"_id": "q_5904", "text": "Verify that the value of the Enumeration is valid.\n\n Raises:\n TypeError: if the enum is not of type Enum\n ValueError: if the value is not of the expected Enum subtype or if\n the value cannot be represented by an unsigned 32-bit integer"} {"_id": "q_5905", "text": "Write the encoding of the Boolean object to the output stream.\n\n Args:\n ostream (Stream): A buffer to contain the encoded bytes of a\n Boolean object. Usually a BytearrayStream object. Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5906", "text": "Verify that the value of the Boolean object is valid.\n\n Raises:\n TypeError: if the value is not of type bool."} {"_id": "q_5907", "text": "Read the encoding of the Interval from the input stream.\n\n Args:\n istream (stream): A buffer containing the encoded bytes of the\n value of an Interval. Usually a BytearrayStream object.\n Required.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidPrimitiveLength: if the Interval encoding read in has an\n invalid encoded length.\n InvalidPaddingBytes: if the Interval encoding read in does not use\n zeroes for its padding bytes."} {"_id": "q_5908", "text": "Verify that the value of the Interval is valid.\n\n Raises:\n TypeError: if the value is not of type int or long\n ValueError: if the value cannot be represented by an unsigned\n 32-bit integer"} {"_id": "q_5909", "text": "Set the key wrapping data attributes using a dictionary."} {"_id": "q_5910", "text": "Verify that the contents of the PublicKey object are valid.\n\n Raises:\n TypeError: if the types of any PublicKey attributes are invalid."} {"_id": "q_5911", "text": "A utility function that converts an attribute name string into the\n corresponding attribute tag.\n\n For example: 'State' -> enums.Tags.STATE\n\n Args:\n value (string): The string name of the attribute.\n\n Returns:\n enum: The Tags enumeration value that corresponds to the attribute\n name string.\n\n Raises:\n ValueError: if the attribute name string is not a string or if it is\n an unrecognized attribute name"} {"_id": "q_5912", "text": "A utility function that converts an attribute tag into the corresponding\n attribute name string.\n\n For example: enums.Tags.STATE -> 'State'\n\n Args:\n value (enum): The Tags enumeration value of the attribute.\n\n Returns:\n string: The attribute name string that corresponds to the attribute\n tag.\n\n Raises:\n ValueError: if the attribute tag is not a Tags enumeration or if it\n is an unrecognized attribute tag"} {"_id": "q_5913", "text": "A utility function that computes a bit mask from a collection of\n enumeration values.\n\n Args:\n enumerations (list): A list of enumeration values to be combined in a\n composite bit mask.\n\n Returns:\n int: The composite bit mask."} {"_id": "q_5914", "text": "A utility function that checks if the provided value is a composite bit\n mask of enumeration 
values in the specified enumeration class.\n\n Args:\n enumeration (class): One of the mask enumeration classes found in this\n file. These include:\n * Cryptographic Usage Mask\n * Protection Storage Mask\n * Storage Status Mask\n potential_mask (int): A potential bit mask composed of enumeration\n values belonging to the enumeration class.\n\n Returns:\n True: if the potential mask is a valid bit mask of the mask enumeration\n False: otherwise"} {"_id": "q_5915", "text": "Write the data encoding the CreateKeyPair request payload to a buffer.\n\n Args:\n output_buffer (stream): A data buffer in which to encode object\n data, supporting a write method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5916", "text": "Write the data encoding the CreateKeyPair response payload to a buffer.\n\n Args:\n output_buffer (stream): A data buffer in which to encode object\n data, supporting a write method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidField: Raised if the private key unique identifier or the\n public key unique identifier is not defined."} {"_id": "q_5917", "text": "Read the data encoding the GetAttributeList request payload and decode\n it into its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 1.0."} {"_id": "q_5918", "text": "Write the data encoding the GetAttributeList request payload to a\n stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5919", "text": "Write the data encoding the GetAttributeList response payload to a\n stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidField: Raised if the unique identifier or attribute name\n are not defined."} {"_id": "q_5920", "text": "Scan the policy directory for policy data."} {"_id": "q_5921", "text": "Start monitoring operation policy files."} {"_id": "q_5922", "text": "Extract an X.509 certificate from a socket connection."} {"_id": "q_5923", "text": "Given an X.509 certificate, extract and return all common names."} {"_id": "q_5924", "text": "Given an X.509 certificate, extract and return the client identity."} {"_id": "q_5925", "text": "Read the data encoding the Create request payload and decode it into\n its constituent parts.\n\n Args:\n input_buffer (stream): A data buffer containing encoded object\n data, supporting a read method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the object type or template\n attribute is missing from the encoded payload."} {"_id": "q_5926", "text": "Write the data encoding the Create request payload to a buffer.\n\n Args:\n output_buffer (stream): A data buffer in which to encode object\n data, supporting a write method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidField: Raised if the object type attribute or template\n attribute is not defined."} {"_id": "q_5927", "text": "Read the data encoding the Create response payload and decode it into\n its constituent parts.\n\n Args:\n input_buffer (stream): A data buffer containing encoded object\n data, supporting a read method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the object type or unique\n identifier is missing from the encoded payload."} {"_id": "q_5928", "text": "Convert a Pie object into a core secret object and vice versa.\n\n Args:\n obj (various): A Pie or core secret object to convert into the\n opposite object space. Required.\n\n Raises:\n TypeError: if the object type is unrecognized or unsupported."} {"_id": "q_5929", "text": "Read the data encoding the Encrypt response payload and decode it\n into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the unique_identifier or data attributes\n are missing from the encoded payload."} {"_id": "q_5930", "text": "Read the data encoding the DeriveKey request payload and decode it\n into its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is missing from the\n encoded payload."} {"_id": "q_5931", "text": "Write the data encoding the DeriveKey request payload to a stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."} {"_id": "q_5932", "text": "Check if the attribute is supported by the current KMIP version.\n\n Args:\n attribute (string): The name of the attribute\n (e.g., 'Cryptographic Algorithm'). Required.\n Returns:\n bool: True if the attribute is supported by the current KMIP\n version. False otherwise."} {"_id": "q_5933", "text": "Check if the attribute is deprecated by the current KMIP version.\n\n Args:\n attribute (string): The name of the attribute\n (e.g., 'Unique Identifier'). Required."} {"_id": "q_5934", "text": "Check if the attribute is supported by the given object type.\n\n Args:\n attribute (string): The name of the attribute (e.g., 'Name').\n Required.\n object_type (ObjectType): An ObjectType enumeration\n (e.g., ObjectType.SYMMETRIC_KEY). 
Required.\n Returns:\n bool: True if the attribute is applicable to the object type.\n False otherwise."} {"_id": "q_5935", "text": "Check if the attribute is allowed to have multiple instances.\n\n Args:\n attribute (string): The name of the attribute\n (e.g., 'State'). Required."} {"_id": "q_5936", "text": "Returns a value that can be used as a parameter in client or\n server. If a direct_value is given, that value will be returned\n instead of the value from the config file. If the appropriate config\n file option is not found, the default_value is returned.\n\n :param direct_value: represents a direct value that should be used.\n supersedes values from config files\n :param config_section: which section of the config file to use\n :param config_option_name: name of config option value\n :param default_value: default value to be used if other options not\n found\n :returns: a value that can be used as a parameter"} {"_id": "q_5937", "text": "Write the data encoding the Check response payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."} {"_id": "q_5938", "text": "Write the AttributeReference structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n AttributeReference structure data, supporting a write method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. 
Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidField: Raised if the vendor identification or attribute name\n fields are not defined.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the AttributeReference structure."} {"_id": "q_5939", "text": "Write the Attributes structure encoding to the data stream.\n\n Args:\n output_stream (stream): A data stream in which to encode\n Attributes structure data, supporting a write method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n AttributeNotSupported: Raised if an unsupported attribute is\n found in the attribute list while encoding.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the Attributes object."} {"_id": "q_5940", "text": "Write the data encoding the Nonce struct to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the nonce ID or nonce value is not defined."} {"_id": "q_5941", "text": "Write the data encoding the UsernamePasswordCredential struct to a\n stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. 
Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the username is not defined."} {"_id": "q_5942", "text": "Read the data encoding the DeviceCredential struct and decode it into\n its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5943", "text": "Write the data encoding the DeviceCredential struct to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5944", "text": "Read the data encoding the Credential struct and decode it into its\n constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if either the credential type or value are\n missing from the encoding."} {"_id": "q_5945", "text": "Read the data encoding the MACSignatureKeyInformation struct and\n decode it into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 1.0."} {"_id": "q_5946", "text": "Write the data encoding the KeyWrappingData struct to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5947", "text": "Read the data encoding the KeyWrappingSpecification struct and decode\n it into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5948", "text": "Write the data encoding the KeyWrappingSpecification struct to a\n stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5949", "text": "Read the data encoding the ExtensionInformation object and decode it\n into its constituent parts.\n\n Args:\n istream (Stream): A data stream containing encoded object data,\n supporting a read method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 1.0."} {"_id": "q_5950", "text": "Write the data encoding the ExtensionInformation object to a stream.\n\n Args:\n ostream (Stream): A data stream in which to encode object data,\n supporting a write method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5951", "text": "Construct an ExtensionInformation object from provided extension\n values.\n\n Args:\n extension_name (str): The name of the extension. Optional,\n defaults to None.\n extension_tag (int): The tag number of the extension. Optional,\n defaults to None.\n extension_type (int): The type index of the extension. Optional,\n defaults to None.\n\n Returns:\n ExtensionInformation: The newly created set of extension\n information.\n\n Example:\n >>> x = ExtensionInformation.create('extension', 1, 1)\n >>> x.extension_name.value\n ExtensionName(value='extension')\n >>> x.extension_tag.value\n ExtensionTag(value=1)\n >>> x.extension_type.value\n ExtensionType(value=1)"} {"_id": "q_5952", "text": "Read the data encoding the RevocationReason object and decode it\n into its constituent parts.\n\n Args:\n istream (Stream): A data stream containing encoded object data,\n supporting a read method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5953", "text": "Write the data encoding the RevocationReason object to a stream.\n\n Args:\n ostream (Stream): A data stream in which to encode object data,\n supporting a write method; usually a BytearrayStream object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. 
Optional,\n defaults to KMIP 1.0."} {"_id": "q_5954", "text": "Validate the RevocationReason object."} {"_id": "q_5955", "text": "Read the data encoding the ObjectDefaults structure and decode it into\n its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the object type or attributes are\n missing from the encoding.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the ObjectDefaults structure."} {"_id": "q_5956", "text": "Read the data encoding the DefaultsInformation structure and decode it\n into its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the object defaults are missing\n from the encoding.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the DefaultsInformation structure."} {"_id": "q_5957", "text": "Write the DefaultsInformation structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n DefaultsInformation structure data, supporting a write method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. 
Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidField: Raised if the object defaults field is not defined.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the DefaultsInformation structure."} {"_id": "q_5958", "text": "Read the data encoding the RNGParameters structure and decode it\n into its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the RNG algorithm is missing from\n the encoding.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the RNGParameters structure."} {"_id": "q_5959", "text": "Write the RNGParameters structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n RNGParameters structure data, supporting a write method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidField: Raised if the RNG algorithm field is not defined.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the RNGParameters structure."} {"_id": "q_5960", "text": "Read the data encoding the ProfileInformation structure and decode it\n into its constituent parts.\n\n Args:\n input_buffer (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the profile name is missing from\n the encoding.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the ProfileInformation structure."} {"_id": "q_5961", "text": "Write the ProfileInformation structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n ProfileInformation structure data, supporting a write method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidField: Raised if the profile name field is not defined.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the ProfileInformation structure."} {"_id": "q_5962", "text": "Write the ValidationInformation structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n ValidationInformation structure data, supporting a write\n method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 2.0.\n\n Raises:\n InvalidField: Raised if the validation authority type, validation\n version major, validation type, and/or validation level fields\n are not defined.\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the ValidationInformation structure."} {"_id": "q_5963", "text": "Write the CapabilityInformation structure encoding to the data stream.\n\n Args:\n output_buffer (stream): A data stream in which to encode\n CapabilityInformation structure data, supporting a write\n method.\n kmip_version (enum): A KMIPVersion enumeration defining the KMIP\n version with which the object will be encoded. 
Optional,\n defaults to KMIP 2.0.\n\n Raises:\n VersionNotSupported: Raised when a KMIP version is provided that\n does not support the CapabilityInformation structure."} {"_id": "q_5964", "text": "Stop the server.\n\n Halt server client connections and clean up any existing connection\n threads.\n\n Raises:\n NetworkingError: Raised if a failure occurs while shutting down\n or closing the TLS server socket."} {"_id": "q_5965", "text": "Serve client connections.\n\n Begin listening for client connections, spinning off new KmipSessions\n as connections are handled. Set up signal handling to shut down the\n connection service as needed."} {"_id": "q_5966", "text": "Read the data encoding the Locate request payload and decode it into\n its constituent parts.\n\n Args:\n input_buffer (stream): A data buffer containing encoded object\n data, supporting a read method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the attributes structure is missing\n from the encoded payload for KMIP 2.0+ encodings."} {"_id": "q_5967", "text": "Write the data encoding the Locate request payload to a buffer.\n\n Args:\n output_buffer (stream): A data buffer in which to encode object\n data, supporting a write method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5968", "text": "Write the data encoding the Locate response payload to a buffer.\n\n Args:\n output_buffer (stream): A data buffer in which to encode object\n data, supporting a write method.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. 
Optional,\n defaults to KMIP 1.0."} {"_id": "q_5969", "text": "Create an asymmetric key pair.\n\n Args:\n algorithm(CryptographicAlgorithm): An enumeration specifying the\n algorithm for which the created keys will be compliant.\n length(int): The length of the keys to be created. This value must\n be compliant with the constraints of the provided algorithm.\n\n Returns:\n dict: A dictionary containing the public key data, with at least\n the following key/value fields:\n * value - the bytes of the key\n * format - a KeyFormatType enumeration for the bytes format\n dict: A dictionary containing the private key data, identical in\n structure to the one above.\n\n Raises:\n InvalidField: Raised when the algorithm is unsupported or the\n length is incompatible with the algorithm.\n CryptographicFailure: Raised when the key generation process\n fails.\n\n Example:\n >>> engine = CryptographyEngine()\n >>> key = engine.create_asymmetric_key(\n ... CryptographicAlgorithm.RSA, 2048)"} {"_id": "q_5970", "text": "Encrypt data using symmetric or asymmetric encryption.\n\n Args:\n encryption_algorithm (CryptographicAlgorithm): An enumeration\n specifying the encryption algorithm to use for encryption.\n encryption_key (bytes): The bytes of the encryption key to use for\n encryption.\n plain_text (bytes): The bytes to be encrypted.\n cipher_mode (BlockCipherMode): An enumeration specifying the\n block cipher mode to use with the encryption algorithm.\n Required in the general case. Optional if the encryption\n algorithm is RC4 (aka ARC4). If optional, defaults to None.\n padding_method (PaddingMethod): An enumeration specifying the\n padding method to use on the data before encryption. Required\n if the cipher mode is for block ciphers (e.g., CBC, ECB).\n Optional otherwise, defaults to None.\n iv_nonce (bytes): The IV/nonce value to use to initialize the mode\n of the encryption algorithm. Optional, defaults to None. 
If\n required and not provided, it will be autogenerated and\n returned with the cipher text.\n hashing_algorithm (HashingAlgorithm): An enumeration specifying\n the hashing algorithm to use with the encryption algorithm,\n if needed. Required for OAEP-based asymmetric encryption.\n Optional, defaults to None.\n\n Returns:\n dict: A dictionary containing the encrypted data, with at least\n the following key/value fields:\n * cipher_text - the bytes of the encrypted data\n * iv_nonce - the bytes of the IV/counter/nonce used if it\n was needed by the encryption scheme and if it was\n automatically generated for the encryption\n\n Raises:\n InvalidField: Raised when the algorithm is unsupported or the\n length is incompatible with the algorithm.\n CryptographicFailure: Raised when the key generation process\n fails.\n\n Example:\n >>> engine = CryptographyEngine()\n >>> result = engine.encrypt(\n ... encryption_algorithm=CryptographicAlgorithm.AES,\n ... encryption_key=(\n ... b'\\xF3\\x96\\xE7\\x1C\\xCF\\xCD\\xEC\\x1F'\n ... b'\\xFC\\xE2\\x8E\\xA6\\xF8\\x74\\x28\\xB0'\n ... ),\n ... plain_text=(\n ... b'\\x00\\x01\\x02\\x03\\x04\\x05\\x06\\x07'\n ... b'\\x08\\x09\\x0A\\x0B\\x0C\\x0D\\x0E\\x0F'\n ... ),\n ... cipher_mode=BlockCipherMode.CBC,\n ... padding_method=PaddingMethod.ANSI_X923,\n ... )\n >>> result.get('cipher_text')\n b'\\x18[\\xb9y\\x1bL\\xd1\\x8f\\x9a\\xa0e\\x02b\\xa3=c'\n >>> result.iv_counter_nonce\n b'8qA\\x05\\xc4\\x86\\x03\\xd9=\\xef\\xdf\\xb8ke\\x9a\\xa2'"} {"_id": "q_5971", "text": "Encrypt data using symmetric encryption.\n\n Args:\n encryption_algorithm (CryptographicAlgorithm): An enumeration\n specifying the symmetric encryption algorithm to use for\n encryption.\n encryption_key (bytes): The bytes of the symmetric key to use for\n encryption.\n plain_text (bytes): The bytes to be encrypted.\n cipher_mode (BlockCipherMode): An enumeration specifying the\n block cipher mode to use with the encryption algorithm.\n Required in the general case. 
Optional if the encryption\n algorithm is RC4 (aka ARC4). If optional, defaults to None.\n padding_method (PaddingMethod): An enumeration specifying the\n padding method to use on the data before encryption. Required\n if the cipher mode is for block ciphers (e.g., CBC, ECB).\n Optional otherwise, defaults to None.\n iv_nonce (bytes): The IV/nonce value to use to initialize the mode\n of the encryption algorithm. Optional, defaults to None. If\n required and not provided, it will be autogenerated and\n returned with the cipher text.\n\n Returns:\n dict: A dictionary containing the encrypted data, with at least\n the following key/value fields:\n * cipher_text - the bytes of the encrypted data\n * iv_nonce - the bytes of the IV/counter/nonce used if it\n was needed by the encryption scheme and if it was\n automatically generated for the encryption\n\n Raises:\n InvalidField: Raised when the algorithm is unsupported or the\n encryption key is incompatible with the algorithm.\n CryptographicFailure: Raised when the key generation process\n fails."} {"_id": "q_5972", "text": "Encrypt data using asymmetric encryption.\n\n Args:\n encryption_algorithm (CryptographicAlgorithm): An enumeration\n specifying the asymmetric encryption algorithm to use for\n encryption. Required.\n encryption_key (bytes): The bytes of the public key to use for\n encryption. Required.\n plain_text (bytes): The bytes to be encrypted. Required.\n padding_method (PaddingMethod): An enumeration specifying the\n padding method to use with the asymmetric encryption\n algorithm. Required.\n hashing_algorithm (HashingAlgorithm): An enumeration specifying\n the hashing algorithm to use with the encryption padding\n method. Required, if the padding method is OAEP. 
Optional\n otherwise, defaults to None.\n\n Returns:\n dict: A dictionary containing the encrypted data, with at least\n the following key/value field:\n * cipher_text - the bytes of the encrypted data\n\n Raises:\n InvalidField: Raised when the algorithm is unsupported or the\n length is incompatible with the algorithm.\n CryptographicFailure: Raised when the key generation process\n fails."} {"_id": "q_5973", "text": "Create an RSA key pair.\n\n Args:\n length(int): The length of the keys to be created. This value must\n be compliant with the constraints of the provided algorithm.\n public_exponent(int): The value of the public exponent needed to\n generate the keys. Usually a small Fermat prime number.\n Optional, defaults to 65537.\n\n Returns:\n dict: A dictionary containing the public key data, with the\n following key/value fields:\n * value - the bytes of the key\n * format - a KeyFormatType enumeration for the bytes format\n * public_exponent - the public exponent integer\n dict: A dictionary containing the private key data, identical in\n structure to the one above.\n\n Raises:\n CryptographicFailure: Raised when the key generation process\n fails."} {"_id": "q_5974", "text": "Derive key data using a variety of key derivation functions.\n\n Args:\n derivation_method (DerivationMethod): An enumeration specifying\n the key derivation method to use. Required.\n derivation_length (int): An integer specifying the size of the\n derived key data in bytes. Required.\n derivation_data (bytes): The non-cryptographic bytes to be used\n in the key derivation process (e.g., the data to be encrypted,\n hashed, HMACed). Required in the general case. Optional if the\n derivation method is Hash and the key material is provided.\n Optional, defaults to None.\n key_material (bytes): The bytes of the key material to use for\n key derivation. Required in the general case. 
Optional if\n the derivation_method is HASH and derivation_data is provided.\n Optional, defaults to None.\n hash_algorithm (HashingAlgorithm): An enumeration specifying the\n hashing algorithm to use with the key derivation method.\n Required in the general case, optional if the derivation\n method specifies encryption. Optional, defaults to None.\n salt (bytes): Bytes representing a randomly generated salt.\n Required if the derivation method is PBKDF2. Optional,\n defaults to None.\n iteration_count (int): An integer representing the number of\n iterations to use when deriving key material. Required if\n the derivation method is PBKDF2. Optional, defaults to None.\n encryption_algorithm (CryptographicAlgorithm): An enumeration\n specifying the symmetric encryption algorithm to use for\n encryption-based key derivation. Required if the derivation\n method specifies encryption. Optional, defaults to None.\n cipher_mode (BlockCipherMode): An enumeration specifying the\n block cipher mode to use with the encryption algorithm.\n Required in the general case if the derivation method\n specifies encryption and the encryption algorithm is\n specified. Optional if the encryption algorithm is RC4 (aka\n ARC4). Optional, defaults to None.\n padding_method (PaddingMethod): An enumeration specifying the\n padding method to use on the data before encryption. Required\n in the general case if the derivation method specifies\n encryption and the encryption algorithm is specified. Required\n if the cipher mode is for block ciphers (e.g., CBC, ECB).\n Optional otherwise, defaults to None.\n iv_nonce (bytes): The IV/nonce value to use to initialize the mode\n of the encryption algorithm. Required in the general case if\n the derivation method specifies encryption and the encryption\n algorithm is specified. Optional, defaults to None. 
If\n required and not provided, it will be autogenerated.\n\n Returns:\n bytes: the bytes of the derived data\n\n Raises:\n InvalidField: Raised when cryptographic data and/or settings are\n unsupported or incompatible with the derivation method.\n\n Example:\n >>> engine = CryptographyEngine()\n >>> result = engine.derive_key(\n ... derivation_method=enums.DerivationMethod.HASH,\n ... derivation_length=16,\n ... derivation_data=b'abc',\n ... hash_algorithm=enums.HashingAlgorithm.MD5\n ... )\n >>> result\n b'\\x90\\x01P\\x98<\\xd2O\\xb0\\xd6\\x96?}(\\xe1\\x7fr'"} {"_id": "q_5975", "text": "Instantiates an RSA key from bytes.\n\n Args:\n bytes (byte string): Bytes of RSA private key.\n Returns:\n private_key\n (cryptography.hazmat.primitives.asymmetric.rsa.RSAPrivateKey):\n RSA private key created from key bytes."} {"_id": "q_5976", "text": "Verify a message signature.\n\n Args:\n signing_key (bytes): The bytes of the signing key to use for\n signature verification. Required.\n message (bytes): The bytes of the message that corresponds with\n the signature. Required.\n signature (bytes): The bytes of the signature to be verified.\n Required.\n padding_method (PaddingMethod): An enumeration specifying the\n padding method to use during signature verification. Required.\n signing_algorithm (CryptographicAlgorithm): An enumeration\n specifying the cryptographic algorithm to use for signature\n verification. Only RSA is supported. Optional, must match the\n algorithm specified by the digital signature algorithm if both\n are provided. Defaults to None.\n hashing_algorithm (HashingAlgorithm): An enumeration specifying\n the hashing algorithm to use with the cryptographic algorithm,\n if needed. Optional, must match the algorithm specified by the\n digital signature algorithm if both are provided. 
Defaults to\n None.\n digital_signature_algorithm (DigitalSignatureAlgorithm): An\n enumeration specifying both the cryptographic and hashing\n algorithms to use for signature verification. Optional, must\n match the cryptographic and hashing algorithms if both are\n provided. Defaults to None.\n\n Returns:\n boolean: the result of signature verification, True for valid\n signatures, False for invalid signatures\n\n Raises:\n InvalidField: Raised when various settings or values are invalid.\n CryptographicFailure: Raised when the signing key bytes cannot be\n loaded, or when the signature verification process fails\n unexpectedly."} {"_id": "q_5977", "text": "Read the data encoding the Sign response payload and decode it.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the unique_identifier or signature attributes\n are missing from the encoded payload."} {"_id": "q_5978", "text": "Write the data encoding the Sign response to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n\n Raises:\n ValueError: Raised if the unique_identifier or signature\n attributes are not defined."} {"_id": "q_5979", "text": "Read the data encoding the GetUsageAllocation request payload and\n decode it into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is missing from the\n encoded payload."} {"_id": "q_5980", "text": "Convert a ProtocolVersion struct to its KMIPVersion enumeration equivalent.\n\n Args:\n value (ProtocolVersion): A ProtocolVersion struct to be converted into\n a KMIPVersion enumeration.\n\n Returns:\n KMIPVersion: The enumeration equivalent of the struct. If the struct\n cannot be converted to a valid enumeration, None is returned."} {"_id": "q_5981", "text": "Read the data encoding the ProtocolVersion struct and decode it into\n its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if either the major or minor protocol versions\n are missing from the encoding."} {"_id": "q_5982", "text": "Write the data encoding the Authentication struct to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. Optional,\n defaults to KMIP 1.0."} {"_id": "q_5983", "text": "Read the data encoding the Poll request payload and decode it into\n its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. 
Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is missing from the\n encoded payload."} {"_id": "q_5984", "text": "Query the configured SLUGS service with the provided credentials.\n\n Args:\n connection_certificate (cryptography.x509.Certificate): An X.509\n certificate object obtained from the connection being\n authenticated. Required for SLUGS authentication.\n connection_info (tuple): A tuple of information pertaining to the\n connection being authenticated, including the source IP address\n and a timestamp (e.g., ('127.0.0.1', 1519759267.467451)).\n Optional, defaults to None. Ignored for SLUGS authentication.\n request_credentials (list): A list of KMIP Credential structures\n containing credential information to use for authentication.\n Optional, defaults to None. Ignored for SLUGS authentication."} {"_id": "q_5985", "text": "Read the data encoding the Archive response payload and decode it\n into its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is missing from the\n encoded payload."} {"_id": "q_5986", "text": "Write the data encoding the Archive response payload to a stream.\n\n Args:\n output_stream (stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. 
Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the data attribute is not defined."} {"_id": "q_5987", "text": "The main thread routine executed by invoking thread.start.\n\n This method manages the new client connection, running a message\n handling loop. Once this method completes, the thread is finished."} {"_id": "q_5988", "text": "Read the data encoding the Rekey response payload and decode it into\n its constituent parts.\n\n Args:\n input_stream (stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n ValueError: Raised if the unique identifier attribute is missing\n from the encoded payload."} {"_id": "q_5989", "text": "Check if a profile is supported by the client.\n\n Args:\n conformance_clause (ConformanceClause):\n authentication_suite (AuthenticationSuite):\n\n Returns:\n bool: True if the profile is supported, False otherwise.\n\n Example:\n >>> client.is_profile_supported(\n ... ConformanceClause.DISCOVER_VERSIONS,\n ... AuthenticationSuite.BASIC)\n True"} {"_id": "q_5990", "text": "Derive a new key or secret data from an existing managed object.\n\n Args:\n object_type (ObjectType): An ObjectType enumeration specifying\n what type of object to create. 
Required.\n unique_identifiers (list): A list of strings specifying the unique\n IDs of the existing managed objects to use for key derivation.\n Required.\n derivation_method (DerivationMethod): A DerivationMethod\n enumeration specifying what key derivation method to use.\n Required.\n derivation_parameters (DerivationParameters): A\n DerivationParameters struct containing the settings and\n options to use for key derivation.\n template_attribute (TemplateAttribute): A TemplateAttribute struct\n containing the attributes to set on the newly derived object.\n credential (Credential): A Credential struct containing a set of\n authorization parameters for the operation. Optional, defaults\n to None.\n\n Returns:\n dict: The results of the derivation operation, containing the\n following key/value pairs:\n\n Key | Value\n ---------------------|-----------------------------------------\n 'unique_identifier' | (string) The unique ID of the newly\n | derived object.\n 'template_attribute' | (TemplateAttribute) A struct containing\n | any attributes set on the newly derived\n | object.\n 'result_status' | (ResultStatus) An enumeration indicating\n | the status of the operation result.\n 'result_reason' | (ResultReason) An enumeration providing\n | context for the result status.\n 'result_message' | (string) A message providing additional\n | context for the operation result."} {"_id": "q_5991", "text": "Send a GetAttributes request to the server.\n\n Args:\n uuid (string): The ID of the managed object with which the\n retrieved attributes should be associated. 
Optional, defaults\n to None.\n attribute_names (list): A list of AttributeName values indicating\n what object attributes the client wants from the server.\n Optional, defaults to None.\n\n Returns:\n result (GetAttributesResult): A structure containing the results\n of the operation."} {"_id": "q_5992", "text": "Send a GetAttributeList request to the server.\n\n Args:\n uid (string): The ID of the managed object with which the retrieved\n attribute names should be associated.\n\n Returns:\n result (GetAttributeListResult): A structure containing the results\n of the operation."} {"_id": "q_5993", "text": "Send a Query request to the server.\n\n Args:\n batch (boolean): A flag indicating if the operation should be sent\n with a batch of additional operations. Defaults to False.\n query_functions (list): A list of QueryFunction enumerations\n indicating what information the client wants from the server.\n Optional, defaults to None.\n credential (Credential): A Credential object containing\n authentication information for the server. Optional, defaults\n to None."} {"_id": "q_5994", "text": "Open the client connection.\n\n Raises:\n ClientConnectionFailure: if the client connection is already open\n Exception: if an error occurs while trying to open the connection"} {"_id": "q_5995", "text": "Close the client connection.\n\n Raises:\n Exception: if an error occurs while trying to close the connection"} {"_id": "q_5996", "text": "Create a symmetric key on a KMIP appliance.\n\n Args:\n algorithm (CryptographicAlgorithm): An enumeration defining the\n algorithm to use to generate the symmetric key.\n length (int): The length in bits for the symmetric key.\n operation_policy_name (string): The name of the operation policy\n to use for the new symmetric key. Optional, defaults to None\n name (string): The name to give the key. Optional, defaults to None\n cryptographic_usage_mask (list): list of enumerations of crypto\n usage mask passing to the symmetric key. 
Optional, defaults to\n None\n\n Returns:\n string: The uid of the newly created symmetric key.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input arguments are invalid"} {"_id": "q_5997", "text": "Create an asymmetric key pair on a KMIP appliance.\n\n Args:\n algorithm (CryptographicAlgorithm): An enumeration defining the\n algorithm to use to generate the key pair.\n length (int): The length in bits for the key pair.\n operation_policy_name (string): The name of the operation policy\n to use for the new key pair. Optional, defaults to None.\n public_name (string): The name to give the public key. Optional,\n defaults to None.\n public_usage_mask (list): A list of CryptographicUsageMask\n enumerations indicating how the public key should be used.\n Optional, defaults to None.\n private_name (string): The name to give the private key. Optional,\n defaults to None.\n private_usage_mask (list): A list of CryptographicUsageMask\n enumerations indicating how the private key should be used.\n Optional, defaults to None.\n\n Returns:\n string: The uid of the newly created public key.\n string: The uid of the newly created private key.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input arguments are invalid"} {"_id": "q_5998", "text": "Register a managed object with a KMIP appliance.\n\n Args:\n managed_object (ManagedObject): A managed object to register. 
An\n instantiatable subclass of ManagedObject from the Pie API.\n\n Returns:\n string: The uid of the newly registered managed object.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input argument is invalid"} {"_id": "q_5999", "text": "Rekey an existing key.\n\n Args:\n uid (string): The unique ID of the symmetric key to rekey.\n Optional, defaults to None.\n offset (int): The time delta, in seconds, between the new key's\n initialization date and activation date. Optional, defaults\n to None.\n **kwargs (various): A placeholder for object attributes that\n should be set on the newly rekeyed key. Currently\n supported attributes include:\n activation_date (int)\n process_start_date (int)\n protect_stop_date (int)\n deactivation_date (int)\n\n Returns:\n string: The unique ID of the newly rekeyed key.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input arguments are invalid"} {"_id": "q_6000", "text": "Derive a new key or secret data from existing managed objects.\n\n Args:\n object_type (ObjectType): An ObjectType enumeration specifying\n what type of object to derive. Only SymmetricKeys and\n SecretData can be specified. Required.\n unique_identifiers (list): A list of strings specifying the\n unique IDs of the existing managed objects to use for\n derivation. Multiple objects can be specified to fit the\n requirements of the given derivation method. Required.\n derivation_method (DerivationMethod): A DerivationMethod\n enumeration specifying how key derivation should be done.\n Required.\n derivation_parameters (dict): A dictionary containing various\n settings for the key derivation process. See Note below.\n Required.\n **kwargs (various): A placeholder for object attributes that\n should be set on the newly derived object. 
Currently\n supported attributes include:\n cryptographic_algorithm (enums.CryptographicAlgorithm)\n cryptographic_length (int)\n\n Returns:\n string: The unique ID of the newly derived object.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input arguments are invalid\n\n Notes:\n The derivation_parameters argument is a dictionary that can\n contain the following key/value pairs:\n\n Key | Value\n ---------------------------|---------------------------------------\n 'cryptographic_parameters' | A dictionary containing additional\n | cryptographic settings. See the\n | decrypt method for more information.\n 'initialization_vector' | Bytes to be used to initialize the key\n | derivation function, if needed.\n 'derivation_data' | Bytes to be used as the basis for the\n | key derivation process (e.g., the\n | bytes to be encrypted, hashed, etc).\n 'salt' | Bytes to be used as a salt value for\n | the key derivation function, if\n | needed. Usually used with PBKDF2.\n 'iteration_count' | An integer defining how many\n | iterations should be used with the key\n | derivation function, if needed.\n | Usually used with PBKDF2."} {"_id": "q_6001", "text": "Check the constraints for a managed object.\n\n Args:\n uid (string): The unique ID of the managed object to check.\n Optional, defaults to None.\n usage_limits_count (int): The number of items that can be secured\n with the specified managed object. Optional, defaults to None.\n cryptographic_usage_mask (list): A list of CryptographicUsageMask\n enumerations specifying the operations possible with the\n specified managed object. Optional, defaults to None.\n lease_time (int): The number of seconds that can be leased for the\n specified managed object. 
Optional, defaults to None."} {"_id": "q_6002", "text": "Get a managed object from a KMIP appliance.\n\n Args:\n uid (string): The unique ID of the managed object to retrieve.\n key_wrapping_specification (dict): A dictionary containing various\n settings to be used when wrapping the key during retrieval.\n See Note below. Optional, defaults to None.\n\n Returns:\n ManagedObject: The retrieved managed object.\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input argument is invalid\n\n Notes:\n The key_wrapping_specification argument is a dictionary that can\n contain the following key/value pairs:\n\n Key | Value\n --------------------------------|---------------------------------\n 'wrapping_method' | A WrappingMethod enumeration\n | that specifies how the object\n | should be wrapped.\n 'encryption_key_information' | A dictionary containing the ID\n | of the wrapping key and\n | associated cryptographic\n | parameters.\n 'mac_signature_key_information' | A dictionary containing the ID\n | of the wrapping key and\n | associated cryptographic\n | parameters.\n 'attribute_names' | A list of strings representing\n | the names of attributes that\n | should be included with the\n | wrapped object.\n 'encoding_option' | An EncodingOption enumeration\n | that specifies the encoding of\n | the object before it is wrapped."} {"_id": "q_6003", "text": "Get the attributes associated with a managed object.\n\n If the uid is not specified, the appliance will use the ID placeholder\n by default.\n\n If the attribute_names list is not specified, the appliance will\n return all viable attributes for the managed object.\n\n Args:\n uid (string): The unique ID of the managed object with which the\n retrieved attributes should be associated. 
Optional, defaults\n to None.\n attribute_names (list): A list of string attribute names\n indicating which attributes should be retrieved. Optional,\n defaults to None."} {"_id": "q_6004", "text": "Revoke a managed object stored by a KMIP appliance.\n\n Args:\n revocation_reason (RevocationReasonCode): An enumeration indicating\n the revocation reason.\n uid (string): The unique ID of the managed object to revoke.\n Optional, defaults to None.\n revocation_message (string): A message regarding the revocation.\n Optional, defaults to None.\n compromise_occurrence_date (int): An integer, the number of seconds\n since the epoch, which will be converted to the Datetime when\n the managed object was first believed to be compromised.\n Optional, defaults to None.\n\n Returns:\n None\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input argument is invalid"} {"_id": "q_6005", "text": "Get the message authentication code for data.\n\n Args:\n data (string): The data to be MACed.\n uid (string): The unique ID of the managed object that is the key\n to use for the MAC operation.\n algorithm (CryptographicAlgorithm): An enumeration defining the\n algorithm to use to generate the MAC.\n\n Returns:\n string: The unique ID of the managed object that is the key\n to use for the MAC operation.\n string: The data MACed\n\n Raises:\n ClientConnectionNotOpen: if the client connection is unusable\n KmipOperationFailure: if the operation result is a failure\n TypeError: if the input arguments are invalid"} {"_id": "q_6006", "text": "Build a CryptographicParameters struct from a dictionary.\n\n Args:\n value (dict): A dictionary containing the key/value pairs for a\n CryptographicParameters struct.\n\n Returns:\n None: if value is None\n CryptographicParameters: a CryptographicParameters struct\n\n Raises:\n TypeError: if the input argument is invalid"} {"_id": "q_6007", "text": 
"Build an EncryptionKeyInformation struct from a dictionary.\n\n Args:\n value (dict): A dictionary containing the key/value pairs for an\n EncryptionKeyInformation struct.\n\n Returns:\n EncryptionKeyInformation: an EncryptionKeyInformation struct\n\n Raises:\n TypeError: if the input argument is invalid"} {"_id": "q_6008", "text": "Build an MACSignatureKeyInformation struct from a dictionary.\n\n Args:\n value (dict): A dictionary containing the key/value pairs for a\n MACSignatureKeyInformation struct.\n\n Returns:\n MACSignatureKeyInformation: a MACSignatureKeyInformation struct\n\n Raises:\n TypeError: if the input argument is invalid"} {"_id": "q_6009", "text": "Build a KeyWrappingSpecification struct from a dictionary.\n\n Args:\n value (dict): A dictionary containing the key/value pairs for a\n KeyWrappingSpecification struct.\n\n Returns:\n KeyWrappingSpecification: a KeyWrappingSpecification struct\n\n Raises:\n TypeError: if the input argument is invalid"} {"_id": "q_6010", "text": "Build a name attribute, returned in a list for ease\n of use in the caller"} {"_id": "q_6011", "text": "Read the data encoding the QueryRequestPayload object and decode it\n into its constituent parts.\n\n Args:\n input_buffer (Stream): A data stream containing encoded object\n data, supporting a read method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be decoded. Optional,\n defaults to KMIP 1.0.\n\n Raises:\n InvalidKmipEncoding: Raised if the query functions are missing\n from the encoded payload."} {"_id": "q_6012", "text": "Write the data encoding the QueryResponsePayload object to a stream.\n\n Args:\n output_buffer (Stream): A data stream in which to encode object\n data, supporting a write method; usually a BytearrayStream\n object.\n kmip_version (KMIPVersion): An enumeration defining the KMIP\n version with which the object will be encoded. 
Optional,\n defaults to KMIP 1.0."} {"_id": "q_6013", "text": "Find a group of entry points with unique names.\n\n Returns a dictionary of names to :class:`EntryPoint` objects."} {"_id": "q_6014", "text": "Find all entry points in a group.\n\n Returns a list of :class:`EntryPoint` objects."} {"_id": "q_6015", "text": "Load the object to which this entry point refers."} {"_id": "q_6016", "text": "Parse an entry point from the syntax in entry_points.txt\n\n :param str epstr: The entry point string (not including 'name =')\n :param str name: The name of this entry point\n :param Distribution distro: The distribution in which the entry point was found\n :rtype: EntryPoint\n :raises BadEntryPoint: if *epstr* can't be parsed as an entry point."} {"_id": "q_6017", "text": "Try to return a path to the static files compatible all\n the way back to Django 1.2. If anyone has a cleaner or better\n way to do this let me know!"} {"_id": "q_6018", "text": "Run livereload server"} {"_id": "q_6019", "text": "Generate controller, including the controller file, template & css & js directories."} {"_id": "q_6020", "text": "Generate action."} {"_id": "q_6021", "text": "Generate model."} {"_id": "q_6022", "text": "Generate macro."} {"_id": "q_6023", "text": "mkdir -p path"} {"_id": "q_6024", "text": "Friendly time gap"} {"_id": "q_6025", "text": "Check url schema."} {"_id": "q_6026", "text": "JSON decorator."} {"_id": "q_6027", "text": "Absolute url for endpoint."} {"_id": "q_6028", "text": "Get current user."} {"_id": "q_6029", "text": "Register routes."} {"_id": "q_6030", "text": "Register HTTP error pages."} {"_id": "q_6031", "text": "Register hooks."} {"_id": "q_6032", "text": "Returns csv data as a pandas DataFrame object"} {"_id": "q_6033", "text": "Serialize the specified DataFrame and replace the existing dataset.\n\n Parameters\n ----------\n dataframe : pandas.DataFrame\n Data to serialize.\n data_type_id : str, optional\n Format to serialize to.\n If None, the existing 
format is preserved.\n Supported formats are:\n 'PlainText'\n 'GenericCSV'\n 'GenericTSV'\n 'GenericCSVNoHeader'\n 'GenericTSVNoHeader'\n See the azureml.DataTypeIds class for constants.\n name : str, optional\n Name for the dataset.\n If None, the name of the existing dataset is used.\n description : str, optional\n Description for the dataset.\n If None, the description of the existing dataset is used."} {"_id": "q_6034", "text": "Upload already serialized raw data and replace the existing dataset.\n\n Parameters\n ----------\n raw_data: bytes\n Dataset contents to upload.\n data_type_id : str\n Serialization format of the raw data.\n If None, the format of the existing dataset is used.\n Supported formats are:\n 'PlainText'\n 'GenericCSV'\n 'GenericTSV'\n 'GenericCSVNoHeader'\n 'GenericTSVNoHeader'\n 'ARFF'\n See the azureml.DataTypeIds class for constants.\n name : str, optional\n Name for the dataset.\n If None, the name of the existing dataset is used.\n description : str, optional\n Description for the dataset.\n If None, the description of the existing dataset is used."} {"_id": "q_6035", "text": "Upload already serialized raw data as a new dataset.\n\n Parameters\n ----------\n raw_data: bytes\n Dataset contents to upload.\n data_type_id : str\n Serialization format of the raw data.\n Supported formats are:\n 'PlainText'\n 'GenericCSV'\n 'GenericTSV'\n 'GenericCSVNoHeader'\n 'GenericTSVNoHeader'\n 'ARFF'\n See the azureml.DataTypeIds class for constants.\n name : str\n Name for the new dataset.\n description : str\n Description for the new dataset.\n\n Returns\n -------\n SourceDataset\n Dataset that was just created.\n Use open(), read_as_binary(), read_as_text() or to_dataframe() on\n the dataset object to get its contents as a stream, bytes, str or\n pandas DataFrame."} {"_id": "q_6036", "text": "Open and return a stream for the dataset contents."} {"_id": "q_6037", "text": "Read and return the dataset contents as text."} {"_id": "q_6038", "text": "Read and return 
the dataset contents as a pandas DataFrame."} {"_id": "q_6039", "text": "Get an intermediate dataset.\n\n Parameters\n ----------\n node_id : str\n Module node id from the experiment graph.\n port_name : str\n Output port of the module.\n data_type_id : str\n Serialization format of the raw data.\n See the azureml.DataTypeIds class for constants.\n\n Returns\n -------\n IntermediateDataset\n Dataset object.\n Use open(), read_as_binary(), read_as_text() or to_dataframe() on\n the dataset object to get its contents as a stream, bytes, str or\n pandas DataFrame."} {"_id": "q_6040", "text": "Runs HTTP GET request to retrieve the list of datasets."} {"_id": "q_6041", "text": "Runs HTTP GET request to retrieve a single dataset."} {"_id": "q_6042", "text": "publishes a callable function or decorates a function to be published. \n\nReturns a callable, iterable object. Calling the object will invoke the published service.\nIterating the object will give the API URL, API key, and API help url.\n \nTo define a function which will be published to Azure you can simply decorate it with\nthe @publish decorator. This will publish the service, and then future calls to the\nfunction will run against the operationalized version of the service in the cloud.\n\n>>> @publish(workspace_id, workspace_token)\n>>> def func(a, b): \n>>> return a + b\n\nAfter publishing you can then invoke the function using:\nfunc.service(1, 2)\n\nOr continue to invoke the function locally:\nfunc(1, 2)\n\nYou can also just call publish directly to publish a function:\n\n>>> def func(a, b): return a + b\n>>> \n>>> res = publish(func, workspace_id, workspace_token)\n>>> \n>>> url, api_key, help_url = res\n>>> res(2, 3)\n5\n>>> url, api_key, help_url = res.url, res.api_key, res.help_url\n\nThe returned result will be the published service.\n\nYou can specify a list of files which should be published along with the function.\nThe resulting files will be stored in a subdirectory called 'Script Bundle'. 
The\nlist of files can be one of:\n (('file1.txt', None), ) # file is read from disk\n (('file1.txt', b'contents'), ) # file contents are provided\n ('file1.txt', 'file2.txt') # files are read from disk, written with same filename\n ((('file1.txt', 'destname.txt'), None), ) # file is read from disk, written with different destination name\n\nThe various formats for each filename can be freely mixed and matched."} {"_id": "q_6043", "text": "Specifies the types used for the arguments of a published service.\n\n@types(a=int, b = str)\ndef f(a, b):\n pass"} {"_id": "q_6044", "text": "Specifies the return type for a published service.\n\n@returns(int)\ndef f(...):\n pass"} {"_id": "q_6045", "text": "attaches a file to the payload to be uploaded.\n\nIf contents is omitted the file is read from disk.\nIf name is a tuple it specifies the on-disk filename and the destination filename."} {"_id": "q_6046", "text": "walks the byte code to find the variables which are actually globals"} {"_id": "q_6047", "text": "Return a list of all implemented keyrings that can be constructed without\n parameters."} {"_id": "q_6048", "text": "The keyring name, suitable for display.\n\n The name is derived from module and class name."} {"_id": "q_6049", "text": "Gets the username and password for the service.\n Returns a Credential instance.\n\n The *username* argument is optional and may be omitted by\n the caller or ignored by the backend. Callers must use the\n returned username."} {"_id": "q_6050", "text": "If self.preferred_collection contains a D-Bus path,\n the collection at that address is returned. Otherwise,\n the default collection is returned."} {"_id": "q_6051", "text": "Discover all keyrings for chaining."} {"_id": "q_6052", "text": "Set current keyring backend."} {"_id": "q_6053", "text": "Load the keyring class indicated by name.\n\n These popular names are tested to ensure their presence.\n\n >>> popular_names = [\n ... 'keyring.backends.Windows.WinVaultKeyring',\n ... 
'keyring.backends.OS_X.Keyring',\n ... 'keyring.backends.kwallet.DBusKeyring',\n ... 'keyring.backends.SecretService.Keyring',\n ... ]\n >>> list(map(_load_keyring_class, popular_names))\n [...]\n\n These legacy names are retained for compatibility.\n\n >>> legacy_names = [\n ... ]\n >>> list(map(_load_keyring_class, legacy_names))\n [...]"} {"_id": "q_6054", "text": "Load a keyring using the config file in the config root."} {"_id": "q_6055", "text": "Use freedesktop.org Base Dir Specification to determine storage\n location."} {"_id": "q_6056", "text": "Use freedesktop.org Base Dir Specification to determine config\n location."} {"_id": "q_6057", "text": "Returns a callable that outputs the data. Defaults to print."} {"_id": "q_6058", "text": "Runs the subcommand configured in args on the netgear session"} {"_id": "q_6059", "text": "Scan for devices and print results."} {"_id": "q_6060", "text": "Try to autodetect the base URL of the router SOAP service.\n\n Returns None if it can't be found."} {"_id": "q_6061", "text": "Convert value to to_type, returns default if conversion fails."} {"_id": "q_6062", "text": "Login to the router.\n\n Will be called automatically by other actions."} {"_id": "q_6063", "text": "Return list of devices connected to the router, with details.\n\n This call is slower and probably heavier on the router load.\n\n Returns None if an error occurred."} {"_id": "q_6064", "text": "Make an API request to the router."} {"_id": "q_6065", "text": "Return RGBA values of color c\n\n c should be either an X11 color or a brewer color set and index\n e.g. 
\"navajowhite\", \"greens3/2\""} {"_id": "q_6066", "text": "Draw this shape with the given cairo context"} {"_id": "q_6067", "text": "Find extrema of a function of real domain defined by evaluating\n a cubic bernstein polynomial of given bernstein coefficients."} {"_id": "q_6068", "text": "Build choices list at runtime using 'sitetree_tree' tag"} {"_id": "q_6069", "text": "Compatibility function to get rid of optparse in management commands after Django 1.10.\n\n :param tuple command_options: tuple with `CommandOption` objects."} {"_id": "q_6070", "text": "Registers a hook callable to process tree items right before they are passed to templates.\n\n Callable should be able to:\n\n a) handle ``tree_items`` and ``tree_sender`` key params.\n ``tree_items`` will contain a list of extended TreeItem objects ready to pass to template.\n ``tree_sender`` will contain navigation type identifier\n (e.g.: `menu`, `sitetree`, `breadcrumbs`, `menu.children`, `sitetree.children`)\n\n b) return a list of extended TreeItem objects to pass to template.\n\n\n Example::\n\n # Put the following code somewhere where it'd be triggered as expected. E.g. 
in app view.py.\n\n # First import the register function.\n from sitetree.sitetreeapp import register_items_hook\n\n # The following function will be used as items processor.\n def my_items_processor(tree_items, tree_sender):\n # Suppose we want to process only menu child items.\n if tree_sender == 'menu.children':\n # Lets add 'Hooked: ' to resolved titles of every item.\n for item in tree_items:\n item.title_resolved = 'Hooked: %s' % item.title_resolved\n # Return items list mutated or not.\n return tree_items\n\n # And we register items processor.\n register_items_hook(my_items_processor)\n\n :param func:"} {"_id": "q_6071", "text": "Returns a structure describing a dynamic sitetree.utils\n The structure can be built from various sources,\n\n :param str|iterable src: If a string is passed to `src`, it'll be treated as the name of an app,\n from where one want to import sitetrees definitions. `src` can be an iterable\n of tree definitions (see `sitetree.toolbox.tree()` and `item()` functions).\n\n :param str|unicode target_tree_alias: Static tree alias to attach items from dynamic trees to.\n\n :param str|unicode parent_tree_item_alias: Tree item alias from a static tree to attach items from dynamic trees to.\n\n :param list include_trees: Sitetree aliases to filter `src`.\n\n :rtype: dict"} {"_id": "q_6072", "text": "Initializes local cache from Django cache."} {"_id": "q_6073", "text": "Updates cache entry parameter with new data.\n\n :param str|unicode entry_name:\n :param key:\n :param value:"} {"_id": "q_6074", "text": "Initializes sitetree to handle new request.\n\n :param Context|None context:"} {"_id": "q_6075", "text": "Resolves internationalized tree alias.\n Verifies whether a separate sitetree is available for currently active language.\n If so, returns i18n alias. 
If not, returns the initial alias.\n\n :param str|unicode alias:\n :rtype: str|unicode"} {"_id": "q_6076", "text": "Returns boolean whether current application is Admin contrib.\n\n :rtype: bool"} {"_id": "q_6077", "text": "Calculates depth of the item in the tree.\n\n :param str|unicode tree_alias:\n :param int item_id:\n :param int depth:\n :rtype: int"} {"_id": "q_6078", "text": "Builds and returns menu structure for 'sitetree_menu' tag.\n\n :param str|unicode tree_alias:\n :param str|unicode tree_branches:\n :param Context context:\n :rtype: list|str"} {"_id": "q_6079", "text": "Checks whether the current user has access to a certain item.\n\n :param TreeItemBase item:\n :param Context context:\n :rtype: bool"} {"_id": "q_6080", "text": "Builds and returns breadcrumb trail structure for 'sitetree_breadcrumbs' tag.\n\n :param str|unicode tree_alias:\n :param Context context:\n :rtype: list|str"} {"_id": "q_6081", "text": "Builds and returns tree structure for 'sitetree_tree' tag.\n\n :param str|unicode tree_alias:\n :param Context context:\n :rtype: list|str"} {"_id": "q_6082", "text": "Builds and returns site tree item children structure for 'sitetree_children' tag.\n\n :param TreeItemBase parent_item:\n :param str|unicode navigation_type: menu, sitetree\n :param str|unicode use_template:\n :param Context context:\n :rtype: list"} {"_id": "q_6083", "text": "Returns item's children.\n\n :param str|unicode tree_alias:\n :param TreeItemBase|None item:\n :rtype: list"} {"_id": "q_6084", "text": "Updates 'has_children' attribute for tree items inplace.\n\n :param str|unicode tree_alias:\n :param list tree_items:\n :param str|unicode navigation_type: sitetree, breadcrumbs, menu"} {"_id": "q_6085", "text": "Filters sitetree item's children if hidden and by navigation type.\n\n NB: We do not apply any filters to sitetree in admin app.\n\n :param list items:\n :param str|unicode navigation_type: sitetree, breadcrumbs, menu\n :rtype: list"} {"_id": "q_6086", "text": 
"Climbs up the site tree to resolve root item for chosen one.\n\n :param str|unicode tree_alias:\n :param TreeItemBase base_item:\n :rtype: TreeItemBase"} {"_id": "q_6087", "text": "Resolves name as a variable in a given context.\n\n If no context is specified, the page context is considered as context.\n\n :param str|unicode varname:\n :param Context context:\n :return:"} {"_id": "q_6088", "text": "Parses sitetree_breadcrumbs tag parameters.\n\n Two notation types are possible:\n 1. Two arguments:\n {% sitetree_breadcrumbs from \"mytree\" %}\n Used to render breadcrumb path for \"mytree\" site tree.\n\n 2. Four arguments:\n {% sitetree_breadcrumbs from \"mytree\" template \"sitetree/mycrumb.html\" %}\n Used to render breadcrumb path for \"mytree\" site tree using specific\n template \"sitetree/mycrumb.html\""} {"_id": "q_6089", "text": "Parses sitetree_menu tag parameters.\n\n {% sitetree_menu from \"mytree\" include \"trunk,1,level3\" %}\n Used to render trunk, branch with id 1 and branch aliased 'level3'\n elements from \"mytree\" site tree as a menu.\n\n These are reserved aliases:\n * 'trunk' - items without parents\n * 'this-children' - items under item resolved as current for the current page\n * 'this-siblings' - items under parent of item resolved as current for\n the current page (current item included)\n * 'this-ancestor-children' - items under grandparent item (closest to root)\n for the item resolved as current for the current page\n\n {% sitetree_menu from \"mytree\" include \"trunk,1,level3\" template \"sitetree/mymenu.html\" %}"} {"_id": "q_6090", "text": "Render helper is used by template node functions\n to render given template with given tree items in context."} {"_id": "q_6091", "text": "Node constructor to be used in tags."} {"_id": "q_6092", "text": "Fixes Admin contrib redirects compatibility problems\n introduced in Django 1.4 by url handling changes."} {"_id": "q_6093", "text": "Redirects to the appropriate items' 'continue' page on item add.\n\n 
As we administer tree items within tree itself, we\n should make some changes to redirection process."} {"_id": "q_6094", "text": "Redirects to the appropriate items' 'add' page on item change.\n\n As we administer tree items within tree itself, we\n should make some changes to redirection process."} {"_id": "q_6095", "text": "Returns modified form for TreeItem model.\n 'Parent' field choices are built by sitetree itself."} {"_id": "q_6096", "text": "Fetches Tree for current or given TreeItem."} {"_id": "q_6097", "text": "Moves item up or down by swapping 'sort_order' field values of neighboring items."} {"_id": "q_6098", "text": "Manages not only TreeAdmin URLs but also TreeItemAdmin URLs."} {"_id": "q_6099", "text": "Dumps sitetrees with items using django-smuggler.\n\n :param request:\n :return:"} {"_id": "q_6100", "text": "Dynamically creates and returns a sitetree.\n\n :param str|unicode alias:\n :param str|unicode title:\n :param iterable items: dynamic sitetree items objects created by `item` function.\n :param kwargs: Additional arguments to pass to tree item initializer.\n\n :rtype: TreeBase"} {"_id": "q_6101", "text": "Dynamically creates and returns a sitetree item object.\n\n :param str|unicode title:\n\n :param str|unicode url:\n\n :param list, set children: a list of children for tree item. 
Children should also be created by `item` function.\n\n :param bool url_as_pattern: consider URL as a name of a named URL\n\n :param str|unicode hint: hints are usually shown to users\n\n :param str|unicode alias: item name to address it from templates\n\n :param str|unicode description: additional information on item (usually is not shown to users)\n\n :param bool in_menu: show this item in menus\n\n :param bool in_breadcrumbs: show this item in breadcrumbs\n\n :param bool in_sitetree: show this item in sitetrees\n\n :param bool access_loggedin: show item to logged in users only\n\n :param bool access_guest: show item to guest users only\n\n :param list|str|unicode|int, Permission access_by_perms: restrict access to users with these permissions.\n\n This can be set to one or a list of permission names, IDs or Permission instances.\n\n Permission names are more portable and should be in a form `.`, e.g.:\n my_app.allow_save\n\n\n :param bool perms_mode_all: permissions set interpretation rule:\n True - user should have all the permissions;\n False - user should have any of chosen permissions.\n\n :rtype: TreeItemBase"} {"_id": "q_6102", "text": "Imports sitetree module from a given app.\n\n :param str|unicode app: Application name\n :return: module|None"} {"_id": "q_6103", "text": "Returns a certain sitetree model as defined in the project settings.\n\n :param str|unicode settings_entry_name:\n :rtype: TreeItemBase|TreeBase"} {"_id": "q_6104", "text": "Wrapper function for IPv4 and IPv6 converters.\n\n :arg ip: IPv4 or IPv6 address"} {"_id": "q_6105", "text": "Populate location dict for converted IP.\n Returns dict with numerous location properties.\n\n :arg ipnum: Result of ip2long conversion"} {"_id": "q_6106", "text": "Hostname lookup method, supports both IPv4 and IPv6."} {"_id": "q_6107", "text": "Returns the database ID for specified hostname.\n The ID might be useful as array index. 
0 is unknown.\n\n :arg hostname: Hostname to get ID from."} {"_id": "q_6108", "text": "Returns the database ID for specified address.\n The ID might be useful as array index. 0 is unknown.\n\n :arg addr: IPv4 or IPv6 address (eg. 203.0.113.30)"} {"_id": "q_6109", "text": "Returns full country name for specified hostname.\n\n :arg hostname: Hostname (e.g. example.com)"} {"_id": "q_6110", "text": "Returns Organization, ISP, or ASNum name for given IP address.\n\n :arg addr: IP address (e.g. 203.0.113.30)"} {"_id": "q_6111", "text": "Returns Organization, ISP, or ASNum name for given hostname.\n\n :arg hostname: Hostname (e.g. example.com)"} {"_id": "q_6112", "text": "Returns time zone from country and region code.\n\n :arg country_code: Country code\n :arg region_code: Region code"} {"_id": "q_6113", "text": "If the given filename should be compressed, returns the\n compressed filename.\n\n A file can be compressed if:\n\n - It is a whitelisted extension\n - The compressed file does not exist\n - The compressed file exists but is older than the file itself\n\n Otherwise, it returns False."} {"_id": "q_6114", "text": "Copy or symlink the file."} {"_id": "q_6115", "text": "Transform path to url, converting backslashes to slashes if needed."} {"_id": "q_6116", "text": "Reads markdown file, converts output and fetches title and meta-data for\n further processing."} {"_id": "q_6117", "text": "Loads the exif data of all images in an album from cache"} {"_id": "q_6118", "text": "Restores the exif data cache from the cache file"} {"_id": "q_6119", "text": "Stores the exif data of all images in the gallery"} {"_id": "q_6120", "text": "Removes all filtered Media and subdirs from an Album"} {"_id": "q_6121", "text": "Run sigal to process a directory.\n\n If provided, 'source', 'destination' and 'theme' will override the\n corresponding values from the settings file."} {"_id": "q_6122", "text": "Run a simple web server."} {"_id": "q_6123", "text": "Write metadata keys to .md 
file.\n\n TARGET can be a media file or an album directory. KEYS are key/value pairs.\n\n Ex, to set the title of test.jpg to \"My test image\":\n\n sigal set_meta test.jpg title \"My test image\""} {"_id": "q_6124", "text": "Create output directories for thumbnails and original images."} {"_id": "q_6125", "text": "URL of the album, relative to its parent."} {"_id": "q_6126", "text": "Path to the thumbnail of the album."} {"_id": "q_6127", "text": "Make a ZIP archive with all media files and return its path.\n\n If the ``zip_gallery`` setting is set, it contains the location of a zip\n archive with all original images of the corresponding directory."} {"_id": "q_6128", "text": "Create the image gallery"} {"_id": "q_6129", "text": "Process a list of images in a directory."} {"_id": "q_6130", "text": "Returns an image with reduced opacity."} {"_id": "q_6131", "text": "Returns the dimensions of the video."} {"_id": "q_6132", "text": "Video processor.\n\n :param source: path to a video\n :param outname: path to the generated video\n :param settings: settings dict\n :param options: array of options passed to ffmpeg"} {"_id": "q_6133", "text": "Generate the HTML page and save it."} {"_id": "q_6134", "text": "Return the path to the thumb.\n\n examples:\n >>> default_settings = create_settings()\n >>> get_thumb(default_settings, \"bar/foo.jpg\")\n \"bar/thumbnails/foo.jpg\"\n >>> get_thumb(default_settings, \"bar/foo.png\")\n \"bar/thumbnails/foo.png\"\n\n for videos, it returns a jpg file:\n >>> get_thumb(default_settings, \"bar/foo.webm\")\n \"bar/thumbnails/foo.jpg\""} {"_id": "q_6135", "text": "Generate the media page and save it"} {"_id": "q_6136", "text": "Create a configuration from a mapping.\n\n This allows either a mapping to be directly passed or as\n keyword arguments, for example,\n\n .. 
code-block:: python\n\n config = {'keep_alive_timeout': 10}\n Config.from_mapping(config)\n Config.from_mapping(keep_alive_timeout=10)\n\n Arguments:\n mapping: Optionally a mapping object.\n kwargs: Optionally a collection of keyword arguments to\n form a mapping."} {"_id": "q_6137", "text": "Create a configuration from a Python file.\n\n .. code-block:: python\n\n Config.from_pyfile('hypercorn_config.py')\n\n Arguments:\n filename: The filename which gives the path to the file."} {"_id": "q_6138", "text": "Load the configuration values from a TOML formatted file.\n\n This allows configuration to be loaded as so\n\n .. code-block:: python\n\n Config.from_toml('config.toml')\n\n Arguments:\n filename: The filename which gives the path to the file."} {"_id": "q_6139", "text": "Create a configuration from a Python object.\n\n This can be used to reference modules or objects within\n modules for example,\n\n .. code-block:: python\n\n Config.from_object('module')\n Config.from_object('module.instance')\n from module import instance\n Config.from_object(instance)\n\n are valid.\n\n Arguments:\n instance: Either a str referencing a python object or the\n object itself."} {"_id": "q_6140", "text": "Creates a set of zipkin attributes for a span.\n\n :param sample_rate: Float between 0.0 and 100.0 to determine sampling rate\n :type sample_rate: float\n :param trace_id: Optional 16-character hex string representing a trace_id.\n If this is None, a random trace_id will be generated.\n :type trace_id: str\n :param span_id: Optional 16-character hex string representing a span_id.\n If this is None, a random span_id will be generated.\n :type span_id: str\n :param use_128bit_trace_id: If true, generate 128-bit trace_ids\n :type use_128bit_trace_id: boolean"} {"_id": "q_6141", "text": "Exit the span context. Zipkin attrs are pushed onto the\n threadlocal stack regardless of sampling, so they always need to be\n popped off. 
The actual logging of spans depends on sampling and that\n the logging was correctly set up."} {"_id": "q_6142", "text": "Adds a 'sa' binary annotation to the current span.\n\n 'sa' binary annotations are useful for situations where you need to log\n where a request is going but the destination doesn't support zipkin.\n\n Note that the span must have 'cs'/'cr' annotations.\n\n :param port: The port number of the destination\n :type port: int\n :param service_name: The name of the destination service\n :type service_name: str\n :param host: Host address of the destination\n :type host: str"} {"_id": "q_6143", "text": "Overrides the current span name.\n\n This is useful if you don't know the span name yet when you create the\n zipkin_span object. i.e. pyramid_zipkin doesn't know which route the\n request matched until the function wrapped by the context manager\n completes.\n\n :param name: New span name\n :type name: str"} {"_id": "q_6144", "text": "Creates a new Endpoint object.\n\n :param port: TCP/UDP port. Defaults to 0.\n :type port: int\n :param service_name: service name as a str. Defaults to 'unknown'.\n :type service_name: str\n :param host: ipv4 or ipv6 address of the host. 
Defaults to the\n current host ip.\n :type host: str\n :param use_defaults: whether to use defaults.\n :type use_defaults: bool\n :returns: zipkin Endpoint object"} {"_id": "q_6145", "text": "Creates a copy of a given endpoint with a new service name.\n\n :param endpoint: existing Endpoint object\n :type endpoint: Endpoint\n :param new_service_name: new service name\n :type new_service_name: str\n :returns: zipkin new Endpoint object"} {"_id": "q_6146", "text": "Builds and returns a V1 Span.\n\n :return: newly generated _V1Span\n :rtype: _V1Span"} {"_id": "q_6147", "text": "Encode list of protobuf Spans to binary.\n\n :param pb_spans: list of protobuf Spans.\n :type pb_spans: list of zipkin_pb2.Span\n :return: encoded list.\n :rtype: bytes"} {"_id": "q_6148", "text": "Converts a py_zipkin Span into a protobuf Span.\n\n :param span: py_zipkin Span to convert.\n :type span: py_zipkin.encoding.Span\n :return: protobuf's Span\n :rtype: zipkin_pb2.Span"} {"_id": "q_6149", "text": "Encodes hexadecimal ids to big-endian binary.\n\n :param hex_id: hexadecimal id to encode.\n :type hex_id: str\n :return: binary representation.\n :rtype: bytes"} {"_id": "q_6150", "text": "Converts py_zipkin's Kind to Protobuf's Kind.\n\n :param kind: py_zipkin's Kind.\n :type kind: py_zipkin.Kind\n :return: corresponding protobuf's kind value.\n :rtype: zipkin_pb2.Span.Kind"} {"_id": "q_6151", "text": "Converts py_zipkin's Endpoint to Protobuf's Endpoint.\n\n :param endpoint: py_zipkins' endpoint to convert.\n :type endpoint: py_zipkin.encoding.Endpoint\n :return: corresponding protobuf's endpoint.\n :rtype: zipkin_pb2.Endpoint"} {"_id": "q_6152", "text": "Create a zipkin annotation object\n\n :param timestamp: timestamp of when the annotation occurred in microseconds\n :param value: name of the annotation, such as 'sr'\n :param host: zipkin endpoint object\n\n :returns: zipkin annotation object"} {"_id": "q_6153", "text": "Create a zipkin binary annotation object\n\n :param key: name of the 
annotation, such as 'http.uri'\n :param value: value of the annotation, such as a URI\n :param annotation_type: type of annotation, such as AnnotationType.I32\n :param host: zipkin endpoint object\n\n :returns: zipkin binary annotation object"} {"_id": "q_6154", "text": "Create a zipkin Endpoint object.\n\n An Endpoint object holds information about the network context of a span.\n\n :param port: int value of the port. Defaults to 0\n :param service_name: service name as a str. Defaults to 'unknown'\n :param ipv4: ipv4 host address\n :param ipv6: ipv6 host address\n :returns: thrift Endpoint object"} {"_id": "q_6155", "text": "Creates a copy of a given endpoint with a new service name.\n This should be very fast, on the order of several microseconds.\n\n :param endpoint: existing zipkin_core.Endpoint object\n :param service_name: str of new service name\n :returns: zipkin Endpoint object"} {"_id": "q_6156", "text": "Reformat annotations dict to return list of corresponding zipkin_core objects.\n\n :param annotations: dict containing key as annotation name,\n value being timestamp in seconds(float).\n :type host: :class:`zipkin_core.Endpoint`\n :returns: a list of annotation zipkin_core objects\n :rtype: list"} {"_id": "q_6157", "text": "Takes a bunch of span attributes and returns a thriftpy2 representation\n of the span. 
Timestamps passed in are in seconds, they're converted to\n microseconds before thrift encoding."} {"_id": "q_6158", "text": "Returns a TBinaryProtocol encoded Thrift span.\n\n :param thrift_span: thrift object to encode.\n :returns: thrift object in TBinaryProtocol format bytes."} {"_id": "q_6159", "text": "Returns a TBinaryProtocol encoded list of Thrift objects.\n\n :param binary_thrift_obj_list: list of TBinaryProtocol objects to encode.\n :returns: binary object representing the encoded list."} {"_id": "q_6160", "text": "Returns the span type and encoding for the message provided.\n\n The logic in this function is a Python port of\n https://github.com/openzipkin/zipkin/blob/master/zipkin/src/main/java/zipkin/internal/DetectingSpanDecoder.java\n\n :param message: span to perform operations on.\n :type message: byte array\n :returns: span encoding.\n :rtype: Encoding"} {"_id": "q_6161", "text": "Converts encoded spans to a different encoding.\n\n :param spans: encoded input spans.\n :type spans: byte array\n :param output_encoding: desired output encoding.\n :type output_encoding: Encoding\n :param input_encoding: optional input encoding. 
If this is not specified, it'll\n try to understand the encoding automatically by inspecting the input spans.\n :type input_encoding: Encoding\n :returns: encoded spans.\n :rtype: byte array"} {"_id": "q_6162", "text": "Encodes the current span to thrift."} {"_id": "q_6163", "text": "Encodes a single span to protobuf."} {"_id": "q_6164", "text": "Decodes an encoded list of spans.\n\n :param spans: encoded list of spans\n :type spans: bytes\n :return: list of spans\n :rtype: list of Span"} {"_id": "q_6165", "text": "Accepts a thrift decoded endpoint and converts it to an Endpoint.\n\n :param thrift_endpoint: thrift encoded endpoint\n :type thrift_endpoint: thrift endpoint\n :returns: decoded endpoint\n :rtype: Endpoint"} {"_id": "q_6166", "text": "Accepts a thrift annotation and converts it to a v1 annotation.\n\n :param thrift_annotations: list of thrift annotations.\n :type thrift_annotations: list of zipkin_core.Span.Annotation\n :returns: (annotations, local_endpoint, kind)"} {"_id": "q_6167", "text": "Accepts a thrift decoded binary annotation and converts it\n to a v1 binary annotation."} {"_id": "q_6168", "text": "Decodes a thrift span.\n\n :param thrift_span: thrift span\n :type thrift_span: thrift Span object\n :returns: span builder representing this span\n :rtype: Span"} {"_id": "q_6169", "text": "Converts the provided unsigned long value to a hex string.\n\n :param value: the value to convert\n :type value: unsigned long\n :returns: value as a hex string"} {"_id": "q_6170", "text": "Writes an unsigned long value across a byte array.\n\n :param data: the buffer to write the value to\n :type data: bytearray\n :param pos: the starting position\n :type pos: int\n :param value: the value to write\n :type value: unsigned long"} {"_id": "q_6171", "text": "mBank Collect uses transaction code 911 to distinguish incoming mass\n payments transactions, adding transaction_code may be helpful in further\n processing"} {"_id": "q_6172", "text": "mBank Collect uses ID IPH 
to distinguish between virtual accounts,\n adding iph_id may be helpful in further processing"} {"_id": "q_6173", "text": "mBank Collect states TNR in transaction details as unique id for\n transactions, that may be used to identify the same transactions in\n different statement files eg. partial mt942 and full mt940\n Information about tnr uniqueness has been obtained from mBank support,\n it is absent from the mt940 mBank specification."} {"_id": "q_6174", "text": "Parses mt940 data and returns transactions object\n\n :param src: file handler to read, filename to read or raw data as string\n :return: Collection of transactions\n :rtype: Transactions"} {"_id": "q_6175", "text": "Join strings together and strip whitespace in between if needed"} {"_id": "q_6176", "text": "Handles the message shown when we are ratelimited"} {"_id": "q_6177", "text": "Handles requests to the API"} {"_id": "q_6178", "text": "Gets the information of the given Bot ID"} {"_id": "q_6179", "text": "Gets an object of bots on DBL"} {"_id": "q_6180", "text": "Write outgoing message."} {"_id": "q_6181", "text": "Encode Erlang external term."} {"_id": "q_6182", "text": "Asks user for removal of project directory and eventually removes it"} {"_id": "q_6183", "text": "Check the defined project name against keywords, builtins and existing\n modules to avoid name clashing"} {"_id": "q_6184", "text": "Checks and validates provided input"} {"_id": "q_6185", "text": "Converts the current version to the next one for inserting into requirements\n in the ' < version' format"} {"_id": "q_6186", "text": "Parse config file.\n\n Returns a list of additional args."} {"_id": "q_6187", "text": "Install aldryn boilerplate\n\n :param config_data: configuration data"} {"_id": "q_6188", "text": "Create admin user without user input\n\n :param config_data: configuration data"} {"_id": "q_6189", "text": "Method sleeps, if nothing to do"} {"_id": "q_6190", "text": "cleans up and stops the discovery server"} {"_id": "q_6191", 
"text": "construct a raw SOAP XML string, given a prepared SoapEnvelope object"} {"_id": "q_6192", "text": "Return a list of RelatedObject records for child relations of the given model,\n including ones attached to ancestors of the model"} {"_id": "q_6193", "text": "Return a list of ParentalManyToManyFields on the given model,\n including ones attached to ancestors of the model"} {"_id": "q_6194", "text": "Save the model and commit all child relations."} {"_id": "q_6195", "text": "Build an instance of this model from the JSON-like structure passed in,\n recursing into related objects as required.\n If check_fks is true, it will check whether referenced foreign keys still\n exist in the database.\n - dangling foreign keys on related objects are dealt with by either nullifying the key or\n dropping the related object, according to the 'on_delete' setting.\n - dangling foreign keys on the base object will be nullified, unless strict_fks is true,\n in which case any dangling foreign keys with on_delete=CASCADE will cause None to be\n returned for the entire object."} {"_id": "q_6196", "text": "This clean method will check for unique_together condition"} {"_id": "q_6197", "text": "Return True if data differs from initial."} {"_id": "q_6198", "text": "Returns the address with a valid checksum attached."} {"_id": "q_6199", "text": "Generates the correct checksum for this address."} {"_id": "q_6200", "text": "Returns the argument parser that will be used to interpret\n arguments and options from argv."} {"_id": "q_6201", "text": "Returns whether a sequence of signature fragments is valid.\n\n :param fragments:\n Sequence of signature fragments (usually\n :py:class:`iota.transaction.Fragment` instances).\n\n :param hash_:\n Hash used to generate the signature fragments (usually a\n :py:class:`iota.transaction.BundleHash` instance).\n\n :param public_key:\n The public key value used to verify the signature digest (usually a\n :py:class:`iota.types.Address` instance).\n\n 
:param sponge_type:\n The class used to create the cryptographic sponge (i.e., Curl or Kerl)."} {"_id": "q_6202", "text": "Generates the key associated with the specified address.\n\n Note that this method will generate the wrong key if the input\n address was generated from a different key!"} {"_id": "q_6203", "text": "Creates a generator that can be used to progressively generate\n new keys.\n\n :param start:\n Starting index.\n\n Warning: This method may take a while to reset if ``start``\n is a large number!\n\n :param step:\n Number of indexes to advance after each key.\n\n This value can be negative; the generator will exit if it\n reaches an index < 0.\n\n Warning: The generator may take a while to advance between\n iterations if ``step`` is a large number!\n\n :param security_level:\n Number of _transform iterations to apply to each key.\n Must be >= 1.\n\n Increasing this value makes key generation slower, but more\n resistant to brute-forcing."} {"_id": "q_6204", "text": "Absorb trits into the sponge.\n\n :param trits:\n Sequence of trits to absorb.\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to absorb. 
Defaults to ``len(trits)``."} {"_id": "q_6205", "text": "Squeeze trits from the sponge.\n\n :param trits:\n Sequence that the squeezed trits will be copied to.\n Note: this object will be modified!\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to squeeze; defaults to ``HASH_LENGTH``"} {"_id": "q_6206", "text": "Transforms internal state."} {"_id": "q_6207", "text": "Generates one or more private keys from the seed.\n\n As the name implies, private keys should not be shared.\n However, in a few cases it may be necessary (e.g., for M-of-N\n transactions).\n\n :param index:\n The starting key index.\n\n :param count:\n Number of keys to generate.\n\n :param security_level:\n Number of iterations to use when generating new keys.\n\n Larger values take longer, but the resulting signatures are\n more secure.\n\n This value must be between 1 and 3, inclusive.\n\n :return:\n Dict with the following items::\n\n {\n 'keys': List[PrivateKey],\n Always contains a list, even if only one key was\n generated.\n }\n\n References:\n\n - :py:class:`iota.crypto.signing.KeyGenerator`\n - https://github.com/iotaledger/wiki/blob/master/multisigs.md#how-m-of-n-works"} {"_id": "q_6208", "text": "Prepares a bundle that authorizes the spending of IOTAs from a\n multisig address.\n\n .. note::\n This method is used exclusively to spend IOTAs from a\n multisig address.\n\n If you want to spend IOTAs from non-multisig addresses, or\n if you want to create 0-value transfers (i.e., that don't\n require inputs), use\n :py:meth:`iota.api.Iota.prepare_transfer` instead.\n\n :param transfers:\n Transaction objects to prepare.\n\n .. important::\n Must include at least one transaction that spends IOTAs\n (i.e., has a nonzero ``value``). If you want to prepare\n a bundle that does not spend any IOTAs, use\n :py:meth:`iota.api.prepare_transfer` instead.\n\n :param multisig_input:\n The multisig address to use as the input for the transfers.\n\n .. 
note::\n This method only supports creating a bundle with a\n single multisig input.\n\n If you would like to spend from multiple multisig\n addresses in the same bundle, create the\n :py:class:`iota.multisig.transaction.ProposedMultisigBundle`\n object manually.\n\n :param change_address:\n If inputs are provided, any unspent amount will be sent to\n this address.\n\n If the bundle has no unspent inputs, ``change_address`` is\n ignored.\n\n .. important::\n Unlike :py:meth:`iota.api.Iota.prepare_transfer`, this\n method will NOT generate a change address automatically.\n If there are unspent inputs and ``change_address`` is\n empty, an exception will be raised.\n\n This is because multisig transactions typically involve\n multiple individuals, and it would be unfair to the\n participants if we generated a change address\n automatically using the seed of whoever happened to run\n the ``prepare_multisig_transfer`` method!\n\n .. danger::\n Note that this protective measure is not a\n substitute for due diligence!\n\n Always verify the details of every transaction in a\n bundle (including the change transaction) before\n signing the input(s)!\n\n :return:\n Dict containing the following values::\n\n {\n 'trytes': List[TransactionTrytes],\n Finalized bundle, as trytes.\n The input transactions are not signed.\n }\n\n In order to authorize the spending of IOTAs from the multisig\n input, you must generate the correct private keys and invoke\n the :py:meth:`iota.crypto.types.PrivateKey.sign_input_at`\n method for each key, in the correct order.\n\n Once the correct signatures are applied, you can then perform\n proof of work (``attachToTangle``) and broadcast the bundle\n using :py:meth:`iota.api.Iota.send_trytes`."} {"_id": "q_6209", "text": "Adds two individual trits together.\n\n The result is always a single trit."} {"_id": "q_6210", "text": "Outputs the user's seed to stdout, along with lots of warnings\n about security."} {"_id": "q_6211", "text": "Find the 
transactions which match the specified input and\n return them.\n\n All input values are lists, for which a list of return values\n (transaction hashes), in the same order, is returned for all\n individual elements.\n\n Using multiple of these input fields returns the intersection of\n the values.\n\n :param bundles:\n List of bundle IDs.\n\n :param addresses:\n List of addresses.\n\n :param tags:\n List of tags.\n\n :param approvees:\n List of approvee transaction IDs.\n\n References:\n\n - https://iota.readme.io/docs/findtransactions"} {"_id": "q_6212", "text": "Gets all possible inputs of a seed and returns them, along with\n the total balance.\n\n This is either done deterministically (by generating all\n addresses until :py:meth:`find_transactions` returns an empty\n result), or by providing a key range to search.\n\n :param start:\n Starting key index.\n Defaults to 0.\n\n :param stop:\n Stop before this index.\n\n Note that this parameter behaves like the ``stop`` attribute\n in a :py:class:`slice` object; the stop index is *not*\n included in the result.\n\n If ``None`` (default), then this method will not stop until\n it finds an unused address.\n\n :param threshold:\n If set, determines the minimum threshold for a successful\n result:\n\n - As soon as this threshold is reached, iteration will stop.\n - If the command runs out of addresses before the threshold\n is reached, an exception is raised.\n\n .. 
note::\n This method does not attempt to \"optimize\" the result\n (e.g., smallest number of inputs, get as close to\n ``threshold`` as possible, etc.); it simply accumulates\n inputs in order until the threshold is met.\n\n If ``threshold`` is 0, the first address in the key range\n with a non-zero balance will be returned (if it exists).\n\n If ``threshold`` is ``None`` (default), this method will\n return **all** inputs in the specified key range.\n\n :param security_level:\n Number of iterations to use when generating new addresses\n (see :py:meth:`get_new_addresses`).\n\n This value must be between 1 and 3, inclusive.\n\n If not set, defaults to\n :py:attr:`AddressGenerator.DEFAULT_SECURITY_LEVEL`.\n\n :return:\n Dict with the following structure::\n\n {\n 'inputs': List[Address],\n Addresses with nonzero balances that can be used\n as inputs.\n\n 'totalBalance': int,\n Aggregate balance from all matching addresses.\n }\n\n Note that each Address in the result has its ``balance``\n attribute set.\n\n Example:\n\n .. code-block:: python\n\n response = iota.get_inputs(...)\n\n input0 = response['inputs'][0] # type: Address\n input0.balance # 42\n\n :raise:\n - :py:class:`iota.adapter.BadApiResponse` if ``threshold``\n is not met. Not applicable if ``threshold`` is ``None``.\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#getinputs"} {"_id": "q_6213", "text": "Generates one or more new addresses from the seed.\n\n :param index:\n The key index of the first new address to generate (must be\n >= 1).\n\n :param count:\n Number of addresses to generate (must be >= 1).\n\n .. 
tip::\n This is more efficient than calling ``get_new_address``\n inside a loop.\n\n If ``None``, this method will progressively generate\n addresses and scan the Tangle until it finds one that has no\n transactions referencing it.\n\n :param security_level:\n Number of iterations to use when generating new addresses.\n\n Larger values take longer, but the resulting signatures are\n more secure.\n\n This value must be between 1 and 3, inclusive.\n\n :param checksum:\n Specify whether to return the address with the checksum.\n Defaults to ``False``.\n\n :return:\n Dict with the following structure::\n\n {\n 'addresses': List[Address],\n Always a list, even if only one address was\n generated.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#getnewaddress"} {"_id": "q_6214", "text": "Promotes a transaction by adding spam on top of it.\n\n :return:\n Dict with the following structure::\n\n {\n 'bundle': Bundle,\n The newly-published bundle.\n }"} {"_id": "q_6215", "text": "Takes a tail transaction hash as input, gets the bundle\n associated with the transaction and then replays the bundle by\n attaching it to the Tangle.\n\n :param transaction:\n Transaction hash. 
Must be a tail.\n\n :param depth:\n Depth at which to attach the bundle.\n Defaults to 3.\n\n :param min_weight_magnitude:\n Min weight magnitude, used by the node to calibrate Proof of\n Work.\n\n If not provided, a default value will be used.\n\n :return:\n Dict with the following structure::\n\n {\n 'trytes': List[TransactionTrytes],\n Raw trytes that were published to the Tangle.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#replaytransfer"} {"_id": "q_6216", "text": "Prepares a set of transfers and creates the bundle, then\n attaches the bundle to the Tangle, and broadcasts and stores the\n transactions.\n\n :param transfers:\n Transfers to include in the bundle.\n\n :param depth:\n Depth at which to attach the bundle.\n Defaults to 3.\n\n :param inputs:\n List of inputs used to fund the transfer.\n Not needed for zero-value transfers.\n\n :param change_address:\n If inputs are provided, any unspent amount will be sent to\n this address.\n\n If not specified, a change address will be generated\n automatically.\n\n :param min_weight_magnitude:\n Min weight magnitude, used by the node to calibrate Proof of\n Work.\n\n If not provided, a default value will be used.\n\n :param security_level:\n Number of iterations to use when generating new addresses\n (see :py:meth:`get_new_addresses`).\n\n This value must be between 1 and 3, inclusive.\n\n If not set, defaults to\n :py:attr:`AddressGenerator.DEFAULT_SECURITY_LEVEL`.\n\n :return:\n Dict with the following structure::\n\n {\n 'bundle': Bundle,\n The newly-published bundle.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#sendtransfer"} {"_id": "q_6217", "text": "Attaches transaction trytes to the Tangle, then broadcasts and\n stores them.\n\n :param trytes:\n Transaction encoded as a tryte sequence.\n\n :param depth:\n Depth at which to attach the bundle.\n Defaults to 3.\n\n :param min_weight_magnitude:\n Min weight magnitude, 
used by the node to calibrate Proof of\n Work.\n\n If not provided, a default value will be used.\n\n :return:\n Dict with the following structure::\n\n {\n 'trytes': List[TransactionTrytes],\n Raw trytes that were published to the Tangle.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#sendtrytes"} {"_id": "q_6218", "text": "Given a URI, returns a properly-configured adapter instance."} {"_id": "q_6219", "text": "Sends an API request to the node.\n\n :param payload:\n JSON payload.\n\n :param kwargs:\n Additional keyword arguments for the adapter.\n\n :return:\n Decoded response from the node.\n\n :raise:\n - :py:class:`BadApiResponse` if a non-success response was\n received."} {"_id": "q_6220", "text": "Sends a message to the instance's logger, if configured."} {"_id": "q_6221", "text": "Sends the actual HTTP request.\n\n Split into its own method so that it can be mocked during unit\n tests."} {"_id": "q_6222", "text": "Absorbs a digest into the sponge.\n\n .. 
important::\n Keep track of the order that digests are added!\n\n To spend inputs from a multisig address, you must provide\n the private keys in the same order!\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/multisigs.md#spending-inputs"} {"_id": "q_6223", "text": "Creates an iterator that can be used to progressively generate new\n addresses.\n\n :param start:\n Starting index.\n\n Warning: This method may take a while to reset if ``start``\n is a large number!\n\n :param step:\n Number of indexes to advance after each address.\n\n Warning: The generator may take a while to advance between\n iterations if ``step`` is a large number!"} {"_id": "q_6224", "text": "Generates an address from a private key digest."} {"_id": "q_6225", "text": "Generates a new address.\n\n Used in the event of a cache miss."} {"_id": "q_6226", "text": "Scans the Tangle for used addresses.\n\n This is basically the opposite of invoking ``getNewAddresses`` with\n ``stop=None``."} {"_id": "q_6227", "text": "Determines which codec to use for the specified encoding.\n\n References:\n\n - https://docs.python.org/3/library/codecs.html#codecs.register"} {"_id": "q_6228", "text": "Encodes a byte string into trytes."} {"_id": "q_6229", "text": "Decodes a tryte string into bytes."} {"_id": "q_6230", "text": "Adds a route to the wrapper.\n\n :param command:\n The name of the command to route (e.g., \"attachToTangle\").\n\n :param adapter:\n The adapter object or URI to route requests to."} {"_id": "q_6231", "text": "Returns a JSON-compatible representation of the object.\n\n References:\n\n - :py:class:`iota.json.JsonEncoder`."} {"_id": "q_6232", "text": "Returns the values needed to validate the transaction's\n ``signature_message_fragment`` value."} {"_id": "q_6233", "text": "Returns TryteString representations of the transactions in this\n bundle.\n\n :param head_to_tail:\n Determines the order of the transactions:\n\n - ``True``: head txn first, tail txn last.\n - ``False`` 
(default): tail txn first, head txn last.\n\n Note that the order is reversed by default, as this is the\n way bundles are typically broadcast to the Tangle."} {"_id": "q_6234", "text": "Automatically discover commands in the specified package.\n\n :param package:\n Package path or reference.\n\n :param recursively:\n If True, will descend recursively into sub-packages.\n\n :return:\n All commands discovered in the specified package, indexed by\n command name (note: not class name)."} {"_id": "q_6235", "text": "Sends the request object to the adapter and returns the response.\n\n The command name will be automatically injected into the request\n before it is sent (note: this will modify the request object)."} {"_id": "q_6236", "text": "Applies a filter to a value. If the value does not pass the\n filter, an exception will be raised with lots of contextual info\n attached to it."} {"_id": "q_6237", "text": "Returns the URL to check job status.\n\n :param job_id:\n The ID of the job to check."} {"_id": "q_6238", "text": "Validates the signature fragments in the bundle.\n\n :return:\n List of error messages.\n If empty, signature fragments are valid."} {"_id": "q_6239", "text": "Validates the signature fragments for a group of transactions\n using the specified sponge type.\n\n Note: this method assumes that the transactions in the group\n have already passed basic validation (see\n :py:meth:`_create_validator`).\n\n :return:\n - ``None``: Indicates that the signature fragments are valid.\n - ``Text``: Error message indicating the fragments are invalid."} {"_id": "q_6240", "text": "Recursively traverse the Tangle, collecting transactions until\n we hit a new bundle.\n\n This method is (usually) faster than ``findTransactions``, and\n it ensures we don't collect transactions from replayed bundles."} {"_id": "q_6241", "text": "Starts the REPL."} {"_id": "q_6242", "text": "Generates a random seed using a CSPRNG.\n\n :param length:\n Length of seed, in trytes.\n\n For 
maximum security, this should always be set to 81, but\n you can change it if you're 110% sure you know what you're\n doing.\n\n See https://iota.stackexchange.com/q/249 for more info."} {"_id": "q_6243", "text": "Generates the digest used to do the actual signing.\n\n Signing keys can have variable length and tend to be quite long,\n which makes them not-well-suited for use in crypto algorithms.\n\n The digest is essentially the result of running the signing key\n through a PBKDF, yielding a constant-length hash that can be\n used for crypto."} {"_id": "q_6244", "text": "Makes JSON-serializable objects play nice with IPython's default\n pretty-printer.\n\n Sadly, :py:func:`pprint.pprint` does not have a similar\n mechanism.\n\n References:\n\n - http://ipython.readthedocs.io/en/stable/api/generated/IPython.lib.pretty.html\n - :py:meth:`IPython.lib.pretty.RepresentationPrinter.pretty`\n - :py:func:`pprint._safe_repr`"} {"_id": "q_6245", "text": "Absorb trits into the sponge from a buffer.\n\n :param trits:\n Buffer that contains the trits to absorb.\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to absorb. 
Defaults to ``len(trits)``."} {"_id": "q_6246", "text": "Squeeze trits from the sponge into a buffer.\n\n :param trits:\n Buffer that will hold the squeezed trits.\n\n IMPORTANT: If ``trits`` is too small, it will be extended!\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to squeeze from the sponge.\n\n If not specified, defaults to :py:data:`TRIT_HASH_LENGTH`\n (i.e., by default, we will try to squeeze exactly 1 hash)."} {"_id": "q_6247", "text": "Increments the transaction's legacy tag, used to fix insecure\n bundle hashes when finalizing a bundle.\n\n References:\n\n - https://github.com/iotaledger/iota.lib.py/issues/84"} {"_id": "q_6248", "text": "Determines the most relevant tag for the bundle."} {"_id": "q_6249", "text": "Adds a transaction to the bundle.\n\n If the transaction message is too long, it will be split\n automatically into multiple transactions."} {"_id": "q_6250", "text": "Finalizes the bundle, preparing it to be attached to the Tangle."} {"_id": "q_6251", "text": "Signs the input at the specified index.\n\n :param start_index:\n The index of the first input transaction.\n\n If necessary, the resulting signature will be split across\n multiple transactions automatically (i.e., if an input has\n ``security_level=2``, you still only need to call\n :py:meth:`sign_input_at` once).\n\n :param private_key:\n The private key that will be used to generate the signature.\n\n .. important::\n Be sure that the private key was generated using the\n correct seed, or the resulting signature will be\n invalid!"} {"_id": "q_6252", "text": "Creates transactions for the specified input address."} {"_id": "q_6253", "text": "Converts between any two standard units of iota.\n\n :param value:\n Value (affixed) to convert. For example: '1.618 Mi'.\n\n :param symbol:\n Unit symbol of iota to convert to. 
For example: 'Gi'.\n\n :return:\n Float as units of given symbol to convert to."} {"_id": "q_6254", "text": "Pass an argument list to SoX.\n\n Parameters\n ----------\n args : iterable\n Argument list for SoX. The first item can, but does not\n need to, be 'sox'.\n\n Returns:\n --------\n status : bool\n True on success."} {"_id": "q_6255", "text": "Calls SoX help for a list of audio formats available with the current\n install of SoX.\n\n Returns:\n --------\n formats : list\n List of audio file extensions that SoX can process."} {"_id": "q_6256", "text": "Base call to SoXI.\n\n Parameters\n ----------\n filepath : str\n Path to audio file.\n\n argument : str\n Argument to pass to SoXI.\n\n Returns\n -------\n shell_output : str\n Command line output of SoXI"} {"_id": "q_6257", "text": "Pass an argument list to play.\n\n Parameters\n ----------\n args : iterable\n Argument list for play. The first item can, but does not\n need to, be 'play'.\n\n Returns:\n --------\n status : bool\n True on success."} {"_id": "q_6258", "text": "Validate that combine method can be performed with given files.\n Raises IOError if input file formats are incompatible."} {"_id": "q_6259", "text": "Check if files in input file list have the same sample rate"} {"_id": "q_6260", "text": "Set input formats given input_volumes.\n\n Parameters\n ----------\n input_filepath_list : list of str\n List of input files\n input_volumes : list of float, default=None\n List of volumes to be applied upon combining input files. Volumes\n are applied to the input files in order.\n If None, input files will be combined at their original volumes.\n input_format : list of lists, default=None\n List of input formats to be applied to each input file. 
Formatting\n arguments are applied to the input files in order.\n If None, the input formats will be inferred from the file header."} {"_id": "q_6261", "text": "Check input_volumes contains a valid list of volumes.\n\n Parameters\n ----------\n input_volumes : list\n list of volume values. Castable to numbers."} {"_id": "q_6262", "text": "Input file validation function. Checks that file exists and can be\n processed by SoX.\n\n Parameters\n ----------\n input_filepath : str\n The input filepath."} {"_id": "q_6263", "text": "Output file validation function. Checks that file can be written, and\n has a valid file extension. Throws a warning if the path already exists,\n as it will be overwritten on build.\n\n Parameters\n ----------\n output_filepath : str\n The output filepath.\n\n Returns:\n --------\n output_filepath : str\n The output filepath."} {"_id": "q_6264", "text": "Get a dictionary of file information\n\n Parameters\n ----------\n filepath : str\n File path.\n\n Returns:\n --------\n info_dictionary : dict\n Dictionary of file information. Fields are:\n * channels\n * sample_rate\n * bitrate\n * duration\n * num_samples\n * encoding\n * silent"} {"_id": "q_6265", "text": "Call sox's stat function.\n\n Parameters\n ----------\n filepath : str\n File path.\n\n Returns\n -------\n stat_output : str\n Sox output from stderr."} {"_id": "q_6266", "text": "Apply a biquad IIR filter with the given coefficients.\n\n Parameters\n ----------\n b : list of floats\n Numerator coefficients. Must be length 3\n a : list of floats\n Denominator coefficients. Must be length 3\n\n See Also\n --------\n fir, treble, bass, equalizer"} {"_id": "q_6267", "text": "Change the number of channels in the audio signal. 
If decreasing the\n number of channels, it mixes channels together; if increasing the number\n of channels, it duplicates them.\n\n Note: This overrides arguments used in the convert effect!\n\n Parameters\n ----------\n n_channels : int\n Desired number of channels.\n\n See Also\n --------\n convert"} {"_id": "q_6268", "text": "Comparable with compression, this effect modifies an audio signal to\n make it sound louder.\n\n Parameters\n ----------\n amount : float\n Amount of enhancement between 0 and 100.\n\n See Also\n --------\n compand, mcompand"} {"_id": "q_6269", "text": "Apply a DC shift to the audio.\n\n Parameters\n ----------\n shift : float\n Amount to shift audio between -2 and 2. (Audio is between -1 and 1)\n\n See Also\n --------\n highpass"} {"_id": "q_6270", "text": "Apply a flanging effect to the audio.\n\n Parameters\n ----------\n delay : float, default=0\n Base delay (in milliseconds) between 0 and 30.\n depth : float, default=2\n Added swept delay (in milliseconds) between 0 and 10.\n regen : float, default=0\n Percentage regeneration between -95 and 95.\n width : float, default=71\n Percentage of delayed signal mixed with original between 0 and 100.\n speed : float, default=0.5\n Sweeps per second (in Hz) between 0.1 and 10.\n shape : 'sine' or 'triangle', default='sine'\n Swept wave shape\n phase : float, default=25\n Swept wave percentage phase-shift for multi-channel flange between\n 0 and 100. 
0 = 100 = same phase on each channel\n interp : 'linear' or 'quadratic', default='linear'\n Digital delay-line interpolation type.\n\n See Also\n --------\n tremolo"} {"_id": "q_6271", "text": "Apply amplification or attenuation to the audio signal.\n\n Parameters\n ----------\n gain_db : float, default=0.0\n Gain adjustment in decibels (dB).\n normalize : bool, default=True\n If True, audio is normalized to gain_db relative to full scale.\n If False, simply adjusts the audio power level by gain_db.\n limiter : bool, default=False\n If True, a simple limiter is invoked to prevent clipping.\n balance : str or None, default=None\n Balance gain across channels. Can be one of:\n * None applies no balancing (default)\n * 'e' applies gain to all channels other than that with the\n highest peak level, such that all channels attain the same\n peak level\n * 'B' applies gain to all channels other than that with the\n highest RMS level, such that all channels attain the same\n RMS level\n * 'b' applies gain with clipping protection to all channels other\n than that with the highest RMS level, such that all channels\n attain the same RMS level\n If normalize=True, 'B' and 'b' are equivalent.\n\n See Also\n --------\n loudness"} {"_id": "q_6272", "text": "Loudness control. Similar to the gain effect, but provides\n equalisation for the human auditory system.\n\n The gain is adjusted by gain_db and the signal is equalised according\n to ISO 226 w.r.t. reference_level.\n\n Parameters\n ----------\n gain_db : float, default=-10.0\n Loudness adjustment amount (in dB)\n reference_level : float, default=65.0\n Reference level (in dB) according to which the signal is equalized.\n Must be between 50 and 75 (dB)\n\n See Also\n --------\n gain"} {"_id": "q_6273", "text": "Calculate a profile of the audio for use in noise reduction.\n Running this command does not affect the Transformer effects\n chain. 
When this function is called, the calculated noise profile\n file is saved to the `profile_path`.\n\n Parameters\n ----------\n input_filepath : str\n Path to audiofile from which to compute a noise profile.\n profile_path : str\n Path to save the noise profile file.\n\n See Also\n --------\n noisered"} {"_id": "q_6274", "text": "Normalize an audio file to a particular db level.\n This behaves identically to the gain effect with normalize=True.\n\n Parameters\n ----------\n db_level : float, default=-3.0\n Output volume (db)\n\n See Also\n --------\n gain, loudness"} {"_id": "q_6275", "text": "Add silence to the beginning or end of a file.\n Calling this with the default arguments has no effect.\n\n Parameters\n ----------\n start_duration : float\n Number of seconds of silence to add to beginning.\n end_duration : float\n Number of seconds of silence to add to end.\n\n See Also\n --------\n delay"} {"_id": "q_6276", "text": "Pitch shift the audio without changing the tempo.\n\n This effect uses the WSOLA algorithm. The audio is chopped up into\n segments which are then shifted in the time domain and overlapped\n (cross-faded) at points where their waveforms are most similar as\n determined by measurement of least squares.\n\n Parameters\n ----------\n n_semitones : float\n The number of semitones to shift. Can be positive or negative.\n quick : bool, default=False\n If True, this effect will run faster but with lower sound quality.\n\n See Also\n --------\n bend, speed, tempo"} {"_id": "q_6277", "text": "Remix the channels of an audio file.\n\n Note: volume options are not yet implemented\n\n Parameters\n ----------\n remix_dictionary : dict or None\n Dictionary mapping output channel to list of input channel(s).\n Empty lists indicate the corresponding output channel should be\n empty. If None, mixes all channels down to a single mono file.\n num_output_channels : int or None\n The number of channels in the output file. 
If None, the number of\n output channels is equal to the largest key in remix_dictionary.\n If remix_dictionary is None, this variable is ignored.\n\n Examples\n --------\n Remix a 4-channel input file. The output file will have\n input channel 2 in channel 1, a mixdown of input channels 1 and 3 in\n channel 2, an empty channel 3, and a copy of input channel 4 in\n channel 4.\n\n >>> import sox\n >>> tfm = sox.Transformer()\n >>> remix_dictionary = {1: [2], 2: [1, 3], 4: [4]}\n >>> tfm.remix(remix_dictionary)"} {"_id": "q_6278", "text": "Repeat the entire audio count times.\n\n Parameters\n ----------\n count : int, default=1\n The number of times to repeat the audio."} {"_id": "q_6279", "text": "Reverse the audio completely"} {"_id": "q_6280", "text": "Removes silent regions from an audio file.\n\n Parameters\n ----------\n location : int, default=0\n Where to remove silence. One of:\n * 0 to remove silence throughout the file (default),\n * 1 to remove silence from the beginning,\n * -1 to remove silence from the end,\n silence_threshold : float, default=0.1\n Silence threshold as percentage of maximum sample amplitude.\n Must be between 0 and 100.\n min_silence_duration : float, default=0.1\n The minimum amount of time in seconds required for a region to be\n considered non-silent.\n buffer_around_silence : bool, default=False\n If True, leaves a buffer of min_silence_duration around removed\n silent regions.\n\n See Also\n --------\n vad"} {"_id": "q_6281", "text": "Display time domain statistical information about the audio\n channels. Audio is passed unmodified through the SoX processing chain.\n Statistics are calculated and displayed for each audio channel.\n\n Unlike other Transformer methods, this does not modify the transformer\n effects chain. 
Instead it computes statistics on the output file that\n would be created if the build command were invoked.\n\n Note: The file is downmixed to mono prior to computation.\n\n Parameters\n ----------\n input_filepath : str\n Path to input file to compute stats on.\n\n Returns\n -------\n stats_dict : dict\n List of frequency (Hz), amplitude pairs.\n\n See Also\n --------\n stat, sox.file_info"} {"_id": "q_6282", "text": "Swap stereo channels. If the input is not stereo, pairs of channels\n are swapped, and a possible odd last channel passed through.\n\n E.g., for seven channels, the output order will be 2, 1, 4, 3, 6, 5, 7.\n\n See Also\n ----------\n remix"} {"_id": "q_6283", "text": "Excerpt a clip from an audio file, given the start timestamp and end timestamp of the clip within the file, expressed in seconds. If the end timestamp is set to `None` or left unspecified, it defaults to the duration of the audio file.\n\n Parameters\n ----------\n start_time : float\n Start time of the clip (seconds)\n end_time : float or None, default=None\n End time of the clip (seconds)"} {"_id": "q_6284", "text": "Voice Activity Detector. Attempts to trim silence and quiet\n background sounds from the ends of recordings of speech. The algorithm\n currently uses a simple cepstral power measurement to detect voice, so\n may be fooled by other things, especially music.\n\n The effect can trim only from the front of the audio, so in order to\n trim from the back, the reverse effect must also be used.\n\n Parameters\n ----------\n location : 1 or -1, default=1\n If 1, trims silence from the beginning\n If -1, trims silence from the end\n normalize : bool, default=True\n If true, normalizes audio before processing.\n activity_threshold : float, default=7.0\n The measurement level used to trigger activity detection. 
This may\n need to be changed depending on the noise level, signal level, and\n other characteristics of the input audio.\n min_activity_duration : float, default=0.25\n The time constant (in seconds) used to help ignore short bursts of\n sound.\n initial_search_buffer : float, default=1.0\n The amount of audio (in seconds) to search for quieter/shorter\n bursts of audio to include prior to the detected trigger point.\n max_gap : float, default=0.25\n The allowed gap (in seconds) between quieter/shorter bursts of\n audio to include prior to the detected trigger point.\n initial_pad : float, default=0.0\n The amount of audio (in seconds) to preserve before the trigger\n point and any found quieter/shorter bursts.\n\n See Also\n --------\n silence\n\n Examples\n --------\n >>> tfm = sox.Transformer()\n\n Remove silence from the beginning of speech\n\n >>> tfm.vad(initial_pad=0.3)\n\n Remove silence from the end of speech\n\n >>> tfm.vad(location=-1, initial_pad=0.2)"} {"_id": "q_6285", "text": "Apply an amplification or an attenuation to the audio signal.\n\n Parameters\n ----------\n gain : float\n Interpreted according to the given `gain_type`.\n If `gain_type' = 'amplitude', `gain' is a positive amplitude ratio.\n If `gain_type' = 'power', `gain' is a power (voltage squared).\n If `gain_type' = 'db', `gain' is in decibels.\n gain_type : string, default='amplitude'\n Type of gain. 
One of:\n - 'amplitude'\n - 'power'\n - 'db'\n limiter_gain : float or None, default=None\n If specified, a limiter is invoked on peaks greater than\n `limiter_gain` to prevent clipping.\n `limiter_gain` should be a positive value much less than 1.\n\n See Also\n --------\n gain, compand"} {"_id": "q_6286", "text": "Extended Euclidean algorithm to find modular inverses for integers"} {"_id": "q_6287", "text": "Lets a user join a room on a specific Namespace."} {"_id": "q_6288", "text": "Lets a user leave a room on a specific Namespace."} {"_id": "q_6289", "text": "Main SocketIO management function, call from within your Framework of\n choice's view.\n\n The ``environ`` variable is the WSGI ``environ``. It is used to extract the\n Socket object from the underlying server (as the 'socketio' key), and will\n be attached to both the ``Socket`` and ``Namespace`` objects.\n\n The ``namespaces`` parameter is a dictionary of the namespace string\n representation as key, and the BaseNamespace namespace class descendant as\n a value. The empty string ('') namespace is the global namespace. You can\n use Socket.GLOBAL_NS to be more explicit. So it would look like:\n\n .. code-block:: python\n\n namespaces={'': GlobalNamespace,\n '/chat': ChatNamespace}\n\n The ``request`` object is not required, but will probably be useful to pass\n framework-specific things into your Socket and Namespace functions. It will\n simply be attached to the Socket and Namespace object (accessible through\n ``self.request`` in both cases), and it is not accessed in any case by the\n ``gevent-socketio`` library.\n\n Pass in an ``error_handler`` if you want to override the default\n error_handler (which is :func:`socketio.virtsocket.default_error_handler`).\n The callable you pass in should have the same signature as the default\n error handler.\n\n The ``json_loads`` and ``json_dumps`` are overrides for the default\n ``json.loads`` and ``json.dumps`` function calls.
Override these at\n the top-most level here. This will affect all sockets created by this\n socketio manager, and all namespaces inside.\n\n This function will block the current \"view\" or \"controller\" in your\n framework to do the recv/send on the socket, and dispatch incoming messages\n to your namespaces.\n\n This is a simple example using Pyramid:\n\n .. code-block:: python\n\n def my_view(request):\n socketio_manage(request.environ, {'': GlobalNamespace}, request)\n\n NOTE: You must understand that this function is going to be called\n *only once* per socket opening, *even though* you are using a long\n polling mechanism. The subsequent calls (for long polling) will\n be hooked directly at the server-level, to interact with the\n active ``Socket`` instance. This means you will *not* get access\n to the future ``request`` or ``environ`` objects. This is of\n particular importance regarding sessions (like Beaker). The\n session will be opened once at the opening of the Socket, and not\n closed until the socket is closed. 
You are responsible for\n opening and closing the cookie-based session yourself if you want\n to keep its data in sync with the rest of your GET/POST calls."} {"_id": "q_6290", "text": "Keep a reference of the callback on this socket."} {"_id": "q_6291", "text": "Fetch the callback for a given msgid, if it exists, otherwise,\n return None"} {"_id": "q_6292", "text": "Get multiple messages, in case we're going through the various\n XHR-polling methods, on which we can pack more than one message if the\n rate is high, and encode the payload for the HTTP channel."} {"_id": "q_6293", "text": "This removes a Namespace object from the socket.\n\n This is usually called by\n :meth:`~socketio.namespace.BaseNamespace.disconnect`."} {"_id": "q_6294", "text": "Low-level interface to queue a packet on the wire (encoded as wire\n protocol)"} {"_id": "q_6295", "text": "Spawn a new Greenlet, attached to this Socket instance.\n\n It will be monitored by the \"watcher\" method"} {"_id": "q_6296", "text": "Start the heartbeat Greenlet to check connection health."} {"_id": "q_6297", "text": "You should always use this function to call the methods,\n as it checks if the user is allowed according to the ACLs.\n\n If you override :meth:`process_packet` or\n :meth:`process_event`, you will definitely want to use this\n instead of ``getattr(self, 'my_method')()``"} {"_id": "q_6298", "text": "Use this to have the configured ``error_handler`` yield an\n error message to your application.\n\n :param error_name: is a short string, to associate messages to recovery\n methods\n :param error_message: is some human-readable text, describing the error\n :param msg_id: is used to associate with a request\n :param quiet: specific to error_handlers.
The default doesn't send a\n message to the user, but shows a debug message on the\n developer console."} {"_id": "q_6299", "text": "Use send to send a simple string message.\n\n If ``json`` is True, the message will be encoded as a JSON object\n on the wire, and decoded on the other side.\n\n This is mostly for backwards compatibility. ``emit()`` is more fun.\n\n :param callback: This is a callback function that will be\n called automatically by the client upon\n reception. It does not verify that the\n listener over there was completed with\n success. It just tells you that the browser\n got a hold of the packet.\n :type callback: callable"} {"_id": "q_6300", "text": "Spawn a new process, attached to this Namespace.\n\n It will be monitored by the \"watcher\" process in the Socket. If the\n socket disconnects, all these greenlets are going to be killed, after\n calling BaseNamespace.disconnect()\n\n This method uses the ``exception_handler_decorator``. See\n Namespace documentation for more information."} {"_id": "q_6301", "text": "Return an existing or new client Socket."} {"_id": "q_6302", "text": "Handles post from the \"Add room\" form on the homepage, and\n redirects to the new room."} {"_id": "q_6303", "text": "This will fetch the messages from the Socket's queue, and if\n there are many messages, pack multiple messages in one payload and return"} {"_id": "q_6304", "text": "Just quote out stuff before sending it out"} {"_id": "q_6305", "text": "This is sent to all the sockets in this particular Namespace,\n including itself."} {"_id": "q_6306", "text": "Add a parent to this role,\n and add role itself to the parent's children set.\n You should override this function if necessary.\n\n Example::\n\n logged_user = RoleMixin('logged_user')\n student = RoleMixin('student')\n student.add_parent(logged_user)\n\n :param parent: Parent role to add in."} {"_id": "q_6307", "text": "Add allowing rules.\n\n :param role: Role of this rule.\n :param method: Method to
allow in rule, including GET, POST, PUT etc.\n :param resource: Resource, i.e. the view function.\n :param with_children: Allow role's children in rule as well\n if with_children is `True`"} {"_id": "q_6308", "text": "Add denying rules.\n\n :param role: Role of this rule.\n :param method: Method to deny in rule, including GET, POST, PUT etc.\n :param resource: Resource, i.e. the view function.\n :param with_children: Deny role's children in rule as well\n if with_children is `True`"} {"_id": "q_6309", "text": "Check whether role is allowed to access resource\n\n :param role: Role to be checked.\n :param method: Method to be checked.\n :param resource: View function to be checked."} {"_id": "q_6310", "text": "Check whether role is denied to access resource\n\n :param role: Role to be checked.\n :param method: Method to be checked.\n :param resource: View function to be checked."} {"_id": "q_6311", "text": "This is a decorator function.\n\n You can allow roles to access the view func with it.\n\n An example::\n\n @app.route('/website/setting', methods=['GET', 'POST'])\n @rbac.allow(['administrator', 'super_user'], ['GET', 'POST'])\n def website_setting():\n return Response('Setting page.')\n\n :param roles: List, each name of roles. Please note that\n `anonymous` refers to anonymous users.\n If you add `anonymous` to the rule,\n everyone can access the resource,\n unless you deny other roles.\n :param methods: List, each name of methods.\n methods is valid in ['GET', 'POST', 'PUT', 'DELETE']\n :param with_children: Whether to allow children of roles as well.\n True by default."} {"_id": "q_6312", "text": "Given a string and a category, finds and combines words into\n groups based on their proximity.\n\n Args:\n text (str): Some text.\n tokens (list): A list of regex strings.\n\n Returns:\n list.
The combined strings it found.\n\n Example:\n COLOURS = [r\"red(?:dish)?\", r\"grey(?:ish)?\", r\"green(?:ish)?\"]\n s = 'GREYISH-GREEN limestone with RED or GREY sandstone.'\n find_word_groups(s, COLOURS) --> ['greyish green', 'red', 'grey']"} {"_id": "q_6313", "text": "Given a string and a dict of synonyms, returns the 'preferred'\n word. Case insensitive.\n\n Args:\n word (str): A word.\n\n Returns:\n str: The preferred word, or the input word if not found.\n\n Example:\n >>> syn = {'snake': ['python', 'adder']}\n >>> find_synonym('adder', syn)\n 'snake'\n >>> find_synonym('rattler', syn)\n 'rattler'\n\n TODO:\n Make it handle case, returning the same case it received."} {"_id": "q_6314", "text": "Parse a piece of text and replace any abbreviations with their full\n word equivalents. Uses the lexicon.abbreviations dictionary to find\n abbreviations.\n\n Args:\n text (str): The text to parse.\n\n Returns:\n str: The text with abbreviations replaced."} {"_id": "q_6315", "text": "Split a description into parts, each of which can be turned into\n a single component."} {"_id": "q_6316", "text": "Returns a minimal Decor with a random colour."} {"_id": "q_6317", "text": "Make a simple plot of the Decor.\n\n Args:\n fmt (str): A Python format string for the component summaries.\n fig (Pyplot figure): A figure, optional. Use either fig or ax, not\n both.\n ax (Pyplot axis): An axis, optional. Use either fig or ax, not\n both.\n\n Returns:\n fig or ax or None. If you pass in an ax, you get it back. If you pass\n in a fig, you get it. If you pass nothing, the function creates a\n plot object as a side-effect."} {"_id": "q_6318", "text": "Generate a default legend.\n\n Args:\n name (str): The name of the legend you want. Not case sensitive.\n 'nsdoe': Nova Scotia Dept. of Energy\n 'canstrat': Canstrat\n 'nagmdm__6_2': USGS N. Am. Geol. Map Data Model 6.2\n 'nagmdm__6_1': USGS N. Am. Geol. Map Data Model 6.1\n 'nagmdm__4_3': USGS N. Am. Geol. 
Map Data Model 4.3\n 'sgmc': USGS State Geologic Map Compilation\n\n Default 'nagmdm__6_2'.\n\n Returns:\n Legend: The legend stored in `defaults.py`."} {"_id": "q_6319", "text": "Generate a random legend for a given list of components.\n\n Args:\n components (list or Striplog): A list of components. If you pass\n a Striplog, it will use the primary components. If you pass a\n component on its own, you will get a random Decor.\n width (bool): Also generate widths for the components, based on the\n order in which they are encountered.\n colour (str): If you want to give the Decors all the same colour,\n provide a hex string.\n Returns:\n Legend or Decor: A legend (or Decor) with random colours.\n TODO:\n It might be convenient to have a partial method to generate an\n 'empty' legend. Might be an easy way for someone to start with a\n template, since it'll have the components in it already."} {"_id": "q_6320", "text": "A slightly easier way to make legends from images.\n\n Args:\n filename (str)\n components (list)\n ignore (list): Colours to ignore, e.g. \"#FFFFFF\" to ignore white.\n col_offset (Number): If < 1, interpreted as proportion of way\n across the image. If > 1, interpreted as pixels from left.\n row_offset (int): Number of pixels to skip at the top of each\n interval."} {"_id": "q_6321", "text": "Read CSV text and generate a Legend.\n\n Args:\n string (str): The CSV string.\n\n In the first row, list the properties. Precede the properties of the\n component with 'comp ' or 'component '. For example:\n\n colour, width, comp lithology, comp colour\n #FFFFFF, 0, ,\n #F7E9A6, 3, Sandstone, Grey\n #FF99CC, 2, Anhydrite,\n ... 
etc\n\n Note:\n To edit a legend, the easiest thing to do is probably this:\n\n - `legend.to_csv()`\n - Edit the legend, call it `new_legend`.\n - `legend = Legend.from_csv(text=new_legend)`"} {"_id": "q_6322", "text": "Renders a legend as a CSV string.\n\n No arguments.\n\n Returns:\n str: The legend as a CSV."} {"_id": "q_6323", "text": "The maximum width of all the Decors in the Legend. This is needed\n to scale a Legend or Striplog when plotting with widths turned on."} {"_id": "q_6324", "text": "Get the decor for a component.\n\n Args:\n c (component): The component to look up.\n match_only (list of str): The component attributes to include in the\n comparison. Default: All of them.\n\n Returns:\n Decor. The matching Decor from the Legend, or None if not found."} {"_id": "q_6325", "text": "Get the component corresponding to a display colour. This is for\n generating a Striplog object from a colour image of a striplog.\n\n Args:\n colour (str): The hex colour string to look up.\n tolerance (float): The colourspace distance within which to match.\n default (component or None): The component to return in the event\n of no match.\n\n Returns:\n component. The component best matching the provided colour."} {"_id": "q_6326", "text": "Generate a Component from a text string, using a Lexicon.\n\n Args:\n text (str): The text string to parse.\n lexicon (Lexicon): The dictionary to use for the\n categories and lexemes.\n required (str): An attribute that we must have. If a required\n attribute is missing from the component, then None is returned.\n first_only (bool): Whether to only take the first\n match of a lexeme against the text string.\n\n Returns:\n Component: A Component object, or None if there was no\n must-have field."} {"_id": "q_6327", "text": "Given a format string, return a summary description of a component.\n\n Args:\n component (dict): A component dictionary.\n fmt (str): Describes the format with a string. 
If no format is\n given, you will just get a list of attributes. If you give the\n empty string (''), you'll get `default` back. By default this\n gives you the empty string, effectively suppressing the\n summary.\n initial (bool): Whether to capitalize the first letter. Default is\n True.\n default (str): What to give if there's no component defined.\n\n Returns:\n str: A summary string.\n\n Example:\n\n r = Component({'colour': 'Red',\n 'grainsize': 'VF-F',\n 'lithology': 'Sandstone'})\n\n r.summary() --> 'Red, vf-f, sandstone'"} {"_id": "q_6328", "text": "Graceful deprecation for old class name."} {"_id": "q_6329", "text": "Processes a single row from the file."} {"_id": "q_6330", "text": "Read all the rows and return a dict of the results."} {"_id": "q_6331", "text": "Private method. Checks if striplog is monotonically increasing in\n depth.\n\n Returns:\n Bool."} {"_id": "q_6332", "text": "Property. Summarize a Striplog with some statistics.\n\n Returns:\n List. A list of (Component, total thickness) tuples."} {"_id": "q_6333", "text": "Private method. Take a sequence of tops in an arbitrary dimension,\n and provide a list of intervals from which a striplog can be made.\n\n This is only intended to be used by ``from_image()``.\n\n Args:\n tops (iterable). A list of floats.\n values (iterable). A list of values to look up.\n basis (iterable). A list of components.\n components (iterable). A list of Components.\n\n Returns:\n List. A list of Intervals."} {"_id": "q_6334", "text": "Private function. Make sure we have what we need to make a striplog."} {"_id": "q_6335", "text": "Private function.
Takes a data dictionary and reconstructs a list\n of Intervals from it.\n\n Args:\n data_dict (dict)\n stop (float): Where to end the last interval.\n points (bool)\n include (dict)\n exclude (dict)\n ignore (list)\n lexicon (Lexicon)\n\n Returns:\n list."} {"_id": "q_6336", "text": "Load from a CSV file or text."} {"_id": "q_6337", "text": "Turn a 1D array into a striplog, given a cutoff.\n\n Args:\n log (array-like): A 1D array or a list of integers.\n cutoff (number or array-like): The log value(s) at which to bin\n the log. Optional.\n components (array-like): A list of components. Use this or\n ``legend``.\n legend (``Legend``): A legend object. Use this or ``components``.\n legend_field ('str'): If you're not trying to match against\n components, then you can match the log values to this field in\n the Decors.\n field (str): The field in the Interval's ``data`` to store the log\n values as.\n right (bool): Which side of the cutoff to send things that are\n equal to, i.e. right on, the cutoff.\n basis (array-like): A depth basis for the log, so striplog knows\n where to put the boundaries.\n source (str): The source of the data. Default 'Log'.\n\n Returns:\n Striplog: The ``striplog`` object."} {"_id": "q_6338", "text": "Turn LAS3 'lithology' section into a Striplog.\n\n Args:\n string (str): A section from an LAS3 file.\n lexicon (Lexicon): The language for conversion to components.\n source (str): A source for the data.\n dlm (str): The delimiter.\n abbreviations (bool): Whether to expand abbreviations.\n\n Returns:\n Striplog: The ``striplog`` object.\n\n Note:\n Handles multiple 'Data' sections. It would be smarter for it\n to handle one at a time, and to deal with parsing the multiple\n sections in the Well object.\n\n Does not read an actual LAS file. 
Use the Well object for that."} {"_id": "q_6339", "text": "Eat a Canstrat DAT file and make a striplog."} {"_id": "q_6340", "text": "Returns a shallow copy."} {"_id": "q_6341", "text": "Returns an LAS 3.0 section string.\n\n Args:\n use_descriptions (bool): Whether to use descriptions instead\n of summaries, if available.\n dlm (str): The delimiter.\n source (str): The source of the data.\n\n Returns:\n str: A string forming the Lithology section of an LAS3 file."} {"_id": "q_6342", "text": "Get data from the striplog."} {"_id": "q_6343", "text": "'Extract' a log into the components of a striplog.\n\n Args:\n log (array_like). A log or other 1D data.\n basis (array_like). The depths or elevations of the log samples.\n name (str). The name of the attribute to store in the components.\n function (function). A function that takes an array as the only\n input, and returns whatever you want to store in the 'name'\n attribute of the primary component.\n Returns:\n None. The function works on the striplog in place."} {"_id": "q_6344", "text": "Look for a regex expression in the descriptions of the striplog.\n If there's no description, it looks in the summaries.\n\n If you pass a Component, then it will search the components, not the\n descriptions or summaries.\n\n Case insensitive.\n\n Args:\n search_term (string or Component): The thing you want to search\n for.
Strings are treated as regular expressions.\n index (bool): Whether to return the index instead of the interval.\n Returns:\n Striplog: A striplog that contains only the 'hit' Intervals.\n However, if ``index`` was ``True``, then that's what you get."} {"_id": "q_6345", "text": "Find overlaps in a striplog.\n\n Args:\n index (bool): If True, returns indices of intervals with\n overlaps after them.\n\n Returns:\n Striplog: A striplog of all the overlaps as intervals."} {"_id": "q_6346", "text": "Finds gaps in a striplog.\n\n Args:\n index (bool): If True, returns indices of intervals with\n gaps after them.\n\n Returns:\n Striplog: A striplog of all the gaps. A sort of anti-striplog."} {"_id": "q_6347", "text": "Remove intervals below a certain limit thickness. In place.\n\n Args:\n limit (float): Anything thinner than this will be pruned.\n n (int): The n thinnest beds will be pruned.\n percentile (float): The thinnest specified percentile will be\n pruned.\n keep_ends (bool): Whether to keep the first and last, regardless\n of whether they meet the pruning criteria."} {"_id": "q_6348", "text": "Fill in empty intervals by growing from top and base.\n\n Note that this operation happens in-place and destroys any information\n about the ``Position`` (e.g. metadata associated with the top or base).\n See GitHub issue #54."} {"_id": "q_6349", "text": "Fill gaps with the component provided.\n\n Example\n t = s.fill(Component({'lithology': 'cheese'}))"} {"_id": "q_6350", "text": "Makes a striplog of all intersections.\n\n Args:\n Striplog. The striplog instance to intersect with.\n\n Returns:\n Striplog. The result of the intersection."} {"_id": "q_6351", "text": "Merges overlaps by merging overlapping Intervals.\n\n The function takes no arguments and returns ``None``.
It operates on\n the striplog 'in place'.\n\n TODO: This function will not work if any interval overlaps more than\n one other interval at either its base or top."} {"_id": "q_6352", "text": "Plots a histogram and returns the data for it.\n\n Args:\n lumping (str): If given, the bins will be lumped based on this\n attribute of the primary components of the intervals\n encountered.\n summary (bool): If True, the summaries of the components are\n returned as the bins. Otherwise, the default behaviour is to\n return the Components themselves.\n sort (bool): If True (default), the histogram is sorted by value,\n starting with the largest.\n plot (bool): If True (default), produce a bar plot.\n legend (Legend): The legend with which to colour the bars.\n ax (axis): An axis object, which will be returned if provided.\n If you don't provide one, it will be created but not returned.\n\n Returns:\n Tuple: A tuple of tuples of entities and counts.\n\n TODO:\n Deal with numeric properties, so I can histogram 'Vp' values, say."} {"_id": "q_6353", "text": "Inverts the striplog, changing its order and the order of its contents.\n\n Operates in place by default.\n\n Args:\n copy (bool): Whether to operate in place or make a copy.\n\n Returns:\n None if operating in-place, or an inverted copy of the striplog\n if not."} {"_id": "q_6354", "text": "Run a series of tests and return the corresponding results.\n\n Based on curve testing for ``welly``.\n\n Args:\n tests (list): a list of functions.\n\n Returns:\n list. The results. Stick to booleans (True = pass) or ints."} {"_id": "q_6355", "text": "Get a log-like stream of RGB values from an image.\n\n Args:\n filename (str): The filename of a PNG image.\n offset (Number): If < 1, interpreted as proportion of way across\n the image.
If > 1, interpreted as pixels from left.\n\n Returns:\n ndarray: A 2d array (a column of RGB triples) at the specified\n offset.\n\n TODO:\n Generalize this to extract 'logs' from images in other ways, such\n as giving the mean of a range of pixel columns, or an array of\n columns. See also a similar routine in pythonanywhere/freqbot."} {"_id": "q_6356", "text": "Return an underscore if the attribute is absent.\n Not all components have the same attributes."} {"_id": "q_6357", "text": "Lists all the jobs registered with Nomad.\n\n https://www.nomadproject.io/docs/http/jobs.html\n arguments:\n - prefix :(str) optional, specifies a string to filter jobs based on a prefix.\n This is specified as a querystring parameter.\n returns: list\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6358", "text": "Parse an HCL Job file. Returns a dict with the JSON formatted job.\n This API endpoint is only supported from Nomad version 0.8.3.\n\n https://www.nomadproject.io/api/jobs.html#parse-job\n\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6359", "text": "Update token.\n\n https://www.nomadproject.io/api/acl-tokens.html\n\n arguments:\n - AccessorID\n - token\n returns: dict\n\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6360", "text": "Lists all the allocations.\n\n https://www.nomadproject.io/docs/http/allocs.html\n arguments:\n - prefix :(str) optional, specifies a string to filter allocations based on a prefix.\n This is specified as a querystring parameter.\n returns: list\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6361", "text": "This endpoint is used to mark a deployment as failed.
This should be done to force the scheduler to stop\n creating allocations as part of the deployment or to cause a rollback to a previous job version.\n\n https://www.nomadproject.io/docs/http/deployments.html\n\n arguments:\n - id\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6362", "text": "Toggle the drain mode of the node.\n When enabled, no further allocations will be\n assigned and existing allocations will be migrated.\n\n https://www.nomadproject.io/docs/http/node.html\n\n arguments:\n - id (str uuid): node id\n - enable (bool): whether to enable node drain\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6363", "text": "This endpoint toggles the drain mode of the node. When draining is enabled,\n no further allocations will be assigned to this node, and existing allocations\n will be migrated to new nodes.\n\n If an empty dictionary is given as drain_spec this will disable/toggle the drain.\n\n https://www.nomadproject.io/docs/http/node.html\n\n arguments:\n - id (str uuid): node id\n - drain_spec (dict): https://www.nomadproject.io/api/nodes.html#drainspec\n - mark_eligible (bool): https://www.nomadproject.io/api/nodes.html#markeligible\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6364", "text": "Toggle the eligibility of the node.\n\n https://www.nomadproject.io/docs/http/node.html\n\n arguments:\n - id (str uuid): node id\n - eligible (bool): Set to True to mark node eligible\n - ineligible (bool): Set to True to mark node ineligible\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6365", "text": "This endpoint streams the contents of a file in an allocation directory.\n\n
https://www.nomadproject.io/api/client.html#stream-file\n\n arguments:\n - id: (str) allocation_id required\n - offset: (int) required\n - origin: (str) either start|end\n - path: (str) optional\n returns: (str) text\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.BadRequestNomadException"} {"_id": "q_6366", "text": "Stat a file in an allocation directory.\n\n https://www.nomadproject.io/docs/http/client-fs-stat.html\n\n arguments:\n - id\n - path\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6367", "text": "Initiate a join between the agent and target peers.\n\n https://www.nomadproject.io/docs/http/agent-join.html\n\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6368", "text": "Lists all the evaluations.\n\n https://www.nomadproject.io/docs/http/evals.html\n arguments:\n - prefix :(str) optional, specifies a string to filter evaluations based on a prefix.\n This is specified as a querystring parameter.\n returns: list\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6369", "text": "Lists all the namespaces registered with Nomad.\n\n https://www.nomadproject.io/docs/enterprise/namespaces/index.html\n arguments:\n - prefix :(str) optional, specifies a string to filter namespaces based on a prefix.\n This is specified as a querystring parameter.\n returns: list\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6370", "text": "Dispatches a new instance of a parameterized job.\n\n https://www.nomadproject.io/docs/http/job.html\n\n arguments:\n - id\n - payload\n - meta\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6371", "text":
"Deregisters a job, and stops all allocations part of it.\n\n https://www.nomadproject.io/docs/http/job.html\n\n arguments:\n - id\n - purge (bool), optionally specifies whether the job should be\n stopped and purged immediately (`purge=True`) or deferred to the\n Nomad garbage collector (`purge=False`).\n\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n - nomad.api.exceptions.InvalidParameters"} {"_id": "q_6372", "text": "Query the status of a client node registered with Nomad.\n\n https://www.nomadproject.io/docs/http/operator.html\n\n returns: dict\n optional arguments:\n - stale, (defaults to False), Specifies if the cluster should respond without an active leader.\n This is specified as a querystring parameter.\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6373", "text": "Remove the Nomad server with the given address from the Raft configuration.\n The return code signifies success or failure.\n\n https://www.nomadproject.io/docs/http/operator.html\n\n arguments:\n - peer_address, The address specifies the server to remove and is given as an IP:port\n optional arguments:\n - stale, (defaults to False), Specifies if the cluster should respond without an active leader.\n This is specified as a querystring parameter.\n returns: Boolean\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6374", "text": "This endpoint lists all deployments.\n\n https://www.nomadproject.io/docs/http/deployments.html\n\n optional_arguments:\n - prefix, (default \"\") Specifies a string to filter deployments based on an index prefix.\n This is specified as a querystring parameter.\n\n returns: list of dicts\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException"} {"_id": "q_6375", "text": "Get a random mutator from a list of
mutators"} {"_id": "q_6376", "text": "Return a polyglot attack containing the original object"} {"_id": "q_6377", "text": "Perform the fuzzing"} {"_id": "q_6378", "text": "Safely return a unicode-encoded string"} {"_id": "q_6379", "text": "Kill the servers"} {"_id": "q_6380", "text": "Serve custom HTML page"} {"_id": "q_6381", "text": "Serve fuzzed JSON object"} {"_id": "q_6382", "text": "Generic fuzz mutator, use a decorator for the given type"} {"_id": "q_6383", "text": "Spawn a new process using subprocess"} {"_id": "q_6384", "text": "Try to get output in a separate thread"} {"_id": "q_6385", "text": "Wait until we get output or until the timeout is over"} {"_id": "q_6386", "text": "Terminate the newly created process"} {"_id": "q_6387", "text": "Parse the command line and start PyJFuzz"} {"_id": "q_6388", "text": "Perform the actual external fuzzing, you may replace this method in order to increase performance"} {"_id": "q_6389", "text": "Build the ``And`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated."} {"_id": "q_6390", "text": "Build the ``Quote`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated."} {"_id": "q_6391", "text": "Build the ``Or`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated."} {"_id": "q_6392", "text": "Build the current ``Opt`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated."} {"_id": "q_6393", "text": "Build the STAR field.\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain
(most minimal) version of the field should be generated."} {"_id": "q_6394", "text": "Shut down the running process and the monitor"} {"_id": "q_6395", "text": "Run command in a loop and check exit status plus restart process when needed"} {"_id": "q_6396", "text": "Fuzz all elements inside the object"} {"_id": "q_6397", "text": "Mutate a generic object based on type"} {"_id": "q_6398", "text": "When we get term signal\n if we are waiting and got a sigterm, we just exit.\n if we have a child running, we pass the signal first to the child\n then we exit.\n\n :param signum:\n :param frame:\n :return:"} {"_id": "q_6399", "text": "\\\n if we have a running child we kill it and set our state to paused\n if we don't have a running child, we set our state to paused\n this will pause all the nodes in single-beat cluster\n\n it's useful when you deploy some code and don't want your child to spawn\n randomly\n\n :param msg:\n :return:"} {"_id": "q_6400", "text": "\\\n sets state to waiting - so we resume spawning children"} {"_id": "q_6401", "text": "\\\n stops the running child process - if it's running\n it will re-spawn in any single-beat node after some time\n\n :param msg:\n :return:"} {"_id": "q_6402", "text": "\\\n restart the subprocess\n i. we set our state to RESTARTING - on restarting we still send heartbeat\n ii. we kill the subprocess\n iii. we start again\n iv. 
if it's started we set our state to RUNNING, else we set it to WAITING\n\n :param msg:\n :return:"} {"_id": "q_6403", "text": "Close the connection to the TwinCAT message router."} {"_id": "q_6404", "text": "Return the local AMS-address and the port number.\n\n :rtype: pyads.structs.AmsAddr\n :return: AMS-address"} {"_id": "q_6405", "text": "Read data synchronously from an ADS-device.\n\n :param int port: local AMS port as returned by adsPortOpenEx()\n :param pyads.structs.AmsAddr address: local or remote AmsAddr\n :param int index_group: PLC storage area, according to the INDEXGROUP\n constants\n :param int index_offset: PLC storage address\n :param Type data_type: type of the data given to the PLC, according to\n PLCTYPE constants\n :param bool return_ctypes: return ctypes instead of python types if True\n (default: False)\n :rtype: data_type\n :return: value: **value**"} {"_id": "q_6406", "text": "Remove a device notification.\n\n :param int port: local AMS port as returned by adsPortOpenEx()\n :param pyads.structs.AmsAddr adr: local or remote AmsAddr\n :param int notification_handle: Notification Handle\n :param int user_handle: User Handle"} {"_id": "q_6407", "text": "Set Timeout.\n\n :param int port: local AMS port as returned by adsPortOpenEx()\n :param int nMs: timeout in ms"} {"_id": "q_6408", "text": "Removes `node` from the hash ring and its replicas."} {"_id": "q_6409", "text": "Return a new Lock object using key ``name`` that mimics\n the behavior of threading.Lock.\n\n If specified, ``timeout`` indicates a maximum life for the lock.\n By default, it will remain locked until release() is called.\n\n ``sleep`` indicates the amount of time to sleep per loop iteration\n when the lock is in blocking mode and another client is currently\n holding the lock."} {"_id": "q_6410", "text": "Retrieve a list of events since the last poll. 
Multiple calls may be needed to retrieve all events.\n\n If no events occur, the API will block for up to 30 seconds, after which an empty list is returned. As soon as\n an event is received in this time, it is returned immediately.\n\n Returns:\n :class:`.SkypeEvent` list: a list of events, possibly empty"} {"_id": "q_6411", "text": "Retrieve various metadata associated with a URL, as seen by Skype.\n\n Args:\n url (str): address to ping for info\n\n Returns:\n dict: metadata for the website queried"} {"_id": "q_6412", "text": "Retrieve all details for a specific contact, including fields such as birthday and mood.\n\n Args:\n id (str): user identifier to lookup\n\n Returns:\n SkypeContact: resulting contact object"} {"_id": "q_6413", "text": "Retrieve a list of all known bots.\n\n Returns:\n SkypeBotUser list: resulting bot user objects"} {"_id": "q_6414", "text": "Retrieve a single bot.\n\n Args:\n id (str): UUID or username of the bot\n\n Returns:\n SkypeBotUser: resulting bot user object"} {"_id": "q_6415", "text": "Search the Skype Directory for a user.\n\n Args:\n query (str): name to search for\n\n Returns:\n SkypeUser list: collection of possible results"} {"_id": "q_6416", "text": "Retrieve any pending contact requests.\n\n Returns:\n :class:`SkypeRequest` list: collection of requests"} {"_id": "q_6417", "text": "Create a new instance based on the raw properties of an API response.\n\n This can be overridden to automatically create subclass instances based on the raw content.\n\n Args:\n skype (Skype): parent Skype instance\n raw (dict): raw object, as provided by the API\n\n Returns:\n SkypeObj: the new class instance"} {"_id": "q_6418", "text": "Copy properties from other into self, skipping ``None`` values. 
Also merges the raw data.\n\n Args:\n other (SkypeObj): second object to copy fields from"} {"_id": "q_6419", "text": "Add a given object to the cache, or update an existing entry to include more fields.\n\n Args:\n obj (SkypeObj): object to add to the cache"} {"_id": "q_6420", "text": "Follow and track sync state URLs provided by an API endpoint, in order to implicitly handle pagination.\n\n In the first call, ``url`` and ``params`` are used as-is. If a ``syncState`` endpoint is provided in the\n response, subsequent calls go to the latest URL instead.\n\n Args:\n method (str): HTTP request method\n url (str): full URL to connect to\n params (dict): query parameters to include in the URL\n kwargs (dict): any extra parameters to pass to :meth:`__call__`"} {"_id": "q_6421", "text": "Store details of the current connection in the named file.\n\n This can be used by :meth:`readToken` to re-authenticate at a later time."} {"_id": "q_6422", "text": "Ensure the authentication token for the given auth method is still valid.\n\n Args:\n auth (Auth): authentication type to check\n\n Raises:\n .SkypeAuthException: if Skype auth is required, and the current token has expired and can't be renewed"} {"_id": "q_6423", "text": "Take the existing Skype token and refresh it, to extend the expiry time without other credentials.\n\n Raises:\n .SkypeAuthException: if the login request is rejected\n .SkypeApiException: if the login form can't be processed"} {"_id": "q_6424", "text": "Acquire a new registration token.\n\n Once successful, all tokens and expiry times are written to the token file (if specified on initialisation)."} {"_id": "q_6425", "text": "Retrieve all current endpoints for the connected user."} {"_id": "q_6426", "text": "Query a username or email address to see if a corresponding Microsoft account exists.\n\n Args:\n user (str): username or email address of an account\n\n Returns:\n bool: whether the account exists"} {"_id": "q_6427", "text": "Request a new 
registration token using a current Skype token.\n\n Args:\n skypeToken (str): existing Skype token\n\n Returns:\n (str, datetime.datetime, str, SkypeEndpoint) tuple: registration token, associated expiry if known,\n resulting endpoint hostname, endpoint if provided\n\n Raises:\n .SkypeAuthException: if the login request is rejected\n .SkypeApiException: if the login form can't be processed"} {"_id": "q_6428", "text": "Configure this endpoint to allow setting presence.\n\n Args:\n name (str): display name for this endpoint"} {"_id": "q_6429", "text": "Send a keep-alive request for the endpoint.\n\n Args:\n timeout (int): maximum amount of time for the endpoint to stay active"} {"_id": "q_6430", "text": "Retrieve a selection of conversations with the most recent activity, and store them in the cache.\n\n Each conversation is only retrieved once, so subsequent calls will retrieve older conversations.\n\n Returns:\n :class:`SkypeChat` list: collection of recent conversations"} {"_id": "q_6431", "text": "Get a single conversation by identifier.\n\n Args:\n id (str): single or group chat identifier"} {"_id": "q_6432", "text": "Create a new group chat with the given users.\n\n The current user is automatically added to the conversation as an admin. 
Any other admin identifiers must also\n be present in the member list.\n\n Args:\n members (str list): user identifiers to initially join the conversation\n admins (str list): user identifiers to gain admin privileges"} {"_id": "q_6433", "text": "Extract the username from a contact URL.\n\n Matches addresses containing ``users/`` or ``users/ME/contacts/``.\n\n Args:\n url (str): Skype API URL\n\n Returns:\n str: extracted identifier"} {"_id": "q_6434", "text": "Extract the conversation ID from a conversation URL.\n\n Matches addresses containing ``conversations/``.\n\n Args:\n url (str): Skype API URL\n\n Returns:\n str: extracted identifier"} {"_id": "q_6435", "text": "Repeatedly call a function, starting with init, until false-y, yielding each item in turn.\n\n The ``transform`` parameter can be used to map a collection to another format, for example iterating over a\n :class:`dict` by value rather than key.\n\n Use with state-synced functions to retrieve all results.\n\n Args:\n fn (method): function to call\n transform (method): secondary function to convert result into an iterable\n args (list): positional arguments to pass to ``fn``\n kwargs (dict): keyword arguments to pass to ``fn``\n\n Returns:\n generator: generator of objects produced from the method"} {"_id": "q_6436", "text": "Return a language-server diagnostic from a line of the Mypy error report;\n optionally, use the whole document to provide more context on it."} {"_id": "q_6437", "text": "Return unicode text, no matter what"} {"_id": "q_6438", "text": "Figure out which handler to use, based on metadata.\n Returns a handler instance or None.\n\n ``text`` should be unicode text about to be parsed.\n\n ``handlers`` is a dictionary where keys are opening delimiters \n and values are handler instances."} {"_id": "q_6439", "text": "Parse text with frontmatter, return metadata and content.\n Pass in optional metadata defaults as keyword args.\n\n If frontmatter is not found, returns an empty metadata 
dictionary\n (or defaults) and original text content.\n\n ::\n\n >>> with open('tests/hello-world.markdown') as f:\n ... metadata, content = frontmatter.parse(f.read())\n >>> print(metadata['title'])\n Hello, world!"} {"_id": "q_6440", "text": "Post as a dict, for serializing"} {"_id": "q_6441", "text": "Parse YAML front matter. This uses yaml.SafeLoader by default."} {"_id": "q_6442", "text": "Export metadata as YAML. This uses yaml.SafeDumper by default."} {"_id": "q_6443", "text": "Establishes a connection to the Lavalink server."} {"_id": "q_6444", "text": "Waits to receive a payload from the Lavalink server and processes it."} {"_id": "q_6445", "text": "Returns the voice channel the player is connected to."} {"_id": "q_6446", "text": "Connects to a voice channel."} {"_id": "q_6447", "text": "Disconnects from the voice channel, if any."} {"_id": "q_6448", "text": "Stores custom user data."} {"_id": "q_6449", "text": "Adds a track to the beginning of the queue"} {"_id": "q_6450", "text": "Adds a track at a specific index in the queue."} {"_id": "q_6451", "text": "Plays the previous track if it exists; if it doesn't, raises a NoPreviousTrack error."} {"_id": "q_6452", "text": "Seeks to a given position in the track."} {"_id": "q_6453", "text": "Makes the player play the next song from the queue if a song has finished or an issue occurred."} {"_id": "q_6454", "text": "Returns a player from the cache, or creates one if it does not exist."} {"_id": "q_6455", "text": "Searches and plays a song from a given query."} {"_id": "q_6456", "text": "Shows the player's queue."} {"_id": "q_6457", "text": "Removes an item from the player's queue with the given index."} {"_id": "q_6458", "text": "A few checks to make sure the bot can join a voice channel."} {"_id": "q_6459", "text": "Dispatches an event to all registered hooks."} {"_id": "q_6460", "text": "Returns a dictionary containing search results for a given query."} {"_id": "q_6461", "text": "Destroys the Lavalink client."} {"_id": 
"q_6462", "text": "Immediately plays a song."} {"_id": "q_6463", "text": "Plays the queue from a specific point. Disregards tracks before the index."} {"_id": "q_6464", "text": "Return the match object for the current list."} {"_id": "q_6465", "text": "Return items as a list of strings.\n\n Don't include sub-items and the start pattern."} {"_id": "q_6466", "text": "Return the Lists inside the item with the given index.\n\n :param i: The index of the item whose sub-lists are desired.\n The performance is likely to be better if `i` is None.\n\n :param pattern: The starting symbol for the desired sub-lists.\n The `pattern` of the current list will be automatically added\n as prefix.\n Although this parameter is optional, specifying it can improve\n the performance."} {"_id": "q_6467", "text": "Convert to another list type by replacing starting pattern."} {"_id": "q_6468", "text": "Parse template content. Create self.name and self.arguments."} {"_id": "q_6469", "text": "Return the lists in all arguments.\n\n For performance reasons it is usually preferred to get a specific\n Argument and use the `lists` method of that argument instead."} {"_id": "q_6470", "text": "Create a Trie out of a list of words and return an atomic regex pattern.\n\n The corresponding Regex should match much faster than a simple Regex union."} {"_id": "q_6471", "text": "Insert the given string before the specified index.\n\n This method has the same effect as ``self[index:index] = string``;\n it only avoids some condition checks as it rules out the possibility\n of the key being a slice, or the need to shrink any of the sub-spans.\n\n If parse is False, don't parse the inserted string."} {"_id": "q_6472", "text": "Partition self.string where `char`'s not in atomic sub-spans."} {"_id": "q_6473", "text": "Return all the sub-spans including self._span."} {"_id": "q_6474", "text": "Update self._type_to_spans according to the removed span.\n\n Warning: If an operation involves both 
_shrink_update and\n _insert_update, you might want to consider doing the\n _insert_update before the _shrink_update as this function\n can cause data loss in self._type_to_spans."} {"_id": "q_6475", "text": "Return the nesting level of self.\n\n The minimum nesting_level is 0. Being part of any Template or\n ParserFunction increases the level by one."} {"_id": "q_6476", "text": "Return a copy of self.string with specific sub-spans replaced.\n\n Comment blocks are replaced by spaces. Other sub-spans are replaced\n by underscores.\n\n The replaced sub-spans are: (\n 'Template', 'WikiLink', 'ParserFunction', 'ExtensionTag',\n 'Comment',\n )\n\n This function is called upon extracting tables or extracting the data\n inside them."} {"_id": "q_6477", "text": "Replace the invalid chars of SPAN_PARSER_TYPES with b'_'.\n\n For comments, all characters are replaced, but for ('Template',\n 'ParserFunction', 'Parameter') only invalid characters are replaced."} {"_id": "q_6478", "text": "Create the arguments for the parse function used in pformat method.\n\n Only return sub-spans and change them to fit the new scope, i.e.\n self.string."} {"_id": "q_6479", "text": "Deprecated, use self.pformat instead."} {"_id": "q_6480", "text": "Return a list of parameter objects."} {"_id": "q_6481", "text": "Return a list of templates as template objects."} {"_id": "q_6482", "text": "Return a list of comment objects."} {"_id": "q_6483", "text": "Return a list of sections in the current wikitext.\n\n The first section will always be the lead section, even if it is an\n empty string."} {"_id": "q_6484", "text": "Return a list of found table objects."} {"_id": "q_6485", "text": "Return a list of WikiList objects.\n\n :param pattern: The starting pattern for list items.\n Return all types of lists (ol, ul, and dl) if pattern is None.\n If pattern is not None, it will be passed to the regex engine,\n remember to escape the `*` character. 
Examples:\n\n - `\\#` means top-level ordered lists\n - `\\#\\*` means unordered lists inside an ordered one\n - Currently definition lists are not well supported, but you\n can use `[:;]` as their pattern.\n\n Tips and tricks:\n\n Be careful when using the following patterns as they will\n probably cause malfunction in the `sublists` method of the\n resultant List. (However don't worry about them if you are\n not going to use the `sublists` method.)\n\n - Use `\\*+` as a pattern and nested unordered lists will be\n treated as flat.\n - Use `\\*\\s*` as pattern to rstrip `items` of the list.\n\n Although the pattern parameter is optional, specifying it\n can improve the performance."} {"_id": "q_6486", "text": "Yield all the sub-span indices excluding self._span."} {"_id": "q_6487", "text": "Return the parent node of the current object.\n\n :param type_: the type of the desired parent object.\n Currently the following types are supported: {Template,\n ParserFunction, WikiLink, Comment, Parameter, ExtensionTag}.\n The default is None and means the first parent, of any type above.\n :return: parent WikiText object or None if no parent with the desired\n `type_` is found."} {"_id": "q_6488", "text": "Return normal form of self.name.\n\n - Remove comments.\n - Remove language code.\n - Remove namespace (\"template:\" or any of `localized_namespaces`).\n - Use space instead of underscore.\n - Remove consecutive spaces.\n - Use uppercase for the first letter if `capitalize`.\n - Remove #anchor.\n\n :param rm_namespaces: is used to provide additional localized\n namespaces for the template namespace. They will be removed from\n the result. Default is ('Template',).\n :param capitalize: If True, convert the first letter of the\n template's name to a capital letter. See\n [[mw:Manual:$wgCapitalLinks]] for more info.\n :param code: is the language code.\n :param capital_links: deprecated.\n :param _code: deprecated.\n\n Example:\n >>> Template(\n ... 
'{{ eN : tEmPlAtE : t_1 # b | a }}'\n ... ).normal_name(code='en')\n 'T 1'"} {"_id": "q_6489", "text": "Eliminate duplicate arguments by removing the first occurrences.\n\n Remove the first occurrences of duplicate arguments, regardless of\n their value. Result of the rendered wikitext should remain the same.\n Warning: Some meaningful data may be removed from wikitext.\n\n Also see `rm_dup_args_safe` function."} {"_id": "q_6490", "text": "Remove duplicate arguments in a safe manner.\n\n Remove the duplicate arguments only in the following situations:\n 1. Both arguments have the same name AND value. (Remove one of\n them.)\n 2. Arguments have the same name and one of them is empty. (Remove\n the empty one.)\n\n Warning: Although this is considered to be safe and no meaningful data\n is removed from wikitext, the result of the rendered wikitext\n may actually change if the second arg is empty and removed but\n the first had had a value.\n\n If `tag` is defined, it should be a string that will be appended to\n the value of the remaining duplicate arguments.\n\n Also see `rm_first_of_dup_args` function."} {"_id": "q_6491", "text": "Set the value for `name` argument. Add it if it doesn't exist.\n\n - Use `positional`, `before` and `after` keyword arguments only when\n adding a new argument.\n - If `before` is given, ignore `after`.\n - If neither `before` nor `after` are given and it's needed to add a\n new argument, then append the new argument to the end.\n - If `positional` is True, try to add the given value as a positional\n argument. Ignore `preserve_spacing` if positional is True.\n If it's None, do what seems more appropriate."} {"_id": "q_6492", "text": "Delete all arguments with the given name."} {"_id": "q_6493", "text": "Add suggestion terms to the AutoCompleter engine. 
Each suggestion has a score and string.\n\n If kwargs['increment'] is true and the terms are already in the server's dictionary, we increment their scores"} {"_id": "q_6494", "text": "Get a list of suggestions from the AutoCompleter, for a given prefix\n\n ### Parameters:\n - **prefix**: the prefix we are searching. **Must be valid ascii or utf-8**\n - **fuzzy**: If set to true, the prefix search is done in fuzzy mode. \n **NOTE**: Running fuzzy searches on short (<3 letters) prefixes can be very slow, and even scan the entire index.\n - **with_scores**: if set to true, we also return the (refactored) score of each suggestion. \n This is normally not needed, and is NOT the original score inserted into the index\n - **with_payloads**: Return suggestion payloads\n - **num**: The maximum number of results we return. Note that we might return less. The algorithm trims irrelevant suggestions.\n \n Returns a list of Suggestion objects. If with_scores was False, the score of all suggestions is 1."} {"_id": "q_6495", "text": "Create the search index. The index must not already exist.\n\n ### Parameters:\n\n - **fields**: a list of TextField or NumericField objects\n - **no_term_offsets**: If true, we will not save term offsets in the index\n - **no_field_flags**: If true, we will not save field flags that allow searching in specific fields\n - **stopwords**: If not None, we create the index with this custom stopword list. The list can be empty"} {"_id": "q_6496", "text": "Internal add_document used for both batch and single doc indexing"} {"_id": "q_6497", "text": "Add a single document to the index.\n\n ### Parameters\n\n - **doc_id**: the id of the saved document.\n - **nosave**: if set to true, we just index the document, and don't save a copy of it. 
This means that searches will just return ids.\n - **score**: the document ranking, between 0.0 and 1.0 \n - **payload**: optional inner-index payload we can save for fast access in scoring functions\n - **replace**: if True, and the document already is in the index, we perform an update and reindex the document\n - **partial**: if True, the fields specified will be added to the existing document.\n This has the added benefit that any fields specified with `no_index`\n will not be reindexed again. Implies `replace`\n - **language**: Specify the language used for document tokenization.\n - **fields** kwargs dictionary of the document fields to be saved and/or indexed. \n NOTE: Geo points should be encoded as strings of \"lon,lat\""} {"_id": "q_6498", "text": "Delete a document from the index\n Returns 1 if the document was deleted, 0 if not"} {"_id": "q_6499", "text": "Load a single document by id"} {"_id": "q_6500", "text": "Get info and stats about the current index, including the number of documents, memory consumption, etc"} {"_id": "q_6501", "text": "Search the index for a given query, and return a result of documents\n\n ### Parameters\n\n - **query**: the search query. Either a text for simple queries with default parameters, or a Query object for complex queries.\n See RediSearch's documentation on query format\n - **snippet_sizes**: A dictionary of {field: snippet_size} used to trim and format the result. e.g. {'body': 500}"} {"_id": "q_6502", "text": "Issue an aggregation query\n\n ### Parameters\n\n **query**: This can be either an `AggregateRequest`, or a `Cursor`\n\n An `AggregateResult` object is returned. You can access the rows from its\n `rows` property, which will always yield the rows of the result"} {"_id": "q_6503", "text": "Set the alias for this reducer.\n\n ### Parameters\n\n - **alias**: The value of the alias for this reducer. 
If this is the\n special value `aggregation.FIELDNAME` then this reducer will be\n aliased using the same name as the field upon which it operates.\n Note that using `FIELDNAME` is only possible on reducers which\n operate on a single field value.\n\n This method returns the `Reducer` object making it suitable for\n chaining."} {"_id": "q_6504", "text": "Specify by which fields to group the aggregation.\n\n ### Parameters\n\n - **fields**: Fields to group by. This can either be a single string,\n or a list of strings. In both cases, the field should be specified as\n `@field`.\n - **reducers**: One or more reducers. Reducers may be found in the\n `aggregation` module."} {"_id": "q_6505", "text": "Sets the limit for the most recent group or query.\n\n If no group has been defined yet (via `group_by()`) then this sets\n the limit for the initial pool of results from the query. Otherwise,\n this limits the number of items operated on from the previous group.\n\n Setting a limit on the initial search results may be useful when\n attempting to execute an aggregation on a sample of a large data set.\n\n ### Parameters\n\n - **offset**: Result offset from which to begin paging\n - **num**: Number of results to return\n\n\n Example of sorting the initial results:\n\n ```\n AggregateRequest('@sale_amount:[10000, inf]')\\\n .limit(0, 10)\\\n .group_by('@state', r.count())\n ```\n\n Will only group by the states found in the first 10 results of the\n query `@sale_amount:[10000, inf]`. 
On the other hand,\n\n ```\n AggregateRequest('@sale_amount:[10000, inf]')\\\n .limit(0, 1000)\\\n .group_by('@state', r.count())\\\n .limit(0, 10)\n ```\n\n Will group all the results matching the query, but only return the\n first 10 groups.\n\n If you only wish to return a *top-N* style query, consider using\n `sort_by()` instead."} {"_id": "q_6506", "text": "Add a sortby field to the query\n\n - **field** - the name of the field to sort by\n - **asc** - when `True`, sorting will be done in ascending order"} {"_id": "q_6507", "text": "Indicate that value is a numeric range"} {"_id": "q_6508", "text": "Bypass transformations.\n\n Parameters\n ----------\n jam : pyjams.JAMS\n A muda-enabled JAMS object\n\n Yields\n ------\n jam_out : pyjams.JAMS iterator\n The first result is `jam` (unmodified), by reference\n All subsequent results are generated by `transformer`"} {"_id": "q_6509", "text": "Transpose a chord label by some number of semitones\n\n Parameters\n ----------\n label : str\n A chord string\n\n n_semitones : float\n The number of semitones to move `label`\n\n Returns\n -------\n label_transpose : str\n The transposed chord label"} {"_id": "q_6510", "text": "Pack data into a jams sandbox.\n\n If not already present, this creates a `muda` field within `jam.sandbox`,\n along with `history`, `state`, and version arrays which are populated by\n deformation objects.\n\n Any additional fields can be added to the `muda` sandbox by supplying\n keyword arguments.\n\n Parameters\n ----------\n jam : jams.JAMS\n A JAMS object\n\n Returns\n -------\n jam : jams.JAMS\n The updated JAMS object\n\n Examples\n --------\n >>> jam = jams.JAMS()\n >>> muda.jam_pack(jam, my_data=dict(foo=5, bar=None))\n >>> jam.sandbox\n \n >>> jam.sandbox.muda\n \n >>> jam.sandbox.muda.my_data\n {'foo': 5, 'bar': None}"} {"_id": "q_6511", "text": "Save a muda jam to disk\n\n Parameters\n ----------\n filename_audio: str\n The path to store the audio file\n\n filename_jam: str\n The path to 
store the jams object\n\n strict: bool\n Strict safety checking for jams output\n\n fmt : str\n Output format parameter for `jams.JAMS.save`\n\n kwargs\n Additional parameters to `soundfile.write`"} {"_id": "q_6512", "text": "Reconstruct a transformation or pipeline given a parameter dump."} {"_id": "q_6513", "text": "Serialize a transformation object or pipeline.\n\n Parameters\n ----------\n transform : BaseTransform or Pipeline\n The transformation object to be serialized\n\n kwargs\n Additional keyword arguments to `jsonpickle.encode()`\n\n Returns\n -------\n json_str : str\n A JSON encoding of the transformation\n\n See Also\n --------\n deserialize\n\n Examples\n --------\n >>> D = muda.deformers.TimeStretch(rate=1.5)\n >>> muda.serialize(D)\n '{\"params\": {\"rate\": 1.5},\n \"__class__\": {\"py/type\": \"muda.deformers.time.TimeStretch\"}}'"} {"_id": "q_6514", "text": "Construct a muda transformation from a JSON encoded string.\n\n Parameters\n ----------\n encoded : str\n JSON encoding of the transformation or pipeline\n\n kwargs\n Additional keyword arguments to `jsonpickle.decode()`\n\n Returns\n -------\n obj\n The transformation\n\n See Also\n --------\n serialize\n\n Examples\n --------\n >>> D = muda.deformers.TimeStretch(rate=1.5)\n >>> D_serial = muda.serialize(D)\n >>> D2 = muda.deserialize(D_serial)\n >>> D2\n TimeStretch(rate=1.5)"} {"_id": "q_6515", "text": "Pretty print the dictionary 'params'\n\n Parameters\n ----------\n params: dict\n The dictionary to pretty print\n\n offset: int\n The offset in characters to add at the beginning of each line.\n\n printer:\n The function to convert entries to strings, typically\n the builtin str or repr"} {"_id": "q_6516", "text": "Apply the transformation to audio and annotations.\n\n The input jam is copied and modified, and returned\n contained in a list.\n\n Parameters\n ----------\n jam : jams.JAMS\n A single jam object to modify\n\n Returns\n -------\n jam_list : list\n A length-1 list containing `jam` 
after transformation\n\n See also\n --------\n core.load_jam_audio"} {"_id": "q_6517", "text": "Iterative transformation generator\n\n Applies the deformation to an input jams object.\n\n This generates a sequence of deformed output JAMS.\n\n Parameters\n ----------\n jam : jams.JAMS\n The jam to transform\n\n Examples\n --------\n >>> for jam_out in deformer.transform(jam_in):\n ... process(jam_out)"} {"_id": "q_6518", "text": "A recursive transformation pipeline"} {"_id": "q_6519", "text": "Calculate the indices at which to sample a fragment of audio from a file.\n\n Parameters\n ----------\n filename : str\n Path to the input file\n\n n_samples : int > 0\n The number of samples to load\n\n sr : int > 0\n The target sampling rate\n\n Returns\n -------\n start : int\n The sample index from `filename` at which the audio fragment starts\n stop : int\n The sample index from `filename` at which the audio fragment stops (e.g. y = audio[start:stop])"} {"_id": "q_6520", "text": "Slice a fragment of audio from a file.\n\n This uses pysoundfile to efficiently seek without\n loading the entire stream.\n\n Parameters\n ----------\n filename : str\n Path to the input file\n\n start : int\n The sample index of `filename` at which the audio fragment should start\n\n stop : int\n The sample index of `filename` at which the audio fragment should stop (e.g. 
y = audio[start:stop])\n\n n_samples : int > 0\n The number of samples to load\n\n sr : int > 0\n The target sampling rate\n\n mono : bool\n Ensure monophonic audio\n\n Returns\n -------\n y : np.ndarray [shape=(n_samples,)]\n A fragment of audio sampled from `filename`\n\n Raises\n ------\n ValueError\n If the source file is shorter than the requested length"} {"_id": "q_6521", "text": "Normalize `path`.\n\n All remote paths are absolute."} {"_id": "q_6522", "text": "Returns either the md5 or sha256 hash of a file at `file_path`.\n \n md5 is the default hash_type as it is faster than sha256\n\n The default block size is 64 kb, which appears to be one of a few common\n choices according to https://stackoverflow.com/a/44873382/2680. The code\n below is an extension of the example presented in that post."} {"_id": "q_6523", "text": "Iterate over all storages for this project."} {"_id": "q_6524", "text": "Store a new file at `path` in this storage.\n\n The contents of the file descriptor `fp` (opened in 'rb' mode)\n will be uploaded to `path` which is the full path at\n which to store the file.\n\n To force overwrite of an existing file, set `force=True`.\n To overwrite an existing file only if the files differ, set `update=True`"} {"_id": "q_6525", "text": "Copy data from file-like object fsrc to file-like object fdst\n\n This is like shutil.copyfileobj but with a progressbar."} {"_id": "q_6526", "text": "Write contents of this file to a local file.\n\n Pass in a filepointer `fp` that has been opened for writing in\n binary mode."} {"_id": "q_6527", "text": "Remove this file from the remote storage."} {"_id": "q_6528", "text": "Update the remote file from a local file.\n\n Pass in a filepointer `fp` that has been opened for reading in\n binary mode."} {"_id": "q_6529", "text": "Iterate over all children of `kind`\n\n Yield an instance of `klass` when a child is of type `kind`. 
Uses\n `recurse` as the path of attributes in the JSON returned from `url`\n to find more children."} {"_id": "q_6530", "text": "Initialize or edit an existing .osfcli.config file."} {"_id": "q_6531", "text": "Login user for protected API calls."} {"_id": "q_6532", "text": "Fetch project `project_id`."} {"_id": "q_6533", "text": "Extract JSON from response if `status_code` matches."} {"_id": "q_6534", "text": "Follow the 'next' link on paginated results."} {"_id": "q_6535", "text": "Lookup crscode on spatialreference.org and return in specified format.\n\n Arguments:\n\n - *codetype*: \"epsg\", \"esri\", or \"sr-org\".\n - *code*: The code.\n - *format*: The crs format of the returned string. One of \"ogcwkt\", \"esriwkt\", or \"proj4\", but also several others...\n\n Returns:\n\n - Crs string in the specified format."} {"_id": "q_6536", "text": "Returns the crs object from a string interpreted as a specified format, located at a given url site.\n\n Arguments:\n\n - *url*: The url where the crs string is to be read from. \n - *format* (optional): Which format to parse the crs string as. 
One of \"ogc wkt\", \"esri wkt\", or \"proj4\".\n If None, tries to autodetect the format for you (default).\n\n Returns:\n\n - CRS object."} {"_id": "q_6537", "text": "Returns the crs object from a file, with the format determined from the filename extension.\n\n Arguments:\n\n - *filepath*: filepath to be loaded, including extension."} {"_id": "q_6538", "text": "Load crs object from epsg code, via spatialreference.org.\n Parses based on the proj4 representation.\n\n Arguments:\n\n - *code*: The EPSG code as an integer.\n\n Returns:\n\n - A CS instance of the indicated type."} {"_id": "q_6539", "text": "Load crs object from sr-org code, via spatialreference.org.\n Parses based on the proj4 representation.\n\n Arguments:\n\n - *code*: The SR-ORG code as an integer.\n\n Returns:\n\n - A CS instance of the indicated type."} {"_id": "q_6540", "text": "Write the raw header content to the out stream\n\n Parameters:\n ----------\n out : {file object}\n The output stream"} {"_id": "q_6541", "text": "Instantiate a RawVLR by reading the content from the\n data stream\n\n Parameters:\n ----------\n data_stream : {file object}\n The input stream\n Returns\n -------\n RawVLR\n The RawVLR read"} {"_id": "q_6542", "text": "Parses the GeoTiff VLRs information into nicer structs"} {"_id": "q_6543", "text": "Returns the signedness for the given type index\n\n Parameters\n ----------\n type_index: int\n index of the type as defined in the LAS Specification\n\n Returns\n -------\n DimensionSignedness,\n the enum variant"} {"_id": "q_6544", "text": "Construct a new PackedPointRecord from an existing one with the ability to change\n the point format while doing so"} {"_id": "q_6545", "text": "Tries to copy the values of the current dimensions from other_record"} {"_id": "q_6546", "text": "Appends zeros to the points stored if the value we are trying to\n fit is bigger"} {"_id": "q_6547", "text": "Returns all the dimension names, including the names of sub_fields\n and their 
corresponding packed fields"} {"_id": "q_6548", "text": "Creates a new point record with all dimensions initialized to zero\n\n Parameters\n ----------\n point_format_id: int\n The point format id the point record should have\n point_count : int\n The number of points the point record should have\n\n Returns\n -------\n PackedPointRecord"} {"_id": "q_6549", "text": "Construct the point record by reading and decompressing the points data from\n the input buffer"} {"_id": "q_6550", "text": "Returns the scaled z positions of the points as doubles"} {"_id": "q_6551", "text": "Adds a new extra dimension to the point record\n\n Parameters\n ----------\n name: str\n the name of the dimension\n type: str\n type of the dimension (eg 'uint8')\n description: str, optional\n a small description of the dimension"} {"_id": "q_6552", "text": "writes the data to a stream\n\n Parameters\n ----------\n out_stream: file object\n the destination stream, implementing the write method\n do_compress: bool, optional, default False\n Flag to indicate if you want the data to be compressed"} {"_id": "q_6553", "text": "Writes the las data into a file\n\n Parameters\n ----------\n filename : str\n The file where the data should be written.\n do_compress: bool, optional, default None\n if None the extension of the filename will be used\n to determine if the data should be compressed\n otherwise the do_compress flag indicates if the data should be compressed"} {"_id": "q_6554", "text": "Writes to a stream or file\n\n When destination is a string, it will be interpreted as the path where the file should be written to,\n also if do_compress is None, the compression will be guessed from the file extension:\n\n - .laz -> compressed\n - .las -> uncompressed\n\n .. 
note::\n\n This means that you could do something like:\n # Create .laz but not compressed\n\n las.write('out.laz', do_compress=False)\n\n # Create .las but compressed\n\n las.write('out.las', do_compress=True)\n\n While it should not confuse Las/Laz readers, it will confuse humans so avoid doing it\n\n\n Parameters\n ----------\n destination: str or file object\n filename or stream to write to\n do_compress: bool, optional\n Flag to indicate if you want to compress the data"} {"_id": "q_6555", "text": "Builds the dict mapping point format id to numpy.dtype\n In the dtypes, bit fields are still packed, and need to be unpacked each time\n you want to access them"} {"_id": "q_6556", "text": "Tries to find a matching point format id for the input numpy dtype\n To match, the input dtype has to be 100% equal to a point format dtype\n so all names & dimensions types must match\n\n Parameters:\n ----------\n dtype : numpy.dtype\n The input dtype\n unpacked : bool, optional\n [description] (the default is False, which [default_description])\n\n Raises\n ------\n errors.IncompatibleDataFormat\n If no compatible point format was found\n\n Returns\n -------\n int\n The compatible point format found"} {"_id": "q_6557", "text": "Returns the minimum file version that supports the given point_format_id"} {"_id": "q_6558", "text": "Returns the list of vlrs of the requested type\n Always returns a list even if there is only one VLR of type vlr_type.\n\n >>> import pylas\n >>> las = pylas.read(\"pylastests/extrabytes.las\")\n >>> las.vlrs\n []\n >>> las.vlrs.get(\"WktCoordinateSystemVlr\")\n []\n >>> las.vlrs.get(\"WktCoordinateSystemVlr\")[0]\n Traceback (most recent call last):\n IndexError: list index out of range\n >>> las.vlrs.get('ExtraBytesVlr')\n []\n >>> las.vlrs.get('ExtraBytesVlr')[0]\n \n\n\n Parameters\n ----------\n vlr_type: str\n the class name of the vlr\n\n Returns\n -------\n :py:class:`list`\n a List of vlrs matching the user_id and records_ids"} {"_id": 
"q_6559", "text": "Returns the list of vlrs of the requested type\n The difference with get is that the returned vlrs will be removed from the list\n\n Parameters\n ----------\n vlr_type: str\n the class name of the vlr\n\n Returns\n -------\n list\n a List of vlrs matching the user_id and records_ids"} {"_id": "q_6560", "text": "Returns true if all the files have the same points format id"} {"_id": "q_6561", "text": "Returns true if all the files have the same numpy datatype"} {"_id": "q_6562", "text": "Reads the first 4 bytes of the stream to check that it is LASF"} {"_id": "q_6563", "text": "Reads and returns the vlrs of the file"} {"_id": "q_6564", "text": "reads the compressed point record"} {"_id": "q_6565", "text": "Reads the EVLRs of the file, will fail if the file version\n does not support evlrs"} {"_id": "q_6566", "text": "Helper function to warn about unknown bytes found in the file"} {"_id": "q_6567", "text": "Opens and reads the header of the las content in the source\n\n >>> with open_las('pylastests/simple.las') as f:\n ... print(f.header.point_format_id)\n 3\n\n\n >>> f = open('pylastests/simple.las', mode='rb')\n >>> with open_las(f, closefd=False) as flas:\n ... print(flas.header)\n \n >>> f.closed\n False\n\n >>> f = open('pylastests/simple.las', mode='rb')\n >>> with open_las(f) as flas:\n ... las = flas.read()\n >>> f.closed\n True\n\n Parameters\n ----------\n source : str or io.BytesIO\n if source is a str it must be a filename,\n otherwise a stream (a file object with the methods read, seek, tell)\n\n closefd: bool\n Whether the stream/file object shall be closed, this only works\n when using open_las in a with statement. 
An exception is raised if\n closefd is specified and the source is a filename\n\n\n Returns\n -------\n pylas.lasreader.LasReader"} {"_id": "q_6568", "text": "Entry point for reading las data in pylas\n\n Reads the whole file into memory.\n\n >>> las = read_las(\"pylastests/simple.las\")\n >>> las.classification\n array([1, 1, 1, ..., 1, 1, 1], dtype=uint8)\n\n Parameters\n ----------\n source : str or io.BytesIO\n The source to read data from\n\n closefd: bool\n if True and the source is a stream, the function will close it\n after it is done reading\n\n\n Returns\n -------\n pylas.lasdatas.base.LasBase\n The object you can interact with to get access to the LAS points & VLRs"} {"_id": "q_6569", "text": "Creates a File from an existing header,\n allocating the array of points according to the provided header.\n The input header is copied.\n\n\n Parameters\n ----------\n header : existing header to be used to create the file\n\n Returns\n -------\n pylas.lasdatas.base.LasBase"} {"_id": "q_6570", "text": "Merges multiple las files into one\n\n merged = merge_las(las_1, las_2)\n merged = merge_las([las_1, las_2, las_3])\n\n Parameters\n ----------\n las_files: Iterable of LasData or LasData\n\n Returns\n -------\n pylas.lasdatas.base.LasBase\n The result of the merging"} {"_id": "q_6571", "text": "writes the given las into memory using BytesIO and \n reads it again, returning the newly read file.\n\n Mostly used for testing purposes, without having to write to disk"} {"_id": "q_6572", "text": "Returns the creation date stored in the las file\n\n Returns\n -------\n datetime.date"} {"_id": "q_6573", "text": "Sets the minimum values of x, y, z as a numpy array"} {"_id": "q_6574", "text": "Sets the maximum values of x, y, z as a numpy array"} {"_id": "q_6575", "text": "Returns the scaling values of x, y, z as a numpy array"} {"_id": "q_6576", "text": "Returns the offset values of x, y, z as a numpy array"} {"_id": "q_6577", "text": "seeks to the position of the las 
version header fields\n in the stream and returns it as a str\n\n Parameters\n ----------\n stream : io.BytesIO\n\n Returns\n -------\n str\n file version read from the stream"} {"_id": "q_6578", "text": "Converts a header to another version\n\n Parameters\n ----------\n old_header: the old header instance\n new_version: float or str\n\n Returns\n -------\n The converted header\n\n\n >>> old_header = HeaderFactory.new(1.2)\n >>> HeaderFactory.convert_header(old_header, 1.4)\n \n\n >>> old_header = HeaderFactory.new('1.4')\n >>> HeaderFactory.convert_header(old_header, '1.2')\n "} {"_id": "q_6579", "text": "Packs a sub field's array into another array using a mask\n\n Parameters:\n ----------\n array : numpy.ndarray\n The array into which the sub field array will be packed\n array_in : numpy.ndarray\n sub field array to pack\n mask : mask (ie: 0b00001111)\n Mask of the sub field\n inplace : {bool}, optional\n If True the input array is modified in place; if False (the default)\n a new array is returned\n\n Raises\n ------\n OverflowError\n If the values contained in the sub field array are greater than its mask's number of bits\n allows"} {"_id": "q_6580", "text": "Returns a dict of the sub fields for this point format\n\n Returns\n -------\n Dict[str, Tuple[str, SubField]]\n maps a sub field name to its composed dimension with additional information"} {"_id": "q_6581", "text": "Returns the number of extra bytes"} {"_id": "q_6582", "text": "Returns True if the point format has waveform packet dimensions"} {"_id": "q_6583", "text": "Function to calculate checksum as per Satel manual."} {"_id": "q_6584", "text": "Verify checksum and strip header and footer of received frame."} {"_id": "q_6585", "text": "Add header, checksum and footer to command data."} {"_id": "q_6586", "text": "Start monitoring for interesting events."} {"_id": "q_6587", "text": "Send command to disarm."} {"_id": "q_6588", "text": "Send command to clear the alarm."} {"_id": "q_6589", "text": "Send 
output turn on command to the alarm."} {"_id": "q_6590", "text": "A workaround for Satel Integra disconnecting after 25s.\n\n Every interval it sends some random question to the device, ignoring\n answer - just to keep connection alive."} {"_id": "q_6591", "text": "Stop monitoring and close connection."} {"_id": "q_6592", "text": "Wrapper function for using SPI device drivers on systems like the\n Raspberry Pi and BeagleBone. This allows using any of the SPI drivers\n from a single entry point instead of importing the driver for a specific\n LED type.\n\n Provides the same parameters of\n :py:class:`bibliopixel.drivers.SPI.SPIBase` as\n well as those below:\n\n :param ledtype: One of: LPD8806, WS2801, WS281X, or APA102"} {"_id": "q_6593", "text": "Defer an edit to run on the EditQueue.\n\n :param callable f: The function to be called\n :param tuple args: Positional arguments to the function\n :param tuple kwds: Keyword arguments to the function\n :throws queue.Full: if the queue is full"} {"_id": "q_6594", "text": "Get all the edits in the queue, then execute them.\n\n The algorithm gets all edits, and then executes all of them. It does\n *not* pull off one edit, execute, repeat until the queue is empty, and\n that means that the queue might not be empty at the end of\n ``run_edits``, because new edits might have entered the queue\n while the previous edits are being executed.\n\n This has the advantage that if edits enter the queue faster than they\n can be processed, ``get_and_run_edits`` won't go into an infinite loop,\n but rather the queue will grow unboundedly, which can be\n detected, mitigated and reported on - or if Queue.maxsize is\n set, ``bp`` will report a fairly clear error and just dump the edits\n on the ground."} {"_id": "q_6595", "text": "Returns details of either the first or specified device\n\n :param int id: Identifier of desired device. 
If not given, first device\n found will be returned\n\n :returns tuple: Device ID, Device Address, Firmware Version"} {"_id": "q_6596", "text": "SHOULD BE PRIVATE METHOD"} {"_id": "q_6597", "text": "Set device ID to new value.\n\n :param str dev: Serial device address/path\n :param id: Device ID to set"} {"_id": "q_6598", "text": "Return a named Palette, or None if no such name exists.\n\n If ``name`` is omitted, the default value is used."} {"_id": "q_6599", "text": "Draw a circle in an RGB color, with center x0, y0 and radius r."} {"_id": "q_6600", "text": "Draw a filled circle in an RGB color, with center x0, y0 and radius r."} {"_id": "q_6601", "text": "Draw a line between x0, y0 and x1, y1 in an RGB color.\n\n :param colorFunc: a function that takes an integer from x0 to x1 and\n returns a color corresponding to that point\n :param aa: if True, use Bresenham's algorithm for line drawing;\n otherwise use Xiaolin Wu's algorithm"} {"_id": "q_6602", "text": "Draw line from point x0, y0 to x1, y1 using Bresenham's algorithm.\n\n Will draw beyond matrix bounds."} {"_id": "q_6603", "text": "Draw filled triangle with points x0,y0 - x1,y1 - x2,y2\n\n :param aa: if True, use Bresenham's algorithm for line drawing;\n otherwise use Xiaolin Wu's algorithm"} {"_id": "q_6604", "text": "Set the base project for routing."} {"_id": "q_6605", "text": "Set pixel to RGB color tuple"} {"_id": "q_6606", "text": "Get RGB color tuple of color at index pixel"} {"_id": "q_6607", "text": "Scale RGB tuple by level, 0 - 256"} {"_id": "q_6608", "text": "Save the description as a YML file. Prompt if no file given."} {"_id": "q_6609", "text": "Run a function, catch, report and discard exceptions"} {"_id": "q_6610", "text": "Receive a message from the input source and perhaps raise an Exception."} {"_id": "q_6611", "text": "APA102 & SK9822 support on-chip brightness control, allowing greater\n color depth.\n\n APA102 superimposes a 440Hz PWM on the 19kHz base PWM to control\n brightness. 
SK9822 uses a base 4.7kHz PWM but controls brightness with a\n variable current source.\n\n Because of this SK9822 will have much less flicker at lower levels.\n Either way, this option is better and faster than scaling in\n BiblioPixel."} {"_id": "q_6612", "text": "Return an independent copy of this layout with a completely separate\n color_list and no drivers."} {"_id": "q_6613", "text": "Set the internal colors starting at an optional offset.\n\n If `color_list` is a list or other 1-dimensional array, it is reshaped\n into an N x 3 list.\n\n If `color_list` is too long it is truncated; if it is too short then only\n the initial colors are set."} {"_id": "q_6614", "text": "Fill the entire strip with HSV color tuple"} {"_id": "q_6615", "text": "Decorator for RestServer methods that take a single address"} {"_id": "q_6616", "text": "Decorator for RestServer methods that take multiple addresses"} {"_id": "q_6617", "text": "Advance a list of unique, ordered elements in-place, lexicographically\n increasing or backward, by rightmost or leftmost digit.\n\n Returns False if the permutation wrapped around - i.e. went from\n lexicographically greatest to least, and True in all other cases.\n\n If the length of the list is N, then this function will repeat values after\n N! 
steps, and will return False exactly once.\n\n See also https://stackoverflow.com/a/34325140/43839"} {"_id": "q_6618", "text": "For each row or column in cuts, read a list of its colors,\n apply the function to that list of colors, then write it back\n to the layout."} {"_id": "q_6619", "text": "Compose a sequence of events into one event.\n\n Arguments:\n events: a sequence of objects looking like threading.Event\n condition: a function taking a sequence of bools and returning a bool."} {"_id": "q_6620", "text": "Draws a filled circle at point x0,y0 with radius r and specified color"} {"_id": "q_6621", "text": "Draw rectangle with top-left corner at x,y, width w, height h,\n and corner radius r."} {"_id": "q_6622", "text": "Draw solid rectangle with top-left corner at x,y, width w, height h,\n and corner radius r"} {"_id": "q_6623", "text": "Draw triangle with points x0,y0 - x1,y1 - x2,y2"} {"_id": "q_6624", "text": "Use with caution!\n\n Directly set the pixel buffers.\n\n :param colors: A list of color tuples\n :param int pos: Position in color list to begin set operation."} {"_id": "q_6625", "text": "Return a list of Segments that evenly split the strip."} {"_id": "q_6626", "text": "Return a new segment starting right after self in the same buffer."} {"_id": "q_6627", "text": "Stop the builder if it's running."} {"_id": "q_6628", "text": "Open an instance of simpixel in the browser"} {"_id": "q_6629", "text": "Depth first recursion through a dictionary containing type constructors\n\n The arguments pre, post and children are independently either:\n\n * None, which means to do nothing\n * a string, which means to use the static class method of that name on the\n class being constructed, or\n * a callable, to be called at each recursion\n\n Arguments:\n\n dictionary -- a project dictionary or one of its subdictionaries\n pre -- called before children are visited in the recursion\n post -- called after children are visited in the recursion\n python_path -- 
relative path to start resolving typenames"} {"_id": "q_6630", "text": "Tries to convert a value to a type constructor.\n\n If value is a string, then it is used as the \"typename\" field.\n\n If the \"typename\" field exists, the symbol for that name is imported and\n added to the type constructor as a field \"datatype\".\n\n Throws:\n ImportError -- if \"typename\" is set but cannot be imported\n ValueError -- if \"typename\" is malformed"} {"_id": "q_6631", "text": "Fill a portion of a strip from start to stop by step with a given item.\n If stop is not given, it defaults to the length of the strip."} {"_id": "q_6632", "text": "Older animations in BPA and other areas use all sorts of different names for\n what we are now representing with palettes.\n\n This function mutates a kwds dictionary to remove these legacy fields and\n extract a palette from it, which it returns."} {"_id": "q_6633", "text": "Write a series of frames as a single animated GIF.\n\n :param str filename: the name of the GIF file to write\n\n :param list frames: a list of filenames, each of which represents a single\n frame of the animation. Each frame must have exactly the same\n dimensions, and the code has only been tested with .gif files.\n\n :param float fps:\n The number of frames per second.\n\n :param int loop:\n The number of iterations. Default 0 (meaning loop indefinitely).\n\n :param int palette:\n The number of colors to quantize the image to. Is rounded to\n the nearest power of two. 
Default 256."} {"_id": "q_6634", "text": "Loads not only JSON files but also YAML files ending in .yml.\n\n :param file: a filename or file handle to read from\n :returns: the data loaded from the JSON or YAML file\n :rtype: dict"} {"_id": "q_6635", "text": "Order colors by hue, saturation and value, in that order.\n\n Returns -1 if a < b, 0 if a == b and 1 if a > b."} {"_id": "q_6636", "text": "Update sections in a Project description"} {"_id": "q_6637", "text": "Construct an animation, set the runner, and add in the two\n \"reserved fields\" `name` and `data`."} {"_id": "q_6638", "text": "Return an image in the given mode."} {"_id": "q_6639", "text": "Given an animated GIF, return a list with a colorlist for each frame."} {"_id": "q_6640", "text": "Parse a string representing a time interval or duration into seconds,\n or raise an exception\n\n :param str s: a string representation of a time interval\n :raises ValueError: if ``s`` can't be interpreted as a duration"} {"_id": "q_6641", "text": "Stop the Runner if it's running.\n Called as a classmethod, stop the running instance if any."} {"_id": "q_6642", "text": "Display an image on a matrix."} {"_id": "q_6643", "text": "Every other column is indexed in reverse."} {"_id": "q_6644", "text": "Return a Palette but don't take into account Palette Names."} {"_id": "q_6645", "text": "Helper method to generate X,Y coordinate maps for strips"} {"_id": "q_6646", "text": "Make an object from a symbol."} {"_id": "q_6647", "text": "For the duration of this context manager, put the PID for this process into\n `pid_filename`, and then remove the file at the end."} {"_id": "q_6648", "text": "Return an integer index or None"} {"_id": "q_6649", "text": "Returns a generator with the elements \"data\" taken by offset, restricted\n by self.begin and self.end, and padded on either end by `pad` to get\n back to the original length of `data`"} {"_id": "q_6650", "text": "Cleans up all sorts of special cases that humans want when 
entering\n an animation from a yaml file.\n\n 1. Loading it from a file\n 2. Using just a typename instead of a dict\n 3. A single dict representing an animation, with a run: section.\n 4. (Legacy) Having a dict with parallel elements run: and animation:\n 5. (Legacy) A tuple or list: (animation, run )"} {"_id": "q_6651", "text": "Give each animation a unique, mutable layout so they can run\n independently."} {"_id": "q_6652", "text": "If a project has a Curses driver, the section \"main\" in the section\n \"run\" must be \"bibliopixel.drivers.curses.Curses.main\"."} {"_id": "q_6653", "text": "Merge zero or more dictionaries representing projects with the default\n project dictionary and return the result"} {"_id": "q_6654", "text": "Guess the type of a file.\n\n If allow_directory is False, don't consider the possibility that the\n file is a directory."} {"_id": "q_6655", "text": "Get a notebook from the database."} {"_id": "q_6656", "text": "Apply _notebook_model_from_db or _file_model_from_db to each entry\n in file_records, depending on the result of `guess_type`."} {"_id": "q_6657", "text": "Build a directory model from database directory record."} {"_id": "q_6658", "text": "Save a notebook.\n\n Returns a validation message."} {"_id": "q_6659", "text": "Save a non-notebook file."} {"_id": "q_6660", "text": "Rename object from old_path to path.\n\n NOTE: This method is unfortunately named on the base class. 
It\n actually moves a file or a directory."} {"_id": "q_6661", "text": "Delete object corresponding to path."} {"_id": "q_6662", "text": "Add a new user if they don't already exist."} {"_id": "q_6663", "text": "Delete a user and all of their resources."} {"_id": "q_6664", "text": "Create a directory."} {"_id": "q_6665", "text": "Return a WHERE clause that matches entries in a directory.\n\n Parameterized on table because this clause is re-used between files and\n directories."} {"_id": "q_6666", "text": "Delete a directory."} {"_id": "q_6667", "text": "Internal implementation of dir_exists.\n\n Expects a db-style path name."} {"_id": "q_6668", "text": "Return files in a directory."} {"_id": "q_6669", "text": "Return subdirectories of a directory."} {"_id": "q_6670", "text": "Return a SELECT statement that returns the latest N versions of a file."} {"_id": "q_6671", "text": "Default fields returned by a file query."} {"_id": "q_6672", "text": "Get file data for the given user_id and path.\n\n Include content only if include_content=True."} {"_id": "q_6673", "text": "Get the value in the 'id' column for the file with the given\n user_id and path."} {"_id": "q_6674", "text": "Check if a file exists."} {"_id": "q_6675", "text": "Rename a directory."} {"_id": "q_6676", "text": "Save a file.\n\n TODO: Update-then-insert is probably cheaper than insert-then-update."} {"_id": "q_6677", "text": "Create a generator of decrypted files.\n\n Files are yielded in ascending order of their timestamp.\n\n This function selects all current notebooks (optionally, falling within a\n datetime range), decrypts them, and returns a generator yielding dicts,\n each containing a decoded notebook and metadata including the user,\n filepath, and timestamp.\n\n Parameters\n ----------\n engine : SQLAlchemy.engine\n Engine encapsulating database connections.\n crypto_factory : function[str -> Any]\n A function from user_id to an object providing the interface required\n by 
PostgresContentsManager.crypto. Results of this will be used for\n decryption of the selected notebooks.\n min_dt : datetime.datetime, optional\n Minimum last modified datetime at which a file will be included.\n max_dt : datetime.datetime, optional\n Last modified datetime at and after which a file will be excluded.\n logger : Logger, optional"} {"_id": "q_6678", "text": "Delete all database records for the given user_id."} {"_id": "q_6679", "text": "Re-encrypt a row from ``table`` with ``id`` of ``row_id``."} {"_id": "q_6680", "text": "Convert a secret key and a user ID into an encryption key to use with a\n ``cryptography.fernet.Fernet``.\n\n Taken from\n https://cryptography.io/en/latest/fernet/#using-passwords-with-fernet\n\n Parameters\n ----------\n password : unicode\n ascii-encodable key to derive\n user_id : unicode\n ascii-encodable user_id to use as salt"} {"_id": "q_6681", "text": "Derive a list of per-user Fernet keys from a list of master keys and a\n username.\n\n If a None is encountered in ``passwords``, it is forwarded.\n\n Parameters\n ----------\n passwords : list[unicode]\n List of ascii-encodable keys to derive.\n user_id : unicode or None\n ascii-encodable user_id to use as salt"} {"_id": "q_6682", "text": "Create and return a function suitable for passing as a crypto_factory to\n ``pgcontents.utils.sync.reencrypt_all_users``\n\n The factory here returns a ``FernetEncryption`` that uses a key derived\n from ``password`` and salted with the supplied user_id."} {"_id": "q_6683", "text": "Decorator memoizing a single-argument function"} {"_id": "q_6684", "text": "Get the name from a column-like SQLAlchemy expression.\n\n Works for Columns and Cast expressions."} {"_id": "q_6685", "text": "Convert a SQLAlchemy row that does not contain a 'content' field to a dict.\n\n If row is None, return None.\n\n Raises AssertionError if there is a field named 'content' in ``fields``."} {"_id": "q_6686", "text": "Create a checkpoint of the current state of a 
notebook\n\n Returns a checkpoint_id for the new checkpoint."} {"_id": "q_6687", "text": "Create a checkpoint of the current state of a file\n\n Returns a checkpoint_id for the new checkpoint."} {"_id": "q_6688", "text": "delete a checkpoint for a file"} {"_id": "q_6689", "text": "Get the content of a checkpoint."} {"_id": "q_6690", "text": "Return a list of checkpoints for a given file"} {"_id": "q_6691", "text": "Rename all checkpoints for old_path to new_path."} {"_id": "q_6692", "text": "Delete all checkpoints for the given path."} {"_id": "q_6693", "text": "Resolve a path based on a dictionary of manager prefixes.\n\n Returns a triple of (prefix, manager, manager_relative_path)."} {"_id": "q_6694", "text": "Prefix all path entries in model with the given prefix."} {"_id": "q_6695", "text": "Decorator for methods that accept path as a first argument."} {"_id": "q_6696", "text": "Parameterized decorator for methods that accept path as a second\n argument."} {"_id": "q_6697", "text": "Strip slashes from directories before updating."} {"_id": "q_6698", "text": "Resolve paths with '..' 
to normalized paths, raising an error if the final\n result is outside root."} {"_id": "q_6699", "text": "Decode base64 data of unknown format.\n\n Attempts to interpret data as utf-8, falling back to ascii on failure."} {"_id": "q_6700", "text": "Decode base64 content for a file.\n\n format:\n If 'text', the contents will be decoded as UTF-8.\n If 'base64', do nothing.\n If not specified, try to decode as UTF-8, and fall back to base64\n\n Returns a triple of decoded_content, format, and mimetype."} {"_id": "q_6701", "text": "Return an iterable of all prefix directories of path, descending from root."} {"_id": "q_6702", "text": "Create a user."} {"_id": "q_6703", "text": "Split an iterable of models into a list of file paths and a list of\n directory paths."} {"_id": "q_6704", "text": "Recursive helper for walk."} {"_id": "q_6705", "text": "Iterate over all files visible to ``mgr``."} {"_id": "q_6706", "text": "Iterate over the contents of all files visible to ``mgr``."} {"_id": "q_6707", "text": "Re-encrypt data for all users.\n\n This function is idempotent, meaning that it should be possible to apply\n the same re-encryption process multiple times without having any effect on\n the database. Idempotency is achieved by first attempting to decrypt with\n the old crypto and falling back to the new crypto on failure.\n\n An important consequence of this strategy is that **decrypting** a database\n is not supported with this function, because ``NoEncryption.decrypt``\n always succeeds. 
To decrypt an already-encrypted database, use\n ``unencrypt_all_users`` instead.\n\n It is, however, possible to perform an initial encryption of a database by\n passing a function returning a ``NoEncryption`` as ``old_crypto_factory``.\n\n Parameters\n ----------\n engine : SQLAlchemy.engine\n Engine encapsulating database connections.\n old_crypto_factory : function[str -> Any]\n A function from user_id to an object providing the interface required\n by PostgresContentsManager.crypto. Results of this will be used for\n decryption of existing database content.\n new_crypto_factory : function[str -> Any]\n A function from user_id to an object providing the interface required\n by PostgresContentsManager.crypto. Results of this will be used for\n re-encryption of database content.\n\n This **must not** return instances of ``NoEncryption``. Use\n ``unencrypt_all_users`` if you want to unencrypt a database.\n logger : logging.Logger, optional\n A logger to use during re-encryption.\n\n See Also\n --------\n reencrypt_user\n unencrypt_all_users"} {"_id": "q_6708", "text": "Re-encrypt all files and checkpoints for a single user."} {"_id": "q_6709", "text": "Unencrypt all files and checkpoints for a single user."} {"_id": "q_6710", "text": "Upgrade the given database to revision."} {"_id": "q_6711", "text": "Sanitizes the data for the given block.\n If block has a matching embed serializer, use the `to_internal_value` method."} {"_id": "q_6712", "text": "Queue an instance to be fetched from the database."} {"_id": "q_6713", "text": "Insert a fetched instance into embed block."} {"_id": "q_6714", "text": "Load data in bulk for each embed block."} {"_id": "q_6715", "text": "Perform validation of the widget data"} {"_id": "q_6716", "text": "Render HTML entry point for manager app."} {"_id": "q_6717", "text": "Excludes fields that are included in the query parameters"} {"_id": "q_6718", "text": "Get the latest article with the given primary key."} {"_id": "q_6719", "text": 
"Optionally restricts the returned articles by filtering against a `topic`\n query parameter in the URL."} {"_id": "q_6720", "text": "Only display unpublished content to authenticated users, and filter by\n query parameter if present."} {"_id": "q_6721", "text": "Overrides the default get_attribute method to convert None values to False."} {"_id": "q_6722", "text": "Checks that the given widget contains the required fields"} {"_id": "q_6723", "text": "Return True if id is a valid UUID, False otherwise."} {"_id": "q_6724", "text": "Raise a ValidationError if data does not match the author format."} {"_id": "q_6725", "text": "Save widget data for this zone."} {"_id": "q_6726", "text": "Renders the widget as HTML."} {"_id": "q_6727", "text": "Retrieves the settings for this integration as a dictionary.\n\n Removes all hidden fields if show_hidden=False"} {"_id": "q_6728", "text": "Receive OAuth callback request from Facebook."} {"_id": "q_6729", "text": "Updates settings for given integration."} {"_id": "q_6730", "text": "Handles requests to the user signup page."} {"_id": "q_6731", "text": "Renders the contents of the zone with given zone_id."} {"_id": "q_6732", "text": "Handles saving the featured image.\n\n If data is None, the featured image will be removed.\n\n `data` should be a dictionary with the following format:\n {\n 'image_id': int,\n 'caption': str,\n 'credit': str\n }"} {"_id": "q_6733", "text": "Save the subsection to the parent article"} {"_id": "q_6734", "text": "Returns the file extension."} {"_id": "q_6735", "text": "Custom save method to process thumbnails and save image dimensions."} {"_id": "q_6736", "text": "Attempts to connect to the MySQL server.\n\n :return: Bound MySQL connection object if successful or ``None`` if\n unsuccessful."} {"_id": "q_6737", "text": "Copy the instance and make sure not to use a reference"} {"_id": "q_6738", "text": "Returns a generator for individual account transactions. The\n latest operation will be first.
This call can be used in a\n ``for`` loop.\n\n :param int first: sequence number of the first\n transaction to return (*optional*)\n :param int last: sequence number of the last\n transaction to return (*optional*)\n :param int limit: limit number of transactions to\n return (*optional*)\n :param array only_ops: Limit generator by these\n operations (*optional*)\n :param array exclude_ops: Exclude these operations from\n generator (*optional*).\n\n .. note::\n only_ops and exclude_ops take an array of strings:\n The full list of operation IDs can be found in\n operationids.py.\n Example: ['transfer', 'fill_order']"} {"_id": "q_6739", "text": "Upgrade account to lifetime member"} {"_id": "q_6740", "text": "Add another account to the whitelist of this account"} {"_id": "q_6741", "text": "Remove another account from any list of this account"} {"_id": "q_6742", "text": "Use to derive a number that allows one to easily recover the\n public key from the signature"} {"_id": "q_6743", "text": "Returns a datetime of the block with the given block\n number.\n\n :param int block_num: Block number"} {"_id": "q_6744", "text": "Returns the timestamp of the block with the given block\n number.\n\n :param int block_num: Block number"} {"_id": "q_6745", "text": "Yields account names between start and stop.\n\n :param str start: Start at this account name\n :param str stop: Stop at this account name\n :param int steps: Obtain ``steps`` results with a single call from RPC"} {"_id": "q_6746", "text": "Refresh the data from the API server"} {"_id": "q_6747", "text": "Is the store unlocked so that I can decrypt the content?"} {"_id": "q_6748", "text": "The password is used to encrypt this masterpassword.
To\n decrypt the keys stored in the keys database, one must use\n BIP38, decrypt the masterpassword from the configuration\n store with the user password, and use the decrypted\n masterpassword to decrypt the BIP38 encrypted private keys\n from the keys storage!\n\n :param str password: Password to use for en-/de-cryption"} {"_id": "q_6749", "text": "Derive the checksum\n\n :param str s: Random string for which to derive the checksum"} {"_id": "q_6750", "text": "Change the password that allows one to decrypt the master key"} {"_id": "q_6751", "text": "Decrypt the content according to BIP38\n\n :param str wif: Encrypted key"} {"_id": "q_6752", "text": "Encrypt the content according to BIP38\n\n :param str wif: Unencrypted key"} {"_id": "q_6753", "text": "Derive private key from the brain key and the current sequence\n number"} {"_id": "q_6754", "text": "Derive y point from x point"} {"_id": "q_6755", "text": "Return the point for the public key"} {"_id": "q_6756", "text": "Derive new public key from this key and a sha256 \"offset\""} {"_id": "q_6757", "text": "Derive uncompressed public key"} {"_id": "q_6758", "text": "Derive new private key from this private key and an arbitrary\n sequence number"} {"_id": "q_6759", "text": "Derive new private key from this key and a sha256 \"offset\""} {"_id": "q_6760", "text": "Claim a balance from the genesis block\n\n :param str balance_id: The identifier that identifies the balance\n to claim (1.15.x)\n :param str account: (optional) the account that owns the bet\n (defaults to ``default_account``)"} {"_id": "q_6761", "text": "This method will initialize ``SharedInstance.instance`` and return it.\n The purpose of this method is to offer a single default\n instance that can be reused by multiple classes."} {"_id": "q_6762", "text": "This allows setting a config that will be used when calling\n ``shared_blockchain_instance`` and allows defining the configuration\n without having to actually create an instance"} {"_id":
"q_6763", "text": "Find the next url in the list"} {"_id": "q_6764", "text": "reset the failed connection counters"} {"_id": "q_6765", "text": "Is the key `key` available?"} {"_id": "q_6766", "text": "returns all items of the store as tuples"} {"_id": "q_6767", "text": "Return the key if it exists or a default value\n\n :param str value: Value\n :param str default: Default value if key not present"} {"_id": "q_6768", "text": "Delete a key from the store\n\n :param str value: Value"} {"_id": "q_6769", "text": "Check if the database table exists"} {"_id": "q_6770", "text": "Create the new table in the SQLite database"} {"_id": "q_6771", "text": "Returns an instance of base \"Operations\" for further processing"} {"_id": "q_6772", "text": "Try to obtain the wif key from the wallet by telling which account\n and permission are supposed to sign the transaction"} {"_id": "q_6773", "text": "Add a wif that should be used for signing of the transaction."} {"_id": "q_6774", "text": "Auxiliary method to obtain the required fees for a set of\n operations.
Requires a websocket connection to a witness node!"} {"_id": "q_6775", "text": "Verify the authority of the signed transaction"} {"_id": "q_6776", "text": "Broadcast a transaction to the blockchain network\n\n :param tx tx: Signed transaction to broadcast"} {"_id": "q_6777", "text": "Clear the transaction builder and start from scratch"} {"_id": "q_6778", "text": "Returns the price instance so that the base asset is ``base``.\n\n Note: This makes a copy of the object!"} {"_id": "q_6779", "text": "Returns the price instance so that the quote asset is ``quote``.\n\n Note: This makes a copy of the object!"} {"_id": "q_6780", "text": "This method obtains the required private keys if present in\n the wallet, finalizes the transaction, signs it and\n broadcasts it\n\n :param operation ops: The operation (or list of operations) to\n broadcast\n :param operation account: The account that authorizes the\n operation\n :param string permission: The required permission for\n signing (active, owner, posting)\n :param object append_to: This allows you to provide an instance of\n ProposalsBuilder (see :func:`new_proposal`) or\n TransactionBuilder (see :func:`new_tx()`) to specify\n where to put a specific operation.\n\n .. note:: ``append_to`` is exposed to every method used in\n this class\n\n .. note::\n\n If ``ops`` is a list of operations, they all need to be\n signable by the same key! Thus, you cannot combine ops\n that require active permission with ops that require\n posting permission. Neither can you use different\n accounts for different operations!\n\n ..
note:: This uses ``txbuffer`` as an instance of\n :class:`transactionbuilder.TransactionBuilder`.\n You may want to use your own txbuffer"} {"_id": "q_6781", "text": "Broadcast a transaction to the Blockchain\n\n :param tx tx: Signed transaction to broadcast"} {"_id": "q_6782", "text": "Let's obtain a new txbuffer\n\n :returns int txid: id of the new txbuffer"} {"_id": "q_6783", "text": "The transaction id of this transaction"} {"_id": "q_6784", "text": "Sign the transaction with the provided private keys.\n\n :param array wifkeys: Array of wif keys\n :param str chain: identifier for the chain"} {"_id": "q_6785", "text": "Unlock the wallet database"} {"_id": "q_6786", "text": "Create a new wallet database"} {"_id": "q_6787", "text": "Add a private key to the wallet database"} {"_id": "q_6788", "text": "Remove all keys associated with a given account"} {"_id": "q_6789", "text": "Obtain owner Memo Key for an account from the wallet database"} {"_id": "q_6790", "text": "Obtain owner Active Key for an account from the wallet database"} {"_id": "q_6791", "text": "Obtain the first account name from public key"} {"_id": "q_6792", "text": "Get key type"} {"_id": "q_6793", "text": "Return all accounts installed in the wallet database"} {"_id": "q_6794", "text": "Encrypt a memo\n\n :param str message: clear text memo message\n :returns: encrypted message\n :rtype: str"} {"_id": "q_6795", "text": "Decrypt a message\n\n :param dict message: encrypted memo message\n :returns: decrypted message\n :rtype: str"} {"_id": "q_6796", "text": "Derive the shared secret between ``priv`` and ``pub``\n\n :param `Base58` priv: Private Key\n :param `Base58` pub: Public Key\n :return: Shared secret\n :rtype: hex\n\n The shared secret is generated such that::\n\n Pub(Alice) * Priv(Bob) = Pub(Bob) * Priv(Alice)"} {"_id": "q_6797", "text": "Initialize AES instance\n\n :param hex shared_secret: Shared Secret to use as encryption key\n :param int nonce: Random nonce\n :return: AES instance\n :rtype:
AES"} {"_id": "q_6798", "text": "Encode a message with a shared secret between Alice and Bob\n\n :param PrivateKey priv: Private Key (of Alice)\n :param PublicKey pub: Public Key (of Bob)\n :param int nonce: Random nonce\n :param str message: Memo message\n :return: Encrypted message\n :rtype: hex"} {"_id": "q_6799", "text": "Decode a message with a shared secret between Alice and Bob\n\n :param PrivateKey priv: Private Key (of Bob)\n :param PublicKey pub: Public Key (of Alice)\n :param int nonce: Nonce used for Encryption\n :param bytes message: Encrypted Memo message\n :return: Decrypted message\n :rtype: str\n :raise ValueError: if message cannot be decoded as valid UTF-8\n string"} {"_id": "q_6800", "text": "Send IPMI 'command' via ipmitool"} {"_id": "q_6801", "text": "Find the given 'pattern' in 'content'"} {"_id": "q_6802", "text": "Cat file and return content"} {"_id": "q_6803", "text": "Get chunk meta of NVMe device"} {"_id": "q_6804", "text": "Get sizeof DescriptorTable"} {"_id": "q_6805", "text": "Verify LNVM variables and construct exported variables"} {"_id": "q_6806", "text": "Compare two Buffer items"} {"_id": "q_6807", "text": "Copy stream to buffer"} {"_id": "q_6808", "text": "Write buffer to file"} {"_id": "q_6809", "text": "Read file to buffer"} {"_id": "q_6810", "text": "230v power on"} {"_id": "q_6811", "text": "Get chunk information"} {"_id": "q_6812", "text": "Verify BLOCK variables and construct exported variables"} {"_id": "q_6813", "text": "Execute a script or testcase"} {"_id": "q_6814", "text": "Setup test-hooks\n @returns dict of hook filepaths {\"enter\": [], \"exit\": []}"} {"_id": "q_6815", "text": "Dump the given trun to file"} {"_id": "q_6816", "text": "Print essential info on"} {"_id": "q_6817", "text": "Create and initialize a testcase"} {"_id": "q_6818", "text": "Triggers when exiting the given testsuite"} {"_id": "q_6819", "text": "Creates and initializes a TESTSUITE struct and side-effects such as creating\n output directories
and forwarding initialization of testcases"} {"_id": "q_6820", "text": "setup res_root and aux_root, log info and run tcase-enter-hooks\n\n @returns 0 when all hooks succeed, some value otherwise"} {"_id": "q_6821", "text": "Triggers when exiting the given testrun"} {"_id": "q_6822", "text": "Triggers when entering the given testrun"} {"_id": "q_6823", "text": "Setup the testrunner data-structure, embedding the parsed environment\n variables and command-line arguments and continues with setup for testplans,\n testsuites, and testcases"} {"_id": "q_6824", "text": "CIJ Test Runner main entry point"} {"_id": "q_6825", "text": "Get chunk meta table"} {"_id": "q_6826", "text": "Generic address to device address"} {"_id": "q_6827", "text": "Start DMESG job in thread"} {"_id": "q_6828", "text": "Terminate DMESG job"} {"_id": "q_6829", "text": "generate rater pic"} {"_id": "q_6830", "text": "round the data"} {"_id": "q_6831", "text": "Verify PCI variables and construct exported variables"} {"_id": "q_6832", "text": "Print, emphasized 'good', the given 'txt' message"} {"_id": "q_6833", "text": "Define the list of 'exported' variables with 'prefix' with values from 'env'"} {"_id": "q_6834", "text": "Get-log-page chunk information\n\n If the pugrp and punit is set, then provide report only for that pugrp/punit\n\n @returns the first chunk in the given state if one exists, None otherwise"} {"_id": "q_6835", "text": "Get a chunk-descriptor for the first chunk in the given state.\n\n If the pugrp and punit is set, then search only that pugrp/punit\n\n @returns the first chunk in the given state if one exists, None otherwise"} {"_id": "q_6836", "text": "Kill all FIO processes"} {"_id": "q_6837", "text": "Get parameter of FIO"} {"_id": "q_6838", "text": "Run FIO job in thread"} {"_id": "q_6839", "text": "Run FIO job"} {"_id": "q_6840", "text": "Parse descriptions from the given tcase"} {"_id": "q_6841", "text": "Returns content of the given 'fpath' with HTML annotations for
syntax\n highlighting"} {"_id": "q_6842", "text": "Perform postprocessing of the given test run"} {"_id": "q_6843", "text": "Replace all absolute paths to \"re-home\" them"} {"_id": "q_6844", "text": "Main entry point"} {"_id": "q_6845", "text": "Wait until target is connected"} {"_id": "q_6846", "text": "Factory method for the assertion builder with value to be tested and optional description."} {"_id": "q_6847", "text": "Asserts that val is equal to other."} {"_id": "q_6848", "text": "Asserts that val is not equal to other."} {"_id": "q_6849", "text": "Asserts that the val is not identical to other, via 'is' compare."} {"_id": "q_6850", "text": "Asserts that val is of the given type."} {"_id": "q_6851", "text": "Asserts that val is the given length."} {"_id": "q_6852", "text": "Asserts that val does not contain the given item or items."} {"_id": "q_6853", "text": "Asserts that val is iterable and does not contain any duplicate items."} {"_id": "q_6854", "text": "Asserts that val is empty."} {"_id": "q_6855", "text": "Asserts that val is not empty."} {"_id": "q_6856", "text": "Asserts that val is numeric and is less than other."} {"_id": "q_6857", "text": "Asserts that val is numeric and is between low and high."} {"_id": "q_6858", "text": "Asserts that val is numeric and is close to other within tolerance."} {"_id": "q_6859", "text": "Asserts that val is case-insensitive equal to other."} {"_id": "q_6860", "text": "Asserts that val is string or iterable and ends with suffix."} {"_id": "q_6861", "text": "Asserts that val is string and matches regex pattern."} {"_id": "q_6862", "text": "Asserts that val is non-empty string and all characters are alphabetic."} {"_id": "q_6863", "text": "Asserts that val is non-empty string and all characters are digits."} {"_id": "q_6864", "text": "Asserts that val is non-empty string and all characters are lowercase."} {"_id": "q_6865", "text": "Asserts that val is non-empty string and all characters are uppercase."} {"_id": "q_6866",
"text": "Asserts that val is a unicode string."} {"_id": "q_6867", "text": "Asserts that val is iterable and a subset of the given superset or flattened superset if multiple supersets are given."} {"_id": "q_6868", "text": "Asserts that val is a dict and contains the given value or values."} {"_id": "q_6869", "text": "Asserts that val is a dict and contains the given entry or entries."} {"_id": "q_6870", "text": "Asserts that val is a date and is before other date."} {"_id": "q_6871", "text": "Asserts that val is a path and that it exists."} {"_id": "q_6872", "text": "Asserts that val is an existing path to a file."} {"_id": "q_6873", "text": "Asserts that val is an existing path to a directory."} {"_id": "q_6874", "text": "Asserts that val is an existing path to a file and that file is named filename."} {"_id": "q_6875", "text": "Asserts that val is an existing path to a file and that file is a child of parent."} {"_id": "q_6876", "text": "Asserts that val is callable and that when called raises the given error."} {"_id": "q_6877", "text": "Asserts that the val callable when invoked with the given args and kwargs raises the expected exception."} {"_id": "q_6878", "text": "Helper to convert the given args and kwargs into a string."} {"_id": "q_6879", "text": "Generate CSV file for training and testing data\n\n Input\n =====\n best_path: str, path to BEST folder which contains unzipped subfolders\n 'article', 'encyclopedia', 'news', 'novel'\n\n cleaned_data: str, path to output folder, the cleaned data will be saved\n in the given folder name where training set will be stored in `train` folder\n and testing set will be stored in `test` folder\n\n create_val: boolean, True or False, if True, divide training set into training set and\n validation set in `val` folder"} {"_id": "q_6880", "text": "Transform processed path into feature matrix and output array\n\n Input\n =====\n best_processed_path: str, path to processed BEST dataset\n\n option: str, 'train' or 'test'"}
{"_id": "q_6881", "text": "Given path to processed BEST dataset,\n train CNN model for word beginnings along with\n character label encoder and character type label encoder\n\n Input\n =====\n best_processed_path: str, path to processed BEST dataset\n weight_path: str, path to weight path file\n verbose: int, verbose option for training Keras model\n\n Output\n ======\n model: keras model, keras model for tokenize prediction"} {"_id": "q_6882", "text": "Tokenize given Thai text string\n\n Input\n =====\n text: str, Thai text string\n custom_dict: str (or list), path to customized dictionary file\n It allows the function not to tokenize given dictionary wrongly.\n The file should contain custom words separated by line.\n Alternatively, you can provide list of custom words too.\n\n Output\n ======\n tokens: list, list of tokenized words\n\n Example\n =======\n >> deepcut.tokenize('\u0e15\u0e31\u0e14\u0e04\u0e33\u0e44\u0e14\u0e49\u0e14\u0e35\u0e21\u0e32\u0e01')\n >> ['\u0e15\u0e31\u0e14\u0e04\u0e33','\u0e44\u0e14\u0e49','\u0e14\u0e35','\u0e21\u0e32\u0e01']"} {"_id": "q_6883", "text": "Create feature array of character and surrounding characters"} {"_id": "q_6884", "text": "Given input dataframe, create feature dataframe of shifted characters"} {"_id": "q_6885", "text": "Wraps a fileobj in a bandwidth limited stream wrapper\n\n :type fileobj: file-like obj\n :param fileobj: The file-like obj to wrap\n\n :type transfer_coordinator: s3transfer.futures.TransferCoordinator\n :param transfer_coordinator: The coordinator for the general transfer\n that the wrapped stream is a part of\n\n :type enabled: boolean\n :param enabled: Whether bandwidth limiting should be enabled to start"} {"_id": "q_6886", "text": "Read a specified amount\n\n Reads will only be throttled if bandwidth limiting is enabled."} {"_id": "q_6887", "text": "Consume a requested amount\n\n :type amt: int\n :param amt: The amount of bytes to request to consume\n\n :type request_token: RequestToken\n
:param request_token: The token associated to the consumption\n request that is used to identify the request. So if a\n RequestExceededException is raised the token should be used\n in subsequent retry consume() requests.\n\n :raises RequestExceededException: If the consumption amount would\n exceed the maximum allocated bandwidth\n\n :rtype: int\n :returns: The amount consumed"} {"_id": "q_6888", "text": "Schedules a wait time to be able to consume an amount\n\n :type amt: int\n :param amt: The amount of bytes scheduled to be consumed\n\n :type token: RequestToken\n :param token: The token associated to the consumption\n request that is used to identify the request.\n\n :type time_to_consume: float\n :param time_to_consume: The desired time it should take for that\n specific request amount to be consumed regardless of previously\n scheduled consumption requests\n\n :rtype: float\n :returns: The amount of time to wait for the specific request before\n actually consuming the specified amount."} {"_id": "q_6889", "text": "Get the projected rate using a provided amount and time\n\n :type amt: int\n :param amt: The proposed amount to consume\n\n :type time_at_consumption: float\n :param time_at_consumption: The proposed time to consume at\n\n :rtype: float\n :returns: The consumption rate if that amt and time were consumed"} {"_id": "q_6890", "text": "Record the consumption rate based off amount and time point\n\n :type amt: int\n :param amt: The amount that got consumed\n\n :type time_at_consumption: float\n :param time_at_consumption: The time at which the amount was consumed"} {"_id": "q_6891", "text": "Downloads the object's contents to a file\n\n :type bucket: str\n :param bucket: The name of the bucket to download from\n\n :type key: str\n :param key: The name of the key to download from\n\n :type filename: str\n :param filename: The name of a file to download to.\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n client
operation\n\n :type expected_size: int\n :param expected_size: The expected size in bytes of the download. If\n provided, the downloader will not call HeadObject to determine the\n object's size and use the provided value instead. The size is\n needed to determine whether to do a multipart download.\n\n :rtype: s3transfer.futures.TransferFuture\n :returns: Transfer future representing the download"} {"_id": "q_6892", "text": "Poll for the result of a transfer\n\n :param transfer_id: Unique identifier for the transfer\n :return: If the transfer succeeded, it will return the result. If the\n transfer failed, it will raise the exception associated to the\n failure."} {"_id": "q_6893", "text": "Decrement the count by one"} {"_id": "q_6894", "text": "Finalize the counter\n\n Once finalized, the counter can never be incremented and the callback\n can be invoked once the count reaches zero"} {"_id": "q_6895", "text": "Checks to see if a file is a special UNIX file.\n\n It checks if the file is a character special device, block special\n device, FIFO, or socket.\n\n :param filename: Name of the file\n\n :returns: True if the file is a special file. False, if it is not."} {"_id": "q_6896", "text": "Get a chunksize close to current that fits within all S3 limits.\n\n :type current_chunksize: int\n :param current_chunksize: The currently configured chunksize.\n\n :type file_size: int or None\n :param file_size: The size of the file to upload.
This might be None\n if the object being transferred has an unknown size.\n\n :returns: A valid chunksize that fits within configured limits."} {"_id": "q_6897", "text": "Queue IO write for submission to the IO executor.\n\n This method accepts an IO executor and information about the\n downloaded data, and handles submitting this to the IO executor.\n\n This method may defer submission to the IO executor if necessary."} {"_id": "q_6898", "text": "Retrieves a class for managing output for a download\n\n :type transfer_future: s3transfer.futures.TransferFuture\n :param transfer_future: The transfer future for the request\n\n :type osutil: s3transfer.utils.OSUtils\n :param osutil: The os utility associated to the transfer\n\n :rtype: class of DownloadOutputManager\n :returns: The appropriate class to use for managing a specific type of\n input for downloads."} {"_id": "q_6899", "text": "Downloads an object and places content into io queue\n\n :param client: The client to use when calling GetObject\n :param bucket: The bucket to download from\n :param key: The key to download from\n :param fileobj: The file handle to write content to\n :param extra_args: Any extra arguments to include in GetObject request\n :param callbacks: List of progress callbacks to invoke on download\n :param max_attempts: The number of retries to do when downloading\n :param download_output_manager: The download output manager associated\n with the current download.\n :param io_chunksize: The size of each io chunk to read from the\n download stream and queue in the io queue.\n :param start_index: The location in the file to start writing the\n content of the key to.\n :param bandwidth_limiter: The bandwidth limiter to use when throttling\n the downloading of data in streams."} {"_id": "q_6900", "text": "Pulls off an io queue to write contents to a file\n\n :param fileobj: The file handle to write content to\n :param data: The data to write\n :param offset: The offset to write the data to."}
{"_id": "q_6901", "text": "Request any available writes given new incoming data.\n\n You call this method by providing new data along with the\n offset associated with the data. If that new data unlocks\n any contiguous writes that can now be submitted, this\n method will return all applicable writes.\n\n This is done with 1 method call so you don't have to\n make two method calls (put(), get()), each of which acquires a\n lock."} {"_id": "q_6902", "text": "Backwards compat function to determine if a fileobj is seekable\n\n :param fileobj: The file-like object to determine if seekable\n\n :returns: True, if seekable. False, otherwise."} {"_id": "q_6903", "text": "Downloads a file from S3\n\n :type bucket: str\n :param bucket: The name of the bucket to download from\n\n :type key: str\n :param key: The name of the key to download from\n\n :type fileobj: str or seekable file-like object\n :param fileobj: The name of a file to download or a seekable file-like\n object to download. It is recommended to use a filename because\n file-like objects may result in higher memory usage.\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n client operation\n\n :type subscribers: list(s3transfer.subscribers.BaseSubscriber)\n :param subscribers: The list of subscribers to be invoked in the\n order provided based on the event emit during the process of\n the transfer request.\n\n :rtype: s3transfer.futures.TransferFuture\n :returns: Transfer future representing the download"} {"_id": "q_6904", "text": "Copies a file in S3\n\n :type copy_source: dict\n :param copy_source: The name of the source bucket, key name of the\n source object, and optional version ID of the source object. The\n dictionary format is:\n ``{'Bucket': 'bucket', 'Key': 'key', 'VersionId': 'id'}``.
Note\n that the ``VersionId`` key is optional and may be omitted.\n\n :type bucket: str\n :param bucket: The name of the bucket to copy to\n\n :type key: str\n :param key: The name of the key to copy to\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n client operation\n\n :type subscribers: a list of subscribers\n :param subscribers: The list of subscribers to be invoked in the\n order provided based on the event emit during the process of\n the transfer request.\n\n :type source_client: botocore or boto3 Client\n :param source_client: The client to be used for operation that\n may happen at the source object. For example, this client is\n used for the head_object that determines the size of the copy.\n If no client is provided, the transfer manager's client is used\n as the client for the source object.\n\n :rtype: s3transfer.futures.TransferFuture\n :returns: Transfer future representing the copy"} {"_id": "q_6905", "text": "Delete an S3 object.\n\n :type bucket: str\n :param bucket: The name of the bucket.\n\n :type key: str\n :param key: The name of the S3 object to delete.\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n DeleteObject call.\n\n :type subscribers: list\n :param subscribers: A list of subscribers to be invoked during the\n process of the transfer request. Note that the ``on_progress``\n callback is not invoked during object deletion.\n\n :rtype: s3transfer.futures.TransferFuture\n :return: Transfer future representing the deletion."} {"_id": "q_6906", "text": "Shutdown the TransferManager\n\n It will wait till all transfers complete before it completely shuts\n down.\n\n :type cancel: boolean\n :param cancel: If True, calls TransferFuture.cancel() for\n all in-progress in transfers. 
This is useful if you want the\n shutdown to happen quicker.\n\n :type cancel_msg: str\n :param cancel_msg: The message to specify if canceling all in-progress\n transfers."} {"_id": "q_6907", "text": "Cancels all inprogress transfers\n\n This cancels the inprogress transfers by calling cancel() on all\n tracked transfer coordinators.\n\n :param msg: The message to pass on to each transfer coordinator that\n gets cancelled.\n\n :param exc_type: The type of exception to set for the cancellation"} {"_id": "q_6908", "text": "Wait until there are no more inprogress transfers\n\n This will not stop when failures are encountered and will not propagate any\n of these errors from failed transfers, but it can be interrupted with\n a KeyboardInterrupt."} {"_id": "q_6909", "text": "Retrieves a class for managing input for an upload based on file type\n\n :type transfer_future: s3transfer.futures.TransferFuture\n :param transfer_future: The transfer future for the request\n\n :rtype: class of UploadInputManager\n :returns: The appropriate class to use for managing a specific type of\n input for uploads."} {"_id": "q_6910", "text": "Sets the exception on the future."} {"_id": "q_6911", "text": "Set a result for the TransferFuture\n\n Implies that the TransferFuture succeeded. This will always set a\n result because it is invoked on the final task where there is only\n ever one final task and it is run at the very end of a transfer\n process.
So if a result is being set for this final task, the transfer\n succeeded even if something came along and canceled the transfer\n on the final task."} {"_id": "q_6912", "text": "Set an exception for the TransferFuture\n\n Implies the TransferFuture failed.\n\n :param exception: The exception that caused the transfer to fail.\n :param override: If True, override any existing state."} {"_id": "q_6913", "text": "Cancels the TransferFuture\n\n :param msg: The message to attach to the cancellation\n :param exc_type: The type of exception to set for the cancellation"} {"_id": "q_6914", "text": "Submits a task to a provided executor\n\n :type executor: s3transfer.futures.BoundedExecutor\n :param executor: The executor to submit the callable to\n\n :type task: s3transfer.tasks.Task\n :param task: The task to submit to the executor\n\n :type tag: s3transfer.futures.TaskTag\n :param tag: A tag to associate to the submitted task\n\n :rtype: concurrent.futures.Future\n :returns: A future representing the submitted task"} {"_id": "q_6915", "text": "Add a done callback to be invoked when transfer is done"} {"_id": "q_6916", "text": "Adds a callback to call upon failure"} {"_id": "q_6917", "text": "Announce that future is done running and run associated callbacks\n\n This will run any failure cleanups if the transfer failed and\n they have not been run, allows the result() to be unblocked, and will\n run any done callbacks associated to the TransferFuture if they have\n not already been run."} {"_id": "q_6918", "text": "Submit a task to complete\n\n :type task: s3transfer.tasks.Task\n :param task: The task to run __call__ on\n\n\n :type tag: s3transfer.futures.TaskTag\n :param tag: An optional tag to associate to the task.
This\n is used to override which semaphore to use.\n\n :type block: boolean\n :param block: If True, wait until it is possible to submit a task. If\n False, do not wait; instead, raise an error if the task cannot be\n submitted.\n\n :returns: The future associated with the submitted task"} {"_id": "q_6919", "text": "Upload a file to an S3 object.\n\n Variants have also been injected into S3 client, Bucket and Object.\n You don't have to use S3Transfer.upload_file() directly."} {"_id": "q_6920", "text": "Download an S3 object to a file.\n\n Variants have also been injected into S3 client, Bucket and Object.\n You don't have to use S3Transfer.download_file() directly."} {"_id": "q_6921", "text": "Find functions with step decorator in parsed file"} {"_id": "q_6922", "text": "Get the arguments passed to step decorators\n converted to python objects."} {"_id": "q_6923", "text": "Find the step with old_text and change it to new_text. The step function\n parameters are also changed according to move_param_from_idx.\n Each entry in this list should specify the parameter position from the\n old step."} {"_id": "q_6924", "text": "Find functions with step decorator in parsed file."} {"_id": "q_6925", "text": "Get arguments passed to step decorators converted to python objects."} {"_id": "q_6926", "text": "Find the step with old_text and change it to new_text.\n The step function parameters are also changed according\n to move_param_from_idx. Each entry in this list should\n specify the parameter position from the old step."} {"_id": "q_6927", "text": "Select the default parser for loading and refactoring steps. Passing `redbaron` as the argument\n will select the old parsing engine from v0.3.3.\n\n Replacing the redbaron parser was necessary to support Python 3 syntax. We have tried our\n best to make sure there is no impact on users.
However, there may be regressions with the\n new parser backend.\n\n To revert to the old parser implementation, add the `GETGAUGE_USE_0_3_3_PARSER=true` property\n to the `python.properties` file in the `/env/default` directory.\n\n This property, along with the redbaron parser, will be removed in future releases."} {"_id": "q_6928", "text": "List team memberships for a team, by ID.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all team memberships returned by\n the query. The generator will automatically request additional 'pages'\n of responses from Webex as needed until all responses have been\n returned. The container makes the generator safe for reuse. A new API\n call will be made, using the same parameters that were specified when\n the generator was created, every time a new iterator is requested from\n the container.\n\n Args:\n teamId(basestring): List team memberships for a team, by ID.\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the team memberships returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6929", "text": "Add someone to a team by Person ID or email address.\n\n Add someone to a team by Person ID or email address; optionally making\n them a moderator.\n\n Args:\n teamId(basestring): The team ID.\n personId(basestring): The person ID.\n personEmail(basestring): The email address of the person.\n isModerator(bool): Set to True to make the person a team moderator.\n **request_parameters: Additional request parameters (provides\n support for parameters that may
be added in the future).\n\n Returns:\n TeamMembership: A TeamMembership object with the details of the\n created team membership.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6930", "text": "Update a team membership, by ID.\n\n Args:\n membershipId(basestring): The team membership ID.\n isModerator(bool): Set to True to make the person a team moderator.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n TeamMembership: A TeamMembership object with the updated Webex\n Teams team-membership details.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6931", "text": "Delete a team membership, by ID.\n\n Args:\n membershipId(basestring): The team membership ID.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6932", "text": "Get a cat fact from catfact.ninja and return it as a string.\n\n Functions for Soundhound, Google, IBM Watson, or other APIs can be added\n to build the desired functionality into this bot."} {"_id": "q_6933", "text": "Respond to inbound webhook JSON HTTP POSTs from Webex Teams."} {"_id": "q_6934", "text": "List room memberships.\n\n By default, lists memberships for rooms to which the authenticated user\n belongs.\n\n Use query parameters to filter the response.\n\n Use `roomId` to list memberships for a room, by ID.\n\n Use either `personId` or `personEmail` to filter the results.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all memberships returned by the\n query.
The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n roomId(basestring): Limit results to a specific room, by ID.\n personId(basestring): Limit results to a specific person, by ID.\n personEmail(basestring): Limit results to a specific person, by\n email address.\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the memberships returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6935", "text": "Delete a membership, by ID.\n\n Args:\n membershipId(basestring): The membership ID.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6936", "text": "Check to see if a string is a validly formatted web URL."} {"_id": "q_6937", "text": "Open the file and return an EncodableFile tuple."} {"_id": "q_6938", "text": "Object is an instance of one of the acceptable types or None.\n\n Args:\n o: The object to be inspected.\n acceptable_types: A type or tuple of acceptable types.\n may_be_none(bool): Whether or not the object may be None.\n\n Raises:\n TypeError: If the object is None and may_be_none=False, or if the\n object is not an instance of one of the acceptable types."} {"_id": "q_6939", "text": "Check response code against the expected code; raise ApiError.\n\n Checks the requests.response.status_code against the
provided expected\n response code (erc), and raises an ApiError if they do not match.\n\n Args:\n response(requests.response): The response object returned by a request\n using the requests package.\n expected_response_code(int): The expected response code (HTTP response\n code).\n\n Raises:\n ApiError: If the requests.response.status_code does not match the\n provided expected response code (erc)."} {"_id": "q_6940", "text": "Given a dictionary or JSON string; return a dictionary.\n\n Args:\n json_data(dict, str): Input JSON object.\n\n Returns:\n A Python dictionary with the contents of the JSON object.\n\n Raises:\n TypeError: If the input object is not a dictionary or string."} {"_id": "q_6941", "text": "strptime with the Webex Teams DateTime format as the default."} {"_id": "q_6942", "text": "List rooms.\n\n By default, lists rooms to which the authenticated user belongs.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all rooms returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n teamId(basestring): Limit the rooms to those associated with a\n team, by ID.\n type(basestring): `direct` returns all 1-to-1 rooms. `group`\n returns all group rooms.
If not specified, or if the value does\n not match, all room types will be returned.\n sortBy(basestring): Sort results by room ID (`id`), most recent\n activity (`lastactivity`), or most recently created\n (`created`).\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the rooms returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6943", "text": "Creation date and time in ISO8601 format."} {"_id": "q_6944", "text": "Attempt to get the access token from the environment.\n\n Try using the current and legacy environment variables. If the access token\n is found in a legacy environment variable, raise a deprecation warning.\n\n Returns:\n The access token found in the environment (str), or None."} {"_id": "q_6945", "text": "Create a webhook.\n\n Args:\n name(basestring): A user-friendly name for this webhook.\n targetUrl(basestring): The URL that receives POST requests for\n each event.\n resource(basestring): The resource type for the webhook.\n event(basestring): The event type for the webhook.\n filter(basestring): The filter that defines the webhook scope.\n secret(basestring): The secret used to generate payload signature.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Webhook: A Webhook object with the details of the created webhook.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6946", "text": "Update the HTTP headers used for requests in this session.\n\n Note: Updates provided by the dictionary passed as the `headers`\n parameter to this
method are merged into the session headers by adding\n new key-value pairs and/or updating the values of existing keys. The\n session headers are not replaced by the provided dictionary.\n\n Args:\n headers(dict): Updates to the current session headers."} {"_id": "q_6947", "text": "Given a relative or absolute URL; return an absolute URL.\n\n Args:\n url(basestring): A relative or absolute URL.\n\n Returns:\n str: An absolute URL."} {"_id": "q_6948", "text": "Abstract base method for making requests to the Webex Teams APIs.\n\n This base method:\n * Expands the API endpoint URL to an absolute URL\n * Makes the actual HTTP request to the API endpoint\n * Provides support for Webex Teams rate-limiting\n * Inspects response codes and raises exceptions as appropriate\n\n Args:\n method(basestring): The request-method type ('GET', 'POST', etc.).\n url(basestring): The URL of the API endpoint to be called.\n erc(int): The expected response code that should be returned by the\n Webex Teams API endpoint to indicate success.\n **kwargs: Passed on to the requests package.\n\n Raises:\n ApiError: If anything other than the expected response code is\n returned by the Webex Teams API endpoint."} {"_id": "q_6949", "text": "Sends a GET request.\n\n Args:\n url(basestring): The URL of the API endpoint.\n params(dict): The parameters for the HTTP GET request.\n **kwargs:\n erc(int): The expected (success) response code for the request.\n others: Passed on to the requests package.\n\n Raises:\n ApiError: If anything other than the expected response code is\n returned by the Webex Teams API endpoint."} {"_id": "q_6950", "text": "Return a generator that GETs and yields pages of data.\n\n Provides native support for RFC5988 Web Linking.\n\n Args:\n url(basestring): The URL of the API endpoint.\n params(dict): The parameters for the HTTP GET request.\n **kwargs:\n erc(int): The expected (success) response code for the request.\n others: Passed on to the requests package.\n\n Raises:\n 
ApiError: If anything other than the expected response code is\n returned by the Webex Teams API endpoint."} {"_id": "q_6951", "text": "Sends a DELETE request.\n\n Args:\n url(basestring): The URL of the API endpoint.\n **kwargs:\n erc(int): The expected (success) response code for the request.\n others: Passed on to the requests package.\n\n Raises:\n ApiError: If anything other than the expected response code is\n returned by the Webex Teams API endpoint."} {"_id": "q_6952", "text": "Create a new guest issuer using the provided issuer token.\n\n This function returns a guest issuer with an API access token.\n\n Args:\n subject(basestring): Unique and public identifier\n displayName(basestring): Display name of the guest user\n issuerToken(basestring): Issuer token from the developer hub\n expiration(basestring): Expiration time as a Unix timestamp\n secret(basestring): The secret used to sign your guest issuers\n\n Returns:\n GuestIssuerToken: A guest issuer with a valid access token.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6953", "text": "Lists messages in a room.\n\n Each message will include content attachments if present.\n\n The list API sorts the messages in descending order by creation date.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all messages returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse.
A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n roomId(basestring): List messages for a room, by ID.\n mentionedPeople(basestring): List messages where the caller is\n mentioned by specifying \"me\" or the caller `personId`.\n before(basestring): List messages sent before a date and time, in\n ISO8601 format.\n beforeMessage(basestring): List messages sent before a message,\n by ID.\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the messages returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6954", "text": "Post a message, and optionally an attachment, to a room.\n\n The files parameter is a list, which accepts multiple values to allow\n for future expansion, but currently only one file may be included with\n the message.\n\n Args:\n roomId(basestring): The room ID.\n toPersonId(basestring): The ID of the recipient when sending a\n private 1:1 message.\n toPersonEmail(basestring): The email address of the recipient when\n sending a private 1:1 message.\n text(basestring): The message, in plain text. If `markdown` is\n specified this parameter may be optionally used to provide\n alternate text for UI clients that do not support rich text.\n markdown(basestring): The message, in markdown format.\n files(`list`): A list of public URL(s) or local path(s) to files to\n be posted into the room.
Only one file is allowed per message.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Message: A Message object with the details of the created message.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n ValueError: If the files parameter is a list of length > 1, or if\n the string in the list (the only element in the list) does not\n contain a valid URL or path to a local file."} {"_id": "q_6955", "text": "Delete a message.\n\n Args:\n messageId(basestring): The ID of the message to be deleted.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6956", "text": "Create a new user account for a given organization\n\n Only an admin can create a new user account.\n\n Args:\n emails(`list`): Email address(es) of the person (list of strings).\n displayName(basestring): Full name of the person.\n firstName(basestring): First name of the person.\n lastName(basestring): Last name of the person.\n avatar(basestring): URL to the person's avatar in PNG format.\n orgId(basestring): ID of the organization to which this\n person belongs.\n roles(`list`): Roles of the person (list of strings containing\n the role IDs to be assigned to the person).\n licenses(`list`): Licenses allocated to the person (list of\n strings - containing the license IDs to be allocated to the\n person).\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Person: A Person object with the details of the created person.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6957", "text": "Get a person's details, by ID.\n\n Args:\n personId(basestring): The ID of the person to be retrieved.\n\n Returns:\n Person: 
A Person object with the details of the requested person.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6958", "text": "Update details for a person, by ID.\n\n Only an admin can update a person's details.\n\n Email addresses for a person cannot be changed via the Webex Teams API.\n\n Include all details for the person. This action expects all user\n details to be present in the request. A common approach is to first GET\n the person's details, make changes, then PUT both the changed and\n unchanged values.\n\n Args:\n personId(basestring): The person ID.\n emails(`list`): Email address(es) of the person (list of strings).\n displayName(basestring): Full name of the person.\n firstName(basestring): First name of the person.\n lastName(basestring): Last name of the person.\n avatar(basestring): URL to the person's avatar in PNG format.\n orgId(basestring): ID of the organization to which this\n person belongs.\n roles(`list`): Roles of the person (list of strings containing\n the role IDs to be assigned to the person).\n licenses(`list`): Licenses allocated to the person (list of\n strings - containing the license IDs to be allocated to the\n person).\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Person: A Person object with the updated details.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6959", "text": "Remove a person from the system.\n\n Only an admin can remove a person.\n\n Args:\n personId(basestring): The ID of the person to be deleted.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6960", "text": "List teams to which the authenticated user belongs.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n 
Linking to provide pagination support. It returns a generator\n container that incrementally yields all teams returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the teams returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6961", "text": "Update details for a team, by ID.\n\n Args:\n teamId(basestring): The team ID.\n name(basestring): A user-friendly name for the team.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Team: A Team object with the updated Webex Teams team details.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6962", "text": "List events.\n\n List events in your organization. Several query parameters are\n available to filter the response.\n\n Note: `from` is a keyword in Python and may not be used as a variable\n name, so we had to use `_from` instead.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all events returned by the\n query. 
The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n resource(basestring): Limit results to a specific resource type.\n Possible values: \"messages\", \"memberships\".\n type(basestring): Limit results to a specific event type. Possible\n values: \"created\", \"updated\", \"deleted\".\n actorId(basestring): Limit results to events performed by this\n person, by ID.\n _from(basestring): Limit results to events which occurred after a\n date and time, in ISO8601 format (yyyy-MM-dd'T'HH:mm:ss.SSSZ).\n to(basestring): Limit results to events which occurred before a\n date and time, in ISO8601 format (yyyy-MM-dd'T'HH:mm:ss.SSSZ).\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the events returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"_id": "q_6963", "text": "Respond to inbound webhook JSON HTTP POST from Webex Teams."} {"_id": "q_6964", "text": "Get the ngrok public HTTP URL from the local client API."} {"_id": "q_6965", "text": "Create a Webex Teams webhook pointing to the public ngrok URL."} {"_id": "q_6966", "text": "Return all rows from a cursor as a dict."} {"_id": "q_6967", "text": "Parse a received datetime into a timezone-aware, Python datetime object.\n\n Arguments:\n datetime_string: A string to be parsed.\n datetime_format: A datetime format string to be used for parsing."} {"_id":
"q_6968", "text": "Connect to the REST API, authenticating with a JWT for the current user."} {"_id": "q_6969", "text": "Return redirect to embargo error page if the given user is blocked."} {"_id": "q_6970", "text": "Sort the course mode dictionaries by slug according to the COURSE_MODE_SORT_ORDER constant.\n\n Arguments:\n modes (list): A list of course mode dictionaries.\n Returns:\n list: A list with the course modes dictionaries sorted by slug."} {"_id": "q_6971", "text": "Query the Enrollment API to see whether a course run has a given course mode available.\n\n Arguments:\n course_run_id (str): The string value of the course run's unique identifier\n\n Returns:\n bool: Whether the course run has the given mode avaialble for enrollment."} {"_id": "q_6972", "text": "Call the enrollment API to enroll the user in the course specified by course_id.\n\n Args:\n username (str): The username by which the user goes on the OpenEdX platform\n course_id (str): The string value of the course's unique identifier\n mode (str): The enrollment mode which should be used for the enrollment\n cohort (str): Add the user to this named cohort\n\n Returns:\n dict: A dictionary containing details of the enrollment, including course details, mode, username, etc."} {"_id": "q_6973", "text": "Query the enrollment API to get information about a single course enrollment.\n\n Args:\n username (str): The username by which the user goes on the OpenEdX platform\n course_id (str): The string value of the course's unique identifier\n\n Returns:\n dict: A dictionary containing details of the enrollment, including course details, mode, username, etc."} {"_id": "q_6974", "text": "Query the enrollment API and determine if a learner is enrolled in a course run.\n\n Args:\n username (str): The username by which the user goes on the OpenEdX platform\n course_run_id (str): The string value of the course's unique identifier\n\n Returns:\n bool: Indicating whether the user is enrolled in the course run. 
Returns False on any error."} {"_id": "q_6975", "text": "Return a Course Discovery API client set up with authentication for the specified user."} {"_id": "q_6976", "text": "Return the specified course catalog.\n\n Returns:\n dict: Catalog details if it is available for the user."} {"_id": "q_6977", "text": "Return a paginated response for all catalog courses.\n\n Returns:\n dict: API response with links to next and previous pages."} {"_id": "q_6978", "text": "Return a paginated list of course catalogs, including name and ID.\n\n Returns:\n dict: Paginated response containing catalogs available for the user."} {"_id": "q_6979", "text": "Return the courses included in a single course catalog by ID.\n\n Args:\n catalog_id (int): The catalog ID we want to retrieve.\n\n Returns:\n list: Courses of the catalog in question"} {"_id": "q_6980", "text": "Return a single program by UUID, or None if not found.\n\n Arguments:\n program_uuid(string): Program UUID in string form\n\n Returns:\n dict: Program data provided by the Course Catalog API"} {"_id": "q_6981", "text": "Get a program type by its slug.\n\n Arguments:\n slug (str): The slug to identify the program type.\n\n Returns:\n dict: A program type object."} {"_id": "q_6982", "text": "Find common course modes for a set of course runs.\n\n This function essentially returns an intersection of the types of seats available\n for each course run.\n\n Arguments:\n course_run_ids(Iterable[str]): Target course run IDs.\n\n Returns:\n set: Course modes found in all given course runs\n\n Examples:\n # run1 has prof and audit, run 2 has the same\n get_common_course_modes(['course-v1:run1', 'course-v1:run2'])\n {'prof', 'audit'}\n\n # run1 has prof and audit, run 2 has only prof\n get_common_course_modes(['course-v1:run1', 'course-v1:run2'])\n {'prof'}\n\n # run1 has prof and audit, run 2 has honor\n get_common_course_modes(['course-v1:run1', 'course-v1:run2'])\n {}\n\n # run1 has nothing, run2 has prof\n get_common_course_modes(['course-v1:run1',
'course-v1:run2'])\n {}\n\n # run1 has prof and audit, run 2 prof, run3 has audit\n get_common_course_modes(['course-v1:run1', 'course-v1:run2', 'course-v1:run3'])\n {}\n\n # run1 has nothing, run 2 prof, run3 has prof\n get_common_course_modes(['course-v1:run1', 'course-v1:run2', 'course-v1:run3'])\n {}"} {"_id": "q_6983", "text": "Determine if the given course or course run ID is contained in the catalog with the given ID.\n\n Args:\n catalog_id (int): The ID of the catalog\n course_id (str): The ID of the course or course run\n\n Returns:\n bool: Whether the course or course run is contained in the given catalog"} {"_id": "q_6984", "text": "Load data from API client.\n\n Arguments:\n resource(string): type of resource to load\n default(any): value to return if API query returned empty result. Sensible values: [], {}, None etc.\n\n Returns:\n dict: Deserialized response from Course Catalog API"} {"_id": "q_6985", "text": "Return all content metadata contained in the catalogs associated with the EnterpriseCustomer.\n\n Arguments:\n enterprise_customer (EnterpriseCustomer): The EnterpriseCustomer to return content metadata for.\n\n Returns:\n list: List of dicts containing content metadata."} {"_id": "q_6986", "text": "Return items that need to be created, updated, and deleted along with the\n current ContentMetadataItemTransmissions."} {"_id": "q_6987", "text": "Serialize content metadata items for a create transmission to the integrated channel."} {"_id": "q_6988", "text": "Transmit content metadata update to integrated channel."} {"_id": "q_6989", "text": "Transmit content metadata deletion to integrated channel."} {"_id": "q_6990", "text": "Return the ContentMetadataItemTransmision models for previously\n transmitted content metadata items."} {"_id": "q_6991", "text": "Update ContentMetadataItemTransmision models for the given content metadata items."} {"_id": "q_6992", "text": "Flag a method as deprecated.\n\n :param extra: Extra text you'd like to display 
after the default text."} {"_id": "q_6993", "text": "View decorator for allowing an authenticated user with a valid enterprise UUID.\n\n This decorator requires an enterprise identifier as the parameter\n `enterprise_uuid`.\n\n This decorator will raise a 404 if no kwarg `enterprise_uuid` is provided to\n the decorated view.\n\n If there is no enterprise in the database matching the kwarg `enterprise_uuid`,\n or if the user is not authenticated, then it will redirect the user to the\n enterprise-linked SSO login page.\n\n Usage::\n @enterprise_login_required()\n def my_view(request, enterprise_uuid):\n # Some functionality ...\n\n OR\n\n class MyView(View):\n ...\n @method_decorator(enterprise_login_required)\n def get(self, request, enterprise_uuid):\n # Some functionality ..."} {"_id": "q_6994", "text": "Verify that the username has a matching user, and that the user has an associated EnterpriseCustomerUser."} {"_id": "q_6995", "text": "Save the model with the found EnterpriseCustomerUser."} {"_id": "q_6996", "text": "Serialize the EnterpriseCustomerCatalog object.\n\n Arguments:\n instance (EnterpriseCustomerCatalog): The EnterpriseCustomerCatalog to serialize.\n\n Returns:\n dict: The EnterpriseCustomerCatalog converted to a dict."} {"_id": "q_6997", "text": "Return the enterprise-related Django groups that this user is a part of."} {"_id": "q_6998", "text": "Verify that the username has a matching user."} {"_id": "q_6999", "text": "Save the EnterpriseCustomerUser."} {"_id": "q_7000", "text": "Return the updated course data dictionary.\n\n Arguments:\n instance (dict): The course data.\n\n Returns:\n dict: The updated course data."} {"_id": "q_7001", "text": "Return the updated course run data dictionary.\n\n Arguments:\n instance (dict): The course run data.\n\n Returns:\n dict: The updated course run data."} {"_id": "q_7002", "text": "Return the updated program data dictionary.\n\n Arguments:\n instance (dict): The program data.\n\n Returns:\n dict: The updated program data."}
{"_id": "q_7003", "text": "This implements the same relevant logic as ListSerializer except that if one or more items fail validation,\n processing for other items that did not fail will continue."} {"_id": "q_7004", "text": "This selectively calls the child create method based on whether or not validation failed for each payload."} {"_id": "q_7005", "text": "This selectively calls to_representation on each result that was processed by create."} {"_id": "q_7006", "text": "Perform the enrollment for existing enterprise customer users, or create the pending objects for new users."} {"_id": "q_7007", "text": "Validates the lms_user_id, if it is given, to see if there is an existing EnterpriseCustomerUser for it."} {"_id": "q_7008", "text": "Validates the tpa_user_id, if it is given, to see if there is an existing EnterpriseCustomerUser for it.\n\n It first uses the third party auth API to find the associated username to do the lookup."} {"_id": "q_7009", "text": "Validates the user_email, if given, to see if an existing EnterpriseCustomerUser exists for it.\n\n If it does not, it does not fail validation, unlike the other field validation methods above."} {"_id": "q_7010", "text": "Validates that the course run id is part of the Enterprise Customer's catalog."} {"_id": "q_7011", "text": "Update pagination links in course catalog data and return a DRF Response.\n\n Arguments:\n data (dict): Dictionary containing catalog courses.\n request (HttpRequest): Current request object.\n\n Returns:\n (Response): DRF response object containing pagination links."} {"_id": "q_7012", "text": "Delete the `role_based_access_control` switch."} {"_id": "q_7013", "text": "Send a completion status call to SAP SuccessFactors using the client.\n\n Args:\n payload: The learner completion data payload to send to SAP SuccessFactors"} {"_id": "q_7014", "text": "Modify throttling for service users.\n\n Updates throttling rate if the request is coming from the service user, and\n defaults to 
UserRateThrottle's configured setting otherwise.\n\n The updated throttling rate comes from the `DEFAULT_THROTTLE_RATES` key in the `REST_FRAMEWORK`\n setting. Service user throttling is specified in `DEFAULT_THROTTLE_RATES` by the `service_user` key.\n\n Example Setting:\n ```\n REST_FRAMEWORK = {\n ...\n 'DEFAULT_THROTTLE_RATES': {\n ...\n 'service_user': '50/day'\n }\n }\n ```"} {"_id": "q_7015", "text": "This method adds enterprise-specific metadata for each course.\n\n We add the following fields to all courses.\n tpa_hint: a string for identifying the Identity Provider.\n enterprise_id: the UUID of the enterprise\n **kwargs: any additional data one would like to add on a per-use basis.\n\n Arguments:\n enterprise_customer: The customer whose data will be used to fill the enterprise context.\n course_container_key: The key used to find the container for courses in the serializer's data dictionary."} {"_id": "q_7016", "text": "Update course metadata of the given course and return the updated course.\n\n Arguments:\n course (dict): Course metadata returned by the course catalog API\n enterprise_customer (EnterpriseCustomer): enterprise customer instance.\n enterprise_context (dict): Enterprise context to be added to course runs and URLs.\n\n Returns:\n (dict): Updated course metadata"} {"_id": "q_7017", "text": "Collect learner data for the ``EnterpriseCustomer`` where data sharing consent is granted.\n\n Yields a learner data object for each enrollment, containing:\n\n * ``enterprise_enrollment``: ``EnterpriseCourseEnrollment`` object.\n * ``completed_date``: datetime instance containing the course/enrollment completion date; None if not complete.\n \"Course completion\" occurs for instructor-paced courses when course certificates are issued, and\n for self-paced courses, when the course end date is passed, or when the learner achieves a passing grade.\n * ``grade``: string grade recorded for the learner in the course."} {"_id": "q_7018", "text": "Generate a learner data 
transmission audit with fields properly filled in."} {"_id": "q_7019", "text": "Get enterprise user id from user object.\n\n Arguments:\n obj (User): Django User object\n\n Returns:\n (int): Primary Key identifier for enterprise user object."} {"_id": "q_7020", "text": "Get enterprise SSO UID.\n\n Arguments:\n obj (User): Django User object\n\n Returns:\n (str): string containing UUID for enterprise customer's Identity Provider."} {"_id": "q_7021", "text": "Remove content metadata items from the `items_to_create`, `items_to_update`, `items_to_delete` dicts.\n\n Arguments:\n failed_items (list): Failed Items to be removed.\n items_to_create (dict): dict containing the items created successfully.\n items_to_update (dict): dict containing the items updated successfully.\n items_to_delete (dict): dict containing the items deleted successfully."} {"_id": "q_7022", "text": "Parse and validate arguments for send_course_enrollments command.\n\n Arguments:\n *args: Positional arguments passed to the command\n **options: optional arguments passed to the command\n\n Returns:\n A tuple containing parsed values for\n 1. days (int): Integer showing number of days to lookup enterprise enrollments,\n course completion etc and send to xAPI LRS\n 2. enterprise_customer_uuid (EnterpriseCustomer): Enterprise Customer if present then\n send xAPI statements just for this enterprise."} {"_id": "q_7023", "text": "Send xAPI statements."} {"_id": "q_7024", "text": "Django template tag that returns course information to display in a modal.\n\n You may pass in a particular course if you like. 
Otherwise, the modal will look for course context\n within the parent context.\n\n Usage:\n {% course_modal %}\n {% course_modal course %}"} {"_id": "q_7025", "text": "Django template filter that returns an anchor with attributes useful for course modal selection.\n\n General Usage:\n {{ link_text|link_to_modal:index }}\n\n Examples:\n {{ course_title|link_to_modal:forloop.counter0 }}\n {{ course_title|link_to_modal:3 }}\n {{ view_details_text|link_to_modal:0 }}"} {"_id": "q_7026", "text": "Populates the ``DataSharingConsent`` model with the ``enterprise`` application's consent data.\n\n Consent data from the ``enterprise`` application come from the ``EnterpriseCourseEnrollment`` model."} {"_id": "q_7027", "text": "Send a completion status payload to the Degreed Completion Status endpoint\n\n Args:\n user_id: Unused.\n payload: JSON encoded object (serialized from DegreedLearnerDataTransmissionAudit)\n containing completion status fields per Degreed documentation.\n\n Returns:\n A tuple containing the status code and the body of the response.\n Raises:\n HTTPError: if we received a failure response code from Degreed"} {"_id": "q_7028", "text": "Delete a completion status previously sent to the Degreed Completion Status endpoint\n\n Args:\n user_id: Unused.\n payload: JSON encoded object (serialized from DegreedLearnerDataTransmissionAudit)\n containing the required completion status fields for deletion per Degreed documentation.\n\n Returns:\n A tuple containing the status code and the body of the response.\n Raises:\n HTTPError: if we received a failure response code from Degreed"} {"_id": "q_7029", "text": "Make a DELETE request using the session object to a Degreed endpoint.\n\n Args:\n url (str): The url to send a DELETE request to.\n data (str): The json encoded payload to DELETE.\n scope (str): Must be one of the scopes Degreed expects:\n - `CONTENT_PROVIDER_SCOPE`\n - `COMPLETION_PROVIDER_SCOPE`"} {"_id": "q_7030", "text": "Instantiate a new session object 
for use in connecting with Degreed"} {"_id": "q_7031", "text": "Return whether or not the specified content is available to the EnterpriseCustomer.\n\n Multiple course_run_ids and/or program_uuids query parameters can be sent to this view to check\n for their existence in the EnterpriseCustomerCatalogs associated with this EnterpriseCustomer.\n At least one course run key or program UUID value must be included in the request."} {"_id": "q_7032", "text": "Returns the list of enterprise customers the user has a specified group permission access to."} {"_id": "q_7033", "text": "Retrieve the list of entitlements available to this learner.\n\n Only those entitlements are returned that satisfy enterprise customer's data sharing setting.\n\n Arguments:\n request (HttpRequest): Reference to in-progress request instance.\n pk (Int): Primary key value of the selected enterprise learner.\n\n Returns:\n (HttpResponse): Response object containing a list of learner's entitlements."} {"_id": "q_7034", "text": "Return whether or not the EnterpriseCustomerCatalog contains the specified content.\n\n Multiple course_run_ids and/or program_uuids query parameters can be sent to this view to check\n for their existence in the EnterpriseCustomerCatalog. 
At least one course run key\n or program UUID value must be included in the request."} {"_id": "q_7035", "text": "Return the metadata for the specified course.\n\n The course needs to be included in the specified EnterpriseCustomerCatalog\n in order for metadata to be returned from this endpoint."} {"_id": "q_7036", "text": "DRF view to get catalog details.\n\n Arguments:\n request (HttpRequest): Current request\n pk (int): Course catalog identifier\n\n Returns:\n (Response): DRF response object containing course catalogs."} {"_id": "q_7037", "text": "Gets ``email``, ``enterprise_name``, and ``number_of_codes``,\n which are the relevant parameters for this API endpoint.\n\n :param request: The request to this endpoint.\n :return: The ``email``, ``enterprise_name``, and ``number_of_codes`` from the request."} {"_id": "q_7038", "text": "Get a user-friendly message indicating a missing parameter for the API endpoint."} {"_id": "q_7039", "text": "Return the title of the content item."} {"_id": "q_7040", "text": "Return the description of the content item."} {"_id": "q_7041", "text": "Return the image URI of the content item."} {"_id": "q_7042", "text": "Return the content metadata item launch points.\n\n SAPSF allows you to transmit an array of content launch points which\n are meant to represent sections of a content item which a learner can\n launch into from SAPSF. 
Currently, we only provide a single launch\n point for a content item."} {"_id": "q_7043", "text": "Return the title of the courserun content item."} {"_id": "q_7044", "text": "Return the schedule of the courserun content item."} {"_id": "q_7045", "text": "Return the id for the given content_metadata_item, `uuid` for programs or `key` for other content"} {"_id": "q_7046", "text": "Convert an ISO-8601 datetime string to a Unix epoch timestamp in some magnitude.\n\n By default, returns seconds."} {"_id": "q_7047", "text": "Yield successive n-sized chunks from a dictionary."} {"_id": "q_7048", "text": "Convert a datetime.timedelta object or a regular number to a custom-formatted string.\n\n This function works like the strftime() method works for datetime.datetime\n objects.\n\n The fmt argument allows custom formatting to be specified. Fields can\n include seconds, minutes, hours, days, and weeks. Each field is optional.\n\n Arguments:\n tdelta (datetime.timedelta, int): time delta object containing the duration or an integer\n to go with the input_type.\n fmt (str): Expected format of the time delta. Placeholders can only be one of the following.\n 1. D to extract days from time delta\n 2. H to extract hours from time delta\n 3. M to extract minutes from time delta\n 4. S to extract seconds from time delta\n input_type (str): The input_type argument allows tdelta to be a regular number instead of the\n default, which is a datetime.timedelta object.\n Valid input_type strings:\n 1. 's', 'seconds',\n 2. 'm', 'minutes',\n 3. 'h', 'hours',\n 4. 'd', 'days',\n 5. 
'w', 'weeks'\n Returns:\n (str): timedelta object interpolated into a string following the given format.\n\n Examples:\n '{D:02}d {H:02}h {M:02}m {S:02}s' --> '05d 08h 04m 02s' (default)\n '{W}w {D}d {H}:{M:02}:{S:02}' --> '4w 5d 8:04:02'\n '{D:2}d {H:2}:{M:02}:{S:02}' --> ' 5d 8:04:02'\n '{H}h {S}s' --> '72h 800s'"} {"_id": "q_7049", "text": "Return the transformed version of the course description.\n\n We choose one value out of the course's full description, short description, and title\n depending on availability and length limits."} {"_id": "q_7050", "text": "Delete the file if it already exist and returns the enterprise customer logo image path.\n\n Arguments:\n instance (:class:`.EnterpriseCustomerBrandingConfiguration`): EnterpriseCustomerBrandingConfiguration object\n filename (str): file to upload\n\n Returns:\n path: path of image file e.g. enterprise/branding//_logo..lower()"} {"_id": "q_7051", "text": "Return link by email."} {"_id": "q_7052", "text": "Unlink user email from Enterprise Customer.\n\n If :class:`django.contrib.auth.models.User` instance with specified email does not exist,\n :class:`.PendingEnterpriseCustomerUser` instance is deleted instead.\n\n Raises EnterpriseCustomerUser.DoesNotExist if instance of :class:`django.contrib.auth.models.User` with\n specified email exists and corresponding :class:`.EnterpriseCustomerUser` instance does not.\n\n Raises PendingEnterpriseCustomerUser.DoesNotExist exception if instance of\n :class:`django.contrib.auth.models.User` with specified email exists and corresponding\n :class:`.PendingEnterpriseCustomerUser` instance does not."} {"_id": "q_7053", "text": "Get the data sharing consent object associated with a certain user, enterprise customer, and other scope.\n\n :param username: The user that grants consent\n :param enterprise_customer_uuid: The consent requester\n :param course_id (optional): A course ID to which consent may be related\n :param program_uuid (optional): A program to which consent 
may be related\n :return: The data sharing consent object, or None if the enterprise customer for the given UUID does not exist."} {"_id": "q_7054", "text": "Get the data sharing consent object associated with a certain user of a customer for a course.\n\n :param username: The user that grants consent.\n :param course_id: The course for which consent is granted.\n :param enterprise_customer_uuid: The consent requester.\n :return: The data sharing consent object"} {"_id": "q_7055", "text": "Send xAPI statement for course enrollment.\n\n Arguments:\n lrs_configuration (XAPILRSConfiguration): XAPILRSConfiguration instance where to send statements.\n course_enrollment (CourseEnrollment): Course enrollment object."} {"_id": "q_7056", "text": "Send xAPI statement for course completion.\n\n Arguments:\n lrs_configuration (XAPILRSConfiguration): XAPILRSConfiguration instance where to send statements.\n user (User): Django User object.\n course_overview (CourseOverview): Course overview object containing course details.\n course_grade (CourseGrade): course grade object."} {"_id": "q_7057", "text": "Return the exported and transformed content metadata as a dictionary."} {"_id": "q_7058", "text": "Transform the provided content metadata item to the schema expected by the integrated channel."} {"_id": "q_7059", "text": "Perform other one-time initialization steps."} {"_id": "q_7060", "text": "Get actor for the statement."} {"_id": "q_7061", "text": "Parse a CSV file and return a stream of dictionaries representing each row.\n\n The first line of the CSV file must contain column headers.\n\n Arguments:\n file_stream: input file\n expected_columns (set[unicode]): columns that are expected to be present\n\n Yields:\n dict: CSV line parsed into a dictionary."} {"_id": "q_7062", "text": "Validate email to be linked to Enterprise Customer.\n\n Performs two checks:\n * Checks that the email is valid\n * Checks that it is not already linked to any Enterprise Customer\n\n Arguments:\n email (str): 
user email to link\n raw_email (str): raw value as it was passed by the user - used in the error message.\n message_template (str): Validation error template string.\n ignore_existing (bool): If True, skip the check for an existing Enterprise Customer\n\n Raises:\n ValidationError: if email is invalid or already linked to an Enterprise Customer.\n\n Returns:\n bool: Whether or not there is an existing record with the same email address."} {"_id": "q_7063", "text": "Return course runs from program data.\n\n Arguments:\n program (dict): Program data from Course Catalog API\n\n Returns:\n set: course runs in the given program"} {"_id": "q_7064", "text": "Get the earliest date that one of the courses in the program was available.\n For the sake of emails to new learners, we treat this as the program start date.\n\n Arguments:\n program (dict): Program data from Course Catalog API\n\n Returns:\n datetime.datetime: The date and time at which the first course started"} {"_id": "q_7065", "text": "Returns a paginated list.\n\n Arguments:\n object_list (QuerySet): A list of records to be paginated.\n page (int): Current page number.\n page_size (int): Number of records displayed in each paginated set.\n show_all (bool): Whether to show all records.\n\n Adapted from django/contrib/admin/templatetags/admin_list.py\n https://github.com/django/django/blob/1.11.1/django/contrib/admin/templatetags/admin_list.py#L50"} {"_id": "q_7066", "text": "Clean the email form field.\n\n Returns:\n str: the cleaned value, converted to an email address (or an empty string)"} {"_id": "q_7067", "text": "Clean program.\n\n Try obtaining the program, treating the form value as a program UUID or title.\n\n Returns:\n dict: Program information if the program is found"} {"_id": "q_7068", "text": "Clean the notify_on_enrollment field."} {"_id": "q_7069", "text": "Verify that the selected mode is valid for the given course."} {"_id": "q_7070", "text": "Verify that the selected mode is available for the program and all courses in the program"} 
{"_id": "q_7071", "text": "Retrieve a list of catalog ID and name pairs.\n\n Once retrieved, these name pairs can be used directly as a value\n for the `choices` argument to a ChoiceField."} {"_id": "q_7072", "text": "Clean form fields prior to database entry.\n\n In this case, the major cleaning operation is substituting a None value for a blank\n value in the Catalog field."} {"_id": "q_7073", "text": "Final validations of model fields.\n\n 1. Validate that selected site for enterprise customer matches with the selected identity provider's site."} {"_id": "q_7074", "text": "Ensure that all necessary resources to render the view are present."} {"_id": "q_7075", "text": "Get the set of variables that are needed by default across views."} {"_id": "q_7076", "text": "Return a dict having course or program specific keys for data sharing consent page."} {"_id": "q_7077", "text": "Process the above form."} {"_id": "q_7078", "text": "Handle the enrollment of enterprise learner in the provided course.\n\n Based on `enterprise_uuid` in URL, the view will decide which\n enterprise customer's course enrollment record should be created.\n\n Depending on the value of query parameter `course_mode` then learner\n will be either redirected to LMS dashboard for audit modes or\n redirected to ecommerce basket flow for payment of premium modes."} {"_id": "q_7079", "text": "Set the final discounted price on each premium mode."} {"_id": "q_7080", "text": "Return the available course modes for the course run.\n\n The provided EnterpriseCustomerCatalog is used to filter and order the\n course modes returned using the EnterpriseCustomerCatalog's\n field \"enabled_course_modes\"."} {"_id": "q_7081", "text": "Extend a course with more details needed for the program landing page.\n\n In particular, we add the following:\n\n * `course_image_uri`\n * `course_title`\n * `course_level_type`\n * `course_short_description`\n * `course_full_description`\n * `course_effort`\n * 
`expected_learning_items`\n * `staff`"} {"_id": "q_7082", "text": "The user is requesting a course; we need to translate that into the current course run.\n\n :param user:\n :param enterprise_customer:\n :param course_key:\n :return: course_run_id"} {"_id": "q_7083", "text": "Return whether a request is eligible for direct audit enrollment for a particular enterprise customer.\n\n 'resource_id' can be either course_run_id or program_uuid.\n We check for the following criteria:\n - The `audit` query parameter.\n - The user's being routed to the course enrollment landing page.\n - The customer's catalog contains the course in question.\n - The audit track is an available mode for the course."} {"_id": "q_7084", "text": "Redirects to the appropriate view depending on where the user came from."} {"_id": "q_7085", "text": "Run some custom GET logic for Enterprise workflows before routing the user through existing views.\n\n In particular, before routing to existing views:\n - If the requested resource is a course, find the current course run for that course,\n and make that course run the requested resource instead.\n - Look to see whether a request is eligible for direct audit enrollment, and if so, directly enroll the user."} {"_id": "q_7086", "text": "Run some custom POST logic for Enterprise workflows before routing the user through existing views."} {"_id": "q_7087", "text": "Task to send learner data to each linked integrated channel.\n\n Arguments:\n username (str): The username of the User to be used for making API requests for learner data.\n channel_code (str): Capitalized identifier for the integrated channel\n channel_pk (str): Primary key for identifying integrated channel"} {"_id": "q_7088", "text": "Task to unlink inactive learners of provided integrated channel.\n\n Arguments:\n channel_code (str): Capitalized identifier for the integrated channel\n channel_pk (str): Primary key for identifying integrated channel"} {"_id": "q_7089", "text": "Handle User 
model changes - checks if a pending enterprise customer user record exists and upgrades it to an actual link.\n\n If there are pending enrollments attached to the PendingEnterpriseCustomerUser, then this signal also takes the\n newly-created users and enrolls them in the relevant courses."} {"_id": "q_7090", "text": "Set default value for `EnterpriseCustomerCatalog.content_filter` if not already set."} {"_id": "q_7091", "text": "Assign an enterprise learner role to EnterpriseCustomerUser whenever a new record is created."} {"_id": "q_7092", "text": "Ensure at least one of the specified query parameters is included in the request.\n\n This decorator checks for the existence of at least one of the specified query\n parameters and passes the values as function parameters to the decorated view.\n If none of the specified query parameters are included in the request, a\n ValidationError is raised.\n\n Usage::\n @require_at_least_one_query_parameter('program_uuids', 'course_run_ids')\n def my_view(request, program_uuids, course_run_ids):\n # Some functionality ..."} {"_id": "q_7093", "text": "Assigns enterprise role to users."} {"_id": "q_7094", "text": "Entry point for management command execution."} {"_id": "q_7095", "text": "Perform the linking of the user to the Enterprise Customer in the process of logging in.\n\n Args:\n backend: The class handling the SSO interaction (SAML, OAuth, etc)\n user: The user object in the process of being logged in with\n **kwargs: Any remaining pipeline variables"} {"_id": "q_7096", "text": "Find the LMS user from the LMS model `UserSocialAuth`.\n\n Arguments:\n tpa_provider (third_party_auth.provider): third party auth provider object\n tpa_username (str): Username returned by the third party auth"} {"_id": "q_7097", "text": "Instantiate a new session object for use in connecting with SAP SuccessFactors"} {"_id": "q_7098", "text": "Make a POST request using the session object to a SuccessFactors endpoint.\n\n Args:\n url (str): The url to post 
to.\n payload (str): The JSON encoded payload to post."} {"_id": "q_7099", "text": "Make recursive GET calls to traverse the paginated API response for search students."} {"_id": "q_7100", "text": "Filter only for the user's ID if non-staff."} {"_id": "q_7101", "text": "Send a completion status call to the integrated channel using the client.\n\n Args:\n payload: The learner completion data payload to send to the integrated channel.\n kwargs: Contains integrated channel-specific information for customized transmission variables.\n - app_label: The app label of the integrated channel for which to store learner data records.\n - model_name: The name of the specific learner data record model to use.\n - remote_user_id: The remote ID field name of the learner on the audit model."} {"_id": "q_7102", "text": "Validate a particular image extension."} {"_id": "q_7103", "text": "Get the enterprise customer id given an enterprise customer catalog id."} {"_id": "q_7104", "text": "Run sphinx-apidoc after Sphinx initialization.\n\n Read the Docs won't run tox or custom shell commands, so we need this to\n avoid checking in the generated reStructuredText files."} {"_id": "q_7105", "text": "Returns the enterprise customer requested for the given uuid, or None if it does not exist.\n\n Raises CommandError if the uuid is invalid."} {"_id": "q_7106", "text": "Assemble a list of integrated channel classes to transmit to.\n\n If a valid channel type was provided, use it.\n\n Otherwise, use all the available channel types."} {"_id": "q_7107", "text": "Get the contents of a file listing the requirements"} {"_id": "q_7108", "text": "Iterate over each learner data record and transmit it to the integrated channel."} {"_id": "q_7109", "text": "Transmit content metadata to the integrated channel."} {"_id": "q_7110", "text": "Return a DegreedLearnerDataTransmissionAudit with the given enrollment and course completion data.\n\n If completed_date is None, then course completion has not been met.\n\n If no 
remote ID can be found, return None."} {"_id": "q_7111", "text": "Render the given template with the stock data."} {"_id": "q_7112", "text": "Build common admin context."} {"_id": "q_7113", "text": "Handle GET request - render \"Transmit courses metadata\" form.\n\n Arguments:\n request (django.http.request.HttpRequest): Request instance\n enterprise_customer_uuid (str): Enterprise Customer UUID\n\n Returns:\n django.http.response.HttpResponse: HttpResponse"} {"_id": "q_7114", "text": "Get the list of PendingEnterpriseCustomerUsers we want to render.\n\n Args:\n search_keyword (str): The keyword to search for in pending users' email addresses.\n customer_uuid (str): A unique identifier to filter down to only pending users\n linked to a particular EnterpriseCustomer."} {"_id": "q_7115", "text": "Link single user by email or username.\n\n Arguments:\n enterprise_customer (EnterpriseCustomer): learners will be linked to this Enterprise Customer instance\n manage_learners_form (ManageLearnersForm): bound ManageLearners form instance"} {"_id": "q_7116", "text": "Bulk link users by email.\n\n Arguments:\n enterprise_customer (EnterpriseCustomer): learners will be linked to this Enterprise Customer instance\n manage_learners_form (ManageLearnersForm): bound ManageLearners form instance\n request (django.http.request.HttpRequest): HTTP Request instance\n email_list (iterable): A list of pre-processed email addresses to handle using the form"} {"_id": "q_7117", "text": "Query the enrollment API and determine if a learner is enrolled in a given course run track.\n\n Args:\n user: The user whose enrollment needs to be checked\n course_mode: The mode with which the enrollment should be checked\n course_id: course id of the course where enrollment should be checked.\n\n Returns:\n Boolean: Whether or not enrollment exists"} {"_id": "q_7118", "text": "Accept a list of emails, and separate them into users that exist on OpenEdX and users who don't.\n\n Args:\n emails: An iterable 
of email addresses to split between existing and nonexisting\n\n Returns:\n users: Queryset of users who exist in the OpenEdX platform and who were in the list of email addresses\n missing_emails: List of unique emails which were in the original list, but do not yet exist as users"} {"_id": "q_7119", "text": "Enroll existing users in all courses in a program, and create pending enrollments for nonexisting users.\n\n Args:\n enterprise_customer: The EnterpriseCustomer which is sponsoring the enrollment\n program_details: The details of the program in which we're enrolling\n course_mode (str): The mode with which we're enrolling in the program\n emails: An iterable of email addresses which need to be enrolled\n\n Returns:\n successes: A list of users who were successfully enrolled in all courses of the program\n pending: A list of PendingEnterpriseCustomerUsers who were successfully linked and had\n pending enrollments created for them in the database\n failures: A list of users who could not be enrolled in the program"} {"_id": "q_7120", "text": "Enroll existing users in a course, and create a pending enrollment for nonexisting users.\n\n Args:\n enterprise_customer: The EnterpriseCustomer which is sponsoring the enrollment\n course_id (str): The unique identifier of the course in which we're enrolling\n course_mode (str): The mode with which we're enrolling in the course\n emails: An iterable of email addresses which need to be enrolled\n\n Returns:\n successes: A list of users who were successfully enrolled in the course\n pending: A list of PendingEnterpriseCustomerUsers who were successfully linked and had\n pending enrollments created for them in the database\n failures: A list of users who could not be enrolled in the course"} {"_id": "q_7121", "text": "Deduplicate any outgoing message requests, and send the remainder.\n\n Args:\n http_request: The HTTP request in whose response we want to embed the messages\n message_requests: A list of undeduplicated 
messages in the form of tuples of message type\n and text - for example, ('error', 'Something went wrong')"} {"_id": "q_7122", "text": "Create a message for the users who were not able to be enrolled in a course or program.\n\n Args:\n users: An iterable of users who were not successfully enrolled\n enrolled_in (str): A string identifier for the course or program with which enrollment was attempted\n\n Returns:\n tuple: A 2-tuple containing a message type and message text"} {"_id": "q_7123", "text": "Enroll the users with the given email addresses to the courses specified, either specifically or by program.\n\n Args:\n cls (type): The EnterpriseCustomerManageLearnersView class itself\n request: The HTTP request the enrollment is being created by\n enterprise_customer: The instance of EnterpriseCustomer whose attached users we're enrolling\n emails: An iterable of strings containing email addresses to enroll in a course\n mode: The enrollment mode the users will be enrolled in the course with\n course_id: The ID of the course in which we want to enroll\n program_details: Details about a program in which we want to enroll\n notify: Whether to notify (by email) the users that have been enrolled"} {"_id": "q_7124", "text": "Handle DELETE request - handle unlinking learner.\n\n Arguments:\n request (django.http.request.HttpRequest): Request instance\n customer_uuid (str): Enterprise Customer UUID\n\n Returns:\n django.http.response.HttpResponse: HttpResponse"} {"_id": "q_7125", "text": "Build a ProxyDataSharingConsent using the details of the received consent records."} {"_id": "q_7126", "text": "Commit a real ``DataSharingConsent`` object to the database, mirroring current field settings.\n\n :return: A ``DataSharingConsent`` object if validation is successful, otherwise ``None``."} {"_id": "q_7127", "text": "Get course completions via PersistentCourseGrade for all the learners of given enterprise customer.\n\n Arguments:\n enterprise_customer (EnterpriseCustomer): 
Include course enrollments for learners\n of this enterprise customer.\n days (int): Include course enrollments from this number of days.\n\n Returns:\n (list): A list of PersistentCourseGrade objects."} {"_id": "q_7128", "text": "Prefetch Users from the list of user_ids present in the persistent_course_grades.\n\n Arguments:\n persistent_course_grades (list): A list of PersistentCourseGrade.\n\n Returns:\n (dict): A dictionary containing user_id to user mapping."} {"_id": "q_7129", "text": "Get the Identity Provider with the given id.\n\n Return:\n Instance of ProviderConfig or None."} {"_id": "q_7130", "text": "Get template of catalog admin url.\n\n URL template will contain a placeholder '{catalog_id}' for catalog id.\n Arguments:\n mode e.g. change/add.\n\n Returns:\n A string containing template for catalog url.\n\n Example:\n >>> get_catalog_admin_url_template('change')\n \"http://localhost:18381/admin/catalogs/catalog/{catalog_id}/change/\""} {"_id": "q_7131", "text": "Create HTML and plaintext message bodies for a notification.\n\n We receive a context with data we can use to render, as well as an optional site\n template configuration - if we don't get a template configuration, we'll use the\n standard, built-in template.\n\n Arguments:\n template_context (dict): A set of data to render\n template_configuration: A database-backed object with templates\n stored that can be used to render a notification."} {"_id": "q_7132", "text": "Get a subject line for a notification email.\n\n The method is designed to fail in a \"smart\" way; if we can't render a\n database-backed subject line template, then we'll fall back to a template\n saved in the Django settings; if we can't render _that_ one, then we'll\n fall through to a friendly string written into the code.\n\n One example of a failure case in which we want to fall back to a stock template\n would be if a site admin entered a subject line string that contained a template\n tag that wasn't available, causing a KeyError to 
be raised.\n\n Arguments:\n course_name (str): Course name to be rendered into the string\n template_configuration: A database-backed object with a stored subject line template"} {"_id": "q_7133", "text": "Send an email notifying a user about their enrollment in a course.\n\n Arguments:\n user: Either a User object or a PendingEnterpriseCustomerUser that we can use\n to get details for the email\n enrolled_in (dict): The dictionary contains details of the enrollable object\n (either course or program) that the user enrolled in. This MUST contain\n a `name` key, and MAY contain the other following keys:\n - url: A human-friendly link to the enrollable's home page\n - type: Either `course` or `program` at present\n - branding: A special name for what the enrollable \"is\"; for example,\n \"MicroMasters\" would be the branding for a \"MicroMasters Program\"\n - start: A datetime object indicating when the enrollable will be available.\n enterprise_customer: The EnterpriseCustomer that the enrollment was created using.\n email_connection: An existing Django email connection that can be used without\n creating a new connection for each individual message"} {"_id": "q_7134", "text": "Get the ``EnterpriseCustomer`` instance associated with ``uuid``.\n\n :param uuid: The universally unique ID of the enterprise customer.\n :return: The ``EnterpriseCustomer`` instance, or ``None`` if it doesn't exist."} {"_id": "q_7135", "text": "Return track selection url for the given course.\n\n Arguments:\n course_run (dict): A dictionary containing course run metadata.\n query_parameters (dict): A dictionary containing query parameters to be added to course selection url.\n\n Raises:\n (KeyError): Raised when course run dict does not have 'key' key.\n\n Returns:\n (str): Course track selection url."} {"_id": "q_7136", "text": "Given an EnterpriseCustomer UUID, return the corresponding EnterpriseCustomer or raise a 404.\n\n Arguments:\n enterprise_uuid (str): The UUID (in string form) of 
the EnterpriseCustomer to fetch.\n\n Returns:\n (EnterpriseCustomer): The EnterpriseCustomer given the UUID."} {"_id": "q_7137", "text": "Get MD5 encoded cache key for given arguments.\n\n Here is the format of key before MD5 hashing.\n key1:value1__key2:value2 ...\n\n Example:\n >>> get_cache_key(site_domain=\"example.com\", resource=\"enterprise\")\n # Here is key format for above call\n # \"site_domain:example.com__resource:enterprise\"\n a54349175618ff1659dee0978e3149ca\n\n Arguments:\n **kwargs: Keyword arguments that need to be present in cache key.\n\n Returns:\n An MD5 encoded key uniquely identifying the keyword arguments."} {"_id": "q_7138", "text": "Traverse a paginated API response.\n\n Extracts and concatenates \"results\" (list of dict) returned by DRF-powered\n APIs.\n\n Arguments:\n response (Dict): Current response dict from service API\n endpoint (slumber Resource object): slumber Resource object from edx-rest-api-client\n\n Returns:\n list of dict."} {"_id": "q_7139", "text": "Return grammatically correct, translated text based on a minimum and maximum value.\n\n Example:\n min = 1, max = 1, singular = '{} hour required for this course', plural = '{} hours required for this course'\n output = '1 hour required for this course'\n\n min = 2, max = 2, singular = '{} hour required for this course', plural = '{} hours required for this course'\n output = '2 hours required for this course'\n\n min = 2, max = 4, range_text = '{}-{} hours required for this course'\n output = '2-4 hours required for this course'\n\n min = None, max = 2, plural = '{} hours required for this course'\n output = '2 hours required for this course'\n\n Expects ``range_text`` to already have a translation function called on it.\n\n Returns:\n ``None`` if both of the input values are ``None``.\n ``singular`` formatted if both are equal or one of the inputs, but not both, are ``None``, and the value is 1.\n ``plural`` formatted if both are equal or one of the inputs, but 
not both, are ``None``, and the value is > 1.\n ``range_text`` formatted if min != max and both are valid values."} {"_id": "q_7140", "text": "Format the price to have the appropriate currency and digits.\n\n :param price: The price amount.\n :param currency: The currency for the price.\n :return: A formatted price string, e.g. '$10', '$10.52'."} {"_id": "q_7141", "text": "Get the site configuration value for a key, unless a site configuration does not exist for that site.\n\n Useful for testing when no Site Configuration exists in edx-enterprise or if a site in LMS doesn't have\n a configuration tied to it.\n\n :param site: A Site model object\n :param key: The name of the value to retrieve\n :param default: The default response if there's no key in site config or settings\n :return: The value located at that key in the site configuration or settings file."} {"_id": "q_7142", "text": "Get a configuration value, or fall back to ``default`` if it doesn't exist.\n\n Also takes a `type` argument to guide which particular upstream method to use when trying to retrieve a value.\n Current types include:\n - `url` to specifically get a URL."} {"_id": "q_7143", "text": "Emit a track event for enterprise course enrollment."} {"_id": "q_7144", "text": "Return true if the course run is enrollable, false otherwise.\n\n We look for the following criteria:\n - end is greater than now OR null\n - enrollment_start is less than now OR null\n - enrollment_end is greater than now OR null"} {"_id": "q_7145", "text": "Return true if the course run has a verified seat with an unexpired upgrade deadline, false otherwise."} {"_id": "q_7146", "text": "Return course run with start date closest to now."} {"_id": "q_7147", "text": "Return the current course run based on the following conditions.\n\n - If user has active course runs (already enrolled) then return course run with closest start date\n Otherwise it will check the following logic:\n - Course run is enrollable (see 
is_course_run_enrollable)\n - Course run has a verified seat and the upgrade deadline has not expired.\n - Course run start date is closer to now than any other enrollable/upgradeable course runs.\n - If no enrollable/upgradeable course runs, return course run with most recent start date."} {"_id": "q_7148", "text": "LRS client instance to be used for sending statements."} {"_id": "q_7149", "text": "Save xAPI statement.\n\n Arguments:\n statement (EnterpriseStatement): xAPI Statement to send to the LRS.\n\n Raises:\n ClientError: If xAPI statement fails to save."} {"_id": "q_7150", "text": "Check whether the request user has implicit access to the `ENTERPRISE_DASHBOARD_ADMIN_ROLE` feature role.\n\n Returns:\n boolean: whether the request user has access or not"} {"_id": "q_7151", "text": "Check whether the request user has implicit access to the `ENTERPRISE_CATALOG_ADMIN_ROLE` feature role.\n\n Returns:\n boolean: whether the request user has access or not"} {"_id": "q_7152", "text": "Check whether the request user has implicit access to the `ENTERPRISE_ENROLLMENT_API_ADMIN_ROLE` feature role.\n\n Returns:\n boolean: whether the request user has access or not"} {"_id": "q_7153", "text": "Instance is EnterpriseCustomer. 
Return e-commerce coupon urls."} {"_id": "q_7154", "text": "Return an export csv action.\n\n Arguments:\n description (string): action description\n fields ([string]): list of model fields to include\n header (bool): whether or not to output the column names as the first row"} {"_id": "q_7155", "text": "Return the action method to clear the catalog ID for a EnterpriseCustomer."} {"_id": "q_7156", "text": "Get information about maps of the robots.\n\n :return:"} {"_id": "q_7157", "text": "Get information about robots connected to account.\n\n :return:"} {"_id": "q_7158", "text": "Get information about persistent maps of the robots.\n\n :return:"} {"_id": "q_7159", "text": "Calculates the distance between two points on earth."} {"_id": "q_7160", "text": "Takes a graph and returns an adjacency list.\n\n Parameters\n ----------\n g : :any:`networkx.DiGraph`, :any:`networkx.Graph`, etc.\n Any object that networkx can turn into a\n :any:`DiGraph`.\n return_dict_of_dict : bool (optional, default: ``True``)\n Specifies whether this function will return a dict of dicts\n or a dict of lists.\n\n Returns\n -------\n adj : dict\n An adjacency representation of graph as a dictionary of\n dictionaries, where a key is the vertex index for a vertex\n ``v`` and the values are :class:`dicts<.dict>` with keys for\n the vertex index and values as edge properties.\n\n Examples\n --------\n >>> import queueing_tool as qt\n >>> import networkx as nx\n >>> adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}\n >>> g = nx.DiGraph(adj)\n >>> qt.graph2dict(g, return_dict_of_dict=True)\n ... 
# doctest: +NORMALIZE_WHITESPACE\n {0: {1: {}, 2: {}},\n 1: {0: {}},\n 2: {0: {}, 3: {}},\n 3: {2: {}}}\n >>> qt.graph2dict(g, return_dict_of_dict=False)\n {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}"} {"_id": "q_7161", "text": "Takes a dictionary based representation of an adjacency list\n and returns a dict of dicts based representation."} {"_id": "q_7162", "text": "Takes an adjacency list, dict, or matrix and returns a graph.\n\n The purpose of this function is to take an adjacency list (or matrix)\n and return a :class:`.QueueNetworkDiGraph` that can be used with a\n :class:`.QueueNetwork` instance. The graph returned has the\n ``edge_type`` edge property set for each edge. Note that the graph may\n be altered.\n\n Parameters\n ----------\n adjacency : dict or :class:`~numpy.ndarray`\n An adjacency list as either a dict, or an adjacency matrix.\n adjust : int ``{1, 2}`` (optional, default: 1)\n Specifies what to do when the graph has terminal vertices\n (nodes with no out-edges). Note that if ``adjust`` is not 2\n then it is assumed to be 1. There are two choices:\n\n * ``adjust = 1``: A loop is added to each terminal node in the\n graph, and the ``edge_type`` of that loop is set to 0.\n * ``adjust = 2``: All edges leading to terminal nodes have\n their ``edge_type`` set to 0.\n\n **kwargs :\n Unused.\n\n Returns\n -------\n out : :any:`networkx.DiGraph`\n A directed graph with the ``edge_type`` edge property.\n\n Raises\n ------\n TypeError\n Is raised if ``adjacency`` is not a dict or\n :class:`~numpy.ndarray`.\n\n Examples\n --------\n If terminal nodes are such that all in-edges have edge type ``0``\n then nothing is changed. However, if a node is a terminal node then\n a loop is added with edge type 0.\n\n >>> import queueing_tool as qt\n >>> adj = {\n ... 0: {1: {}},\n ... 1: {2: {},\n ... 3: {}},\n ... 
3: {0: {}}}\n >>> eTy = {0: {1: 1}, 1: {2: 2, 3: 4}, 3: {0: 1}}\n >>> # A loop will be added to vertex 2\n >>> g = qt.adjacency2graph(adj, edge_type=eTy)\n >>> ans = qt.graph2dict(g)\n >>> sorted(ans.items()) # doctest: +NORMALIZE_WHITESPACE\n [(0, {1: {'edge_type': 1}}),\n (1, {2: {'edge_type': 2}, 3: {'edge_type': 4}}), \n (2, {2: {'edge_type': 0}}),\n (3, {0: {'edge_type': 1}})]\n\n You can use a dict of lists to represent the adjacency list.\n\n >>> adj = {0 : [1], 1: [2, 3], 3: [0]}\n >>> g = qt.adjacency2graph(adj, edge_type=eTy)\n >>> ans = qt.graph2dict(g)\n >>> sorted(ans.items()) # doctest: +NORMALIZE_WHITESPACE\n [(0, {1: {'edge_type': 1}}),\n (1, {2: {'edge_type': 2}, 3: {'edge_type': 4}}),\n (2, {2: {'edge_type': 0}}),\n (3, {0: {'edge_type': 1}})]\n\n Alternatively, you could have this function adjust the edges that\n lead to terminal vertices by changing their edge type to 0:\n\n >>> # The graph is unaltered\n >>> g = qt.adjacency2graph(adj, edge_type=eTy, adjust=2)\n >>> ans = qt.graph2dict(g)\n >>> sorted(ans.items()) # doctest: +NORMALIZE_WHITESPACE\n [(0, {1: {'edge_type': 1}}),\n (1, {2: {'edge_type': 0}, 3: {'edge_type': 4}}),\n (2, {}),\n (3, {0: {'edge_type': 1}})]"} {"_id": "q_7163", "text": "Returns all edges with the specified edge type.\n\n Parameters\n ----------\n edge_type : int\n An integer specifying what type of edges to return.\n\n Returns\n -------\n out : list of 2-tuples\n A list of 2-tuples representing the edges in the graph\n with the specified edge type.\n\n Examples\n --------\n Lets get type 2 edges from the following graph\n\n >>> import queueing_tool as qt\n >>> adjacency = {\n ... 0: {1: {'edge_type': 2}},\n ... 1: {2: {'edge_type': 1},\n ... 3: {'edge_type': 4}},\n ... 2: {0: {'edge_type': 2}},\n ... 3: {3: {'edge_type': 0}}\n ... 
}\n >>> G = qt.QueueNetworkDiGraph(adjacency)\n >>> ans = G.get_edge_type(2)\n >>> ans.sort()\n >>> ans\n [(0, 1), (2, 0)]"} {"_id": "q_7164", "text": "Returns the arguments used when plotting.\n\n Takes any keyword arguments for\n :class:`~matplotlib.collections.LineCollection` and\n :meth:`~matplotlib.axes.Axes.scatter` and returns two\n dictionaries with all the defaults set.\n\n Parameters\n ----------\n line_kwargs : dict (optional, default: ``None``)\n Any keyword arguments accepted by\n :class:`~matplotlib.collections.LineCollection`.\n scatter_kwargs : dict (optional, default: ``None``)\n Any keyword arguments accepted by\n :meth:`~matplotlib.axes.Axes.scatter`.\n\n Returns\n -------\n tuple\n A 2-tuple of dicts. The first entry is the keyword\n arguments for\n :class:`~matplotlib.collections.LineCollection` and the\n second is the keyword args for\n :meth:`~matplotlib.axes.Axes.scatter`.\n\n Notes\n -----\n If a specific keyword argument is not passed then the defaults\n are used."} {"_id": "q_7165", "text": "A function that returns the arrival time of the next arrival for\n a Poisson random measure.\n\n Parameters\n ----------\n t : float\n The start time from which to simulate the next arrival time.\n rate : function\n The *intensity function* for the measure, where ``rate(t)`` is\n the expected arrival rate at time ``t``.\n rate_max : float\n The maximum value of the ``rate`` function.\n\n Returns\n -------\n out : float\n The time of the next arrival.\n\n Notes\n -----\n This function returns the time of the next arrival, where the\n distribution of the number of arrivals between times :math:`t` and\n :math:`t+s` is Poisson with mean\n\n .. math::\n\n \\int_{t}^{t+s} dx \\, r(x)\n\n where :math:`r(t)` is the supplied ``rate`` function. 
This function\n can only simulate processes that have bounded intensity functions.\n See chapter 6 of [3]_ for more on the mathematics behind Poisson\n random measures; the book's publisher, Springer, has that chapter\n available online for free at (`pdf`_\\).\n\n A Poisson random measure is sometimes called a non-homogeneous\n Poisson process. A Poisson process is a special type of Poisson\n random measure.\n\n .. _pdf: http://www.springer.com/cda/content/document/\\\n cda_downloaddocument/9780387878584-c1.pdf\n\n Examples\n --------\n Suppose you wanted to model the arrival process as a Poisson\n random measure with rate function :math:`r(t) = 2 + \\sin( 2\\pi t)`.\n Then you could do so as follows:\n\n >>> import queueing_tool as qt\n >>> import numpy as np\n >>> np.random.seed(10)\n >>> rate = lambda t: 2 + np.sin(2 * np.pi * t)\n >>> arr_f = lambda t: qt.poisson_random_measure(t, rate, 3)\n >>> arr_f(1) # doctest: +ELLIPSIS\n 1.491...\n\n References\n ----------\n .. [3] Cinlar, Erhan. *Probability and stochastics*. Graduate Texts in\\\n Mathematics. Vol. 261. Springer, New York, 2011.\\\n :doi:`10.1007/978-0-387-87859-1`"} {"_id": "q_7166", "text": "Returns a color for the queue.\n\n Parameters\n ----------\n which : int (optional, default: ``0``)\n Specifies the type of color to return.\n\n Returns\n -------\n color : list\n Returns a RGBA color that is represented as a list with 4\n entries where each entry can be any floating point number\n between 0 and 1.\n\n * If ``which`` is 1 then it returns the color of the edge\n as if it were a self loop. This is specified in\n ``colors['edge_loop_color']``.\n * If ``which`` is 2 then it returns the color of the vertex\n pen color (defined as color/vertex_color in\n :meth:`.QueueNetworkDiGraph.graph_draw`). 
This is\n specified in ``colors['vertex_color']``.\n * If ``which`` is anything else, then it returns a\n shade of the edge that is proportional to the number of\n agents in the system -- which includes those being\n served and those waiting to be served. More agents\n correspond to darker edge colors. Uses\n ``colors['vertex_fill_color']`` if the queue sits on a\n loop, and ``colors['edge_color']`` otherwise."} {"_id": "q_7167", "text": "Returns an integer representing whether the next event is\n an arrival, a departure, or nothing.\n\n Returns\n -------\n out : int\n An integer representing whether the next event is an\n arrival or a departure: ``1`` corresponds to an arrival,\n ``2`` corresponds to a departure, and ``0`` corresponds to\n nothing scheduled to occur."} {"_id": "q_7168", "text": "Resets the queue to its initial state.\n\n The attributes ``t``, ``num_events``, ``num_agents`` are set to\n zero, :meth:`.reset_colors` is called, and the\n :meth:`.QueueServer.clear` method is called for each queue in\n the network.\n\n Notes\n -----\n ``QueueNetwork`` must be re-initialized before any simulations\n can run."} {"_id": "q_7169", "text": "Clears data from all queues.\n\n If none of the parameters are given then every queue's data is\n cleared.\n\n Parameters\n ----------\n queues : int or an iterable of int (optional)\n The edge index (or an iterable of edge indices) identifying\n the :class:`QueueServer(s)<.QueueServer>` whose data will\n be cleared.\n edge : 2-tuple of int or *array_like* (optional)\n Explicitly specify which queues' data to clear. 
Must be\n either:\n\n * A 2-tuple of the edge's source and target vertex\n indices, or\n * An iterable of 2-tuples of the edge's source and\n target vertex indices.\n\n edge_type : int or an iterable of int (optional)\n An integer, or a collection of integers identifying which\n edge types will have their data cleared."} {"_id": "q_7170", "text": "Returns a deep copy of itself."} {"_id": "q_7171", "text": "Draws the network. The coloring of the network corresponds\n to the number of agents at each queue.\n\n Parameters\n ----------\n update_colors : ``bool`` (optional, default: ``True``).\n Specifies whether all the colors are updated.\n line_kwargs : dict (optional, default: None)\n Any keyword arguments accepted by\n :class:`~matplotlib.collections.LineCollection`\n scatter_kwargs : dict (optional, default: None)\n Any keyword arguments accepted by\n :meth:`~matplotlib.axes.Axes.scatter`.\n bgcolor : list (optional, keyword only)\n A list with 4 floats representing a RGBA color. The\n default is defined in ``self.colors['bgcolor']``.\n figsize : tuple (optional, keyword only, default: ``(7, 7)``)\n The width and height of the canvas in inches.\n **kwargs\n Any parameters to pass to\n :meth:`.QueueNetworkDiGraph.draw_graph`.\n\n Notes\n -----\n This method relies heavily on\n :meth:`.QueueNetworkDiGraph.draw_graph`. Also, there is a\n parameter that sets the background color of the canvas, which\n is the ``bgcolor`` parameter.\n\n Examples\n --------\n To draw the current state of the network, call:\n\n >>> import queueing_tool as qt\n >>> g = qt.generate_pagerank_graph(100, seed=13)\n >>> net = qt.QueueNetwork(g, seed=13)\n >>> net.initialize(100)\n >>> net.simulate(1200)\n >>> net.draw() # doctest: +SKIP\n\n If you specify a file name and location, the drawing will be\n saved to disk. For example, to save the drawing to the current\n working directory do the following:\n\n >>> net.draw(fname=\"state.png\", scatter_kwargs={'s': 40}) # doctest: +SKIP\n\n .. 
figure:: current_state1.png\n :align: center\n\n The shade of each edge depicts how many agents are located at\n the corresponding queue. The shade of each vertex is determined\n by the total number of inbound agents. Although loops are not\n visible by default, the vertex that corresponds to a loop shows\n how many agents are in that loop.\n\n There are several additional parameters that can be passed --\n all :meth:`.QueueNetworkDiGraph.draw_graph` parameters are\n valid. For example, to show the edges as dashed lines do the\n following.\n\n >>> net.draw(line_kwargs={'linestyle': 'dashed'}) # doctest: +SKIP"} {"_id": "q_7172", "text": "Gets data from queues and organizes it by agent.\n\n If none of the parameters are given then data from every\n :class:`.QueueServer` is retrieved.\n\n Parameters\n ----------\n queues : int or *array_like* (optional)\n The edge index (or an iterable of edge indices) identifying\n the :class:`QueueServer(s)<.QueueServer>` whose data will\n be retrieved.\n edge : 2-tuple of int or *array_like* (optional)\n Explicitly specify which queues to retrieve agent data\n from. Must be either:\n\n * A 2-tuple of the edge's source and target vertex\n indices, or\n * An iterable of 2-tuples of the edge's source and\n target vertex indices.\n\n edge_type : int or an iterable of int (optional)\n An integer, or a collection of integers identifying which\n edge types to retrieve agent data from.\n return_header : bool (optional, default: False)\n Determines whether the column headers are returned.\n\n Returns\n -------\n dict\n Returns a ``dict`` where the keys are the\n :class:`Agent's<.Agent>` ``agent_id`` and the values are\n :class:`ndarrays<~numpy.ndarray>` for that\n :class:`Agent's<.Agent>` data. 
The columns of this array\n are as follows:\n\n * First: The arrival time of an agent.\n * Second: The service start time of an agent.\n * Third: The departure time of an agent.\n * Fourth: The length of the queue upon the agent's arrival.\n * Fifth: The total number of :class:`Agents<.Agent>` in the\n :class:`.QueueServer`.\n * Sixth: The :class:`QueueServer's<.QueueServer>` id\n (its edge index).\n\n headers : str (optional)\n A comma separated string of the column headers. Returns\n ``'arrival,service,departure,num_queued,num_total,q_id'``"} {"_id": "q_7173", "text": "Prepares the ``QueueNetwork`` for simulation.\n\n Each :class:`.QueueServer` in the network starts inactive,\n which means they do not accept arrivals from outside the\n network, and they have no agents in their system. This method\n sets queues to active, which then allows agents to arrive from\n outside the network.\n\n Parameters\n ----------\n nActive : int (optional, default: ``1``)\n The number of queues to set as active. The queues are\n selected randomly.\n queues : int *array_like* (optional)\n The edge index (or an iterable of edge indices) identifying\n the :class:`QueueServer(s)<.QueueServer>` to make active.\n edges : 2-tuple of int or *array_like* (optional)\n Explicitly specify which queues to make active. 
Must be\n either:\n\n * A 2-tuple of the edge's source and target vertex\n indices, or\n * An iterable of 2-tuples of the edge's source and\n target vertex indices.\n\n edge_type : int or an iterable of int (optional)\n An integer, or a collection of integers identifying which\n edge types will be set active.\n\n Raises\n ------\n ValueError\n If ``queues``, ``edges``, and ``edge_type`` are all ``None``\n and ``nActive`` is an integer less than 1\n :exc:`~ValueError` is raised.\n TypeError\n If ``queues``, ``edges``, and ``edge_type`` are all ``None``\n and ``nActive`` is not an integer then a :exc:`~TypeError`\n is raised.\n QueueingToolError\n Raised if all the queues specified are\n :class:`NullQueues<.NullQueue>`.\n\n Notes\n -----\n :class:`NullQueues<.NullQueue>` cannot be activated, and are\n sifted out if they are specified. More specifically, every edge\n with edge type 0 is sifted out."} {"_id": "q_7174", "text": "Returns whether the next event is an arrival or a departure\n and the queue the event is occurring at.\n\n Returns\n -------\n des : str\n Indicates whether the next event is an arrival, a\n departure, or nothing; returns ``'Arrival'``,\n ``'Departure'``, or ``'Nothing'``.\n edge : int or ``None``\n The edge index of the edge that this event will occur at.\n If there are no events then ``None`` is returned."} {"_id": "q_7175", "text": "Change the routing transition probabilities for the\n network.\n\n Parameters\n ----------\n mat : dict or :class:`~numpy.ndarray`\n A transition routing matrix or transition dictionary. 
If\n passed a dictionary, the keys are source vertex indices and\n the values are dictionaries with target vertex indices\n as the keys and the probabilities of routing from the\n source to the target as the values.\n\n Raises\n ------\n ValueError\n A :exc:`.ValueError` is raised if: the keys in the dict\n don't match with a vertex index in the graph; or if the\n :class:`~numpy.ndarray` is passed with the wrong shape,\n must be (``num_vertices``, ``num_vertices``); or the values\n passed are not probabilities (for each vertex they are\n positive and sum to 1).\n TypeError\n A :exc:`.TypeError` is raised if mat is not a dict or\n :class:`~numpy.ndarray`.\n\n Examples\n --------\n The default transition matrix is every out edge being equally\n likely:\n\n >>> import queueing_tool as qt\n >>> adjacency = {\n ... 0: [2],\n ... 1: [2, 3],\n ... 2: [0, 1, 2, 4],\n ... 3: [1],\n ... 4: [2],\n ... }\n >>> g = qt.adjacency2graph(adjacency)\n >>> net = qt.QueueNetwork(g)\n >>> net.transitions(False) # doctest: +ELLIPSIS\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: {2: 1.0},\n 1: {2: 0.5, 3: 0.5},\n 2: {0: 0.25, 1: 0.25, 2: 0.25, 4: 0.25},\n 3: {1: 1.0},\n 4: {2: 1.0}}\n\n If you want to change only one vertex's transition\n probabilities, you can do so with the following:\n\n >>> net.set_transitions({1 : {2: 0.75, 3: 0.25}})\n >>> net.transitions(False) # doctest: +ELLIPSIS\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: {2: 1.0},\n 1: {2: 0.75, 3: 0.25},\n 2: {0: 0.25, 1: 0.25, 2: 0.25, 4: 0.25},\n 3: {1: 1.0},\n 4: {2: 1.0}}\n\n One can generate a transition matrix using\n :func:`.generate_transition_matrix`. You can change all\n transition probabilities with an :class:`~numpy.ndarray`:\n\n >>> mat = qt.generate_transition_matrix(g, seed=10)\n >>> net.set_transitions(mat)\n >>> net.transitions(False) # doctest: +ELLIPSIS\n ... 
# doctest: +NORMALIZE_WHITESPACE\n {0: {2: 1.0},\n 1: {2: 0.962..., 3: 0.037...},\n 2: {0: 0.301..., 1: 0.353..., 2: 0.235..., 4: 0.108...},\n 3: {1: 1.0},\n 4: {2: 1.0}}\n\n See Also\n --------\n :meth:`.transitions` : Return the current routing\n probabilities.\n :func:`.generate_transition_matrix` : Generate a random routing\n matrix."} {"_id": "q_7176", "text": "Draws the network, highlighting queues of a certain type.\n\n The colored vertices represent self loops of type ``edge_type``.\n Dark edges represent queues of type ``edge_type``.\n\n Parameters\n ----------\n edge_type : int\n The type of vertices and edges to be shown.\n **kwargs\n Any additional parameters to pass to :meth:`.draw`, and\n :meth:`.QueueNetworkDiGraph.draw_graph`\n\n Notes\n -----\n The colors are defined by the class attribute ``colors``. The\n relevant colors are ``vertex_active``, ``vertex_inactive``,\n ``vertex_highlight``, ``edge_active``, and ``edge_inactive``.\n\n Examples\n --------\n The following code highlights all edges with edge type ``2``.\n If the edge is a loop then the vertex is highlighted as well.\n In this case all edges with edge type ``2`` happen to be loops.\n\n >>> import queueing_tool as qt\n >>> g = qt.generate_pagerank_graph(100, seed=13)\n >>> net = qt.QueueNetwork(g, seed=13)\n >>> fname = 'edge_type_2.png'\n >>> net.show_type(2, fname=fname) # doctest: +SKIP\n\n .. figure:: edge_type_2-1.png\n :align: center"} {"_id": "q_7177", "text": "Simulates the network forward.\n\n Simulates either a specific number of events or for a specified\n amount of simulation time.\n\n Parameters\n ----------\n n : int (optional, default: 1)\n The number of events to simulate. If ``t`` is not given\n then this parameter is used.\n t : float (optional)\n The amount of simulation time to simulate forward. If\n given, ``t`` is used instead of ``n``.\n\n Raises\n ------\n QueueingToolError\n Will raise a :exc:`.QueueingToolError` if the\n ``QueueNetwork`` has not been initialized. 
Call\n :meth:`.initialize` before calling this method.\n\n Examples\n --------\n Let ``net`` denote your instance of a ``QueueNetwork``. Before\n you simulate, you need to initialize the network, which allows\n arrivals from outside the network. To initialize with 2 (randomly\n chosen) edges accepting arrivals run:\n\n >>> import queueing_tool as qt\n >>> g = qt.generate_pagerank_graph(100, seed=50)\n >>> net = qt.QueueNetwork(g, seed=50)\n >>> net.initialize(2)\n\n To simulate the network for 50000 events run:\n\n >>> net.num_events\n 0\n >>> net.simulate(50000)\n >>> net.num_events\n 50000\n\n To simulate the network for at least 75 simulation time units\n run:\n\n >>> t0 = net.current_time\n >>> net.simulate(t=75)\n >>> t1 = net.current_time\n >>> t1 - t0 # doctest: +ELLIPSIS\n 75..."} {"_id": "q_7178", "text": "Tells the queues to collect data on agents' arrival, service\n start, and departure times.\n\n If none of the parameters are given then every\n :class:`.QueueServer` will start collecting data.\n\n Parameters\n ----------\n queues : :any:`int`, *array_like* (optional)\n The edge index (or an iterable of edge indices) identifying\n the :class:`QueueServer(s)<.QueueServer>` that will start\n collecting data.\n edge : 2-tuple of int or *array_like* (optional)\n Explicitly specify which queues will collect data. 
Must be\n either:\n\n * A 2-tuple of the edge's source and target vertex\n indices, or\n * An iterable of 2-tuples of the edge's source and\n target vertex indices.\n\n edge_type : int or an iterable of int (optional)\n An integer, or a collection of integers identifying which\n edge types will be set active."} {"_id": "q_7179", "text": "Tells the queues to stop collecting data on agents.\n\n If none of the parameters are given then every\n :class:`.QueueServer` will stop collecting data.\n\n Parameters\n ----------\n queues : int, *array_like* (optional)\n The edge index (or an iterable of edge indices) identifying\n the :class:`QueueServer(s)<.QueueServer>` that will stop\n collecting data.\n edge : 2-tuple of int or *array_like* (optional)\n Explicitly specify which queues will stop collecting data.\n Must be either:\n\n * A 2-tuple of the edge's source and target vertex\n indices, or\n * An iterable of 2-tuples of the edge's source and\n target vertex indices.\n\n edge_type : int or an iterable of int (optional)\n An integer, or a collection of integers identifying which\n edge types will stop collecting data."} {"_id": "q_7180", "text": "Returns the routing probabilities for each vertex in the\n graph.\n\n Parameters\n ----------\n return_matrix : bool (optional, the default is ``True``)\n Specifies whether an :class:`~numpy.ndarray` is returned.\n If ``False``, a dict is returned instead.\n\n Returns\n -------\n out : a dict or :class:`~numpy.ndarray`\n The transition probabilities for each vertex in the graph.\n If ``out`` is an :class:`~numpy.ndarray`, then\n ``out[v, u]`` returns the probability of a transition from\n vertex ``v`` to vertex ``u``. 
If ``out`` is a dict\n then ``out[v][u]`` is the probability of moving from\n vertex ``v`` to the vertex ``u``.\n\n Examples\n --------\n Let's change the routing probabilities:\n\n >>> import queueing_tool as qt\n >>> import networkx as nx\n >>> g = nx.sedgewick_maze_graph()\n >>> net = qt.QueueNetwork(g)\n\n Below is an adjacency list for the graph ``g``.\n\n >>> ans = qt.graph2dict(g, False)\n >>> {k: sorted(v) for k, v in ans.items()}\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: [2, 5, 7],\n 1: [7],\n 2: [0, 6],\n 3: [4, 5],\n 4: [3, 5, 6, 7],\n 5: [0, 3, 4],\n 6: [2, 4],\n 7: [0, 1, 4]}\n\n The default transition matrix is every out edge being equally\n likely:\n\n >>> net.transitions(False) # doctest: +ELLIPSIS\n ... # doctest: +NORMALIZE_WHITESPACE\n {0: {2: 0.333..., 5: 0.333..., 7: 0.333...},\n 1: {7: 1.0},\n 2: {0: 0.5, 6: 0.5},\n 3: {4: 0.5, 5: 0.5},\n 4: {3: 0.25, 5: 0.25, 6: 0.25, 7: 0.25},\n 5: {0: 0.333..., 3: 0.333..., 4: 0.333...},\n 6: {2: 0.5, 4: 0.5},\n 7: {0: 0.333..., 1: 0.333..., 4: 0.333...}}\n\n Now we will generate a random routing matrix:\n\n >>> mat = qt.generate_transition_matrix(g, seed=96)\n >>> net.set_transitions(mat)\n >>> net.transitions(False) # doctest: +ELLIPSIS\n ... 
# doctest: +NORMALIZE_WHITESPACE\n {0: {2: 0.112..., 5: 0.466..., 7: 0.420...},\n 1: {7: 1.0},\n 2: {0: 0.561..., 6: 0.438...},\n 3: {4: 0.545..., 5: 0.454...},\n 4: {3: 0.374..., 5: 0.381..., 6: 0.026..., 7: 0.217...},\n 5: {0: 0.265..., 3: 0.460..., 4: 0.274...},\n 6: {2: 0.673..., 4: 0.326...},\n 7: {0: 0.033..., 1: 0.336..., 4: 0.630...}}\n\n What this shows is the following: when an :class:`.Agent` is at\n vertex ``2`` they will transition to vertex ``0`` with\n probability ``0.561`` and route to vertex ``6`` with probability\n ``0.438``; when at vertex ``6`` they will transition back to\n vertex ``2`` with probability ``0.673`` and route to vertex ``4``\n with probability ``0.326``, etc."} {"_id": "q_7181", "text": "Returns the number of elements in the set that ``s`` belongs to.\n\n Parameters\n ----------\n s : object\n An object\n\n Returns\n -------\n out : int\n The number of elements in the set that ``s`` belongs to."} {"_id": "q_7182", "text": "Locates the leader of the set to which the element ``s`` belongs.\n\n Parameters\n ----------\n s : object\n An object that the ``UnionFind`` contains.\n\n Returns\n -------\n object\n The leader of the set that contains ``s``."} {"_id": "q_7183", "text": "Merges the set that contains ``a`` with the set that contains ``b``.\n\n Parameters\n ----------\n a, b : objects\n Two objects whose sets are to be merged."} {"_id": "q_7184", "text": "Generates a random transition matrix for the graph ``g``.\n\n Parameters\n ----------\n g : :any:`networkx.DiGraph`, :class:`numpy.ndarray`, dict, etc.\n Any object that :any:`DiGraph` accepts.\n seed : int (optional)\n An integer used to initialize numpy's pseudo-random number\n generator.\n\n Returns\n -------\n mat : :class:`~numpy.ndarray`\n Returns a transition matrix where ``mat[i, j]`` is the\n probability of transitioning from vertex ``i`` to vertex ``j``.\n If there is no edge connecting vertex ``i`` to vertex ``j``\n then ``mat[i, j] = 0``."} {"_id": "q_7185", "text": "Creates a 
random graph where the vertex types are\n selected using their pagerank.\n\n Calls :func:`.minimal_random_graph` and then\n :func:`.set_types_rank` where the ``rank`` keyword argument\n is given by :func:`networkx.pagerank`.\n\n Parameters\n ----------\n num_vertices : int (optional, the default is 250)\n The number of vertices in the graph.\n **kwargs :\n Any parameters to send to :func:`.minimal_random_graph` or\n :func:`.set_types_rank`.\n\n Returns\n -------\n :class:`.QueueNetworkDiGraph`\n A graph with a ``pos`` vertex property and the ``edge_type``\n edge property.\n\n Notes\n -----\n This function sets the edge types of a graph to be either 1, 2, or\n 3. It sets the vertices to type 2 by selecting the top\n ``pType2 * g.number_of_nodes()`` vertices given by the\n :func:`~networkx.pagerank` of the graph. A loop is added\n to all vertices identified this way (if one does not exist\n already). It then randomly sets vertices close to the type 2\n vertices as type 3, and adds loops to these vertices as well. 
These\n loops then have edge types that correspond to the vertices' type.\n The rest of the edges are set to type 1."} {"_id": "q_7186", "text": "Yield all of the documentation for trait definitions on a class object."} {"_id": "q_7187", "text": "Add lines to the block."} {"_id": "q_7188", "text": "We are transitioning from a noncomment to a comment."} {"_id": "q_7189", "text": "Possibly add a new comment.\n\n Only adds a new comment if this comment is the only thing on the line.\n Otherwise, it extends the noncomment block."} {"_id": "q_7190", "text": "Make the index mapping lines of actual code to their associated\n prefix comments."} {"_id": "q_7191", "text": "Read complete DSMR telegrams from the serial interface and parse them\n into CosemObjects and MbusObjects\n\n :rtype: generator"} {"_id": "q_7192", "text": "Creates a DSMR asyncio protocol."} {"_id": "q_7193", "text": "Creates a DSMR asyncio protocol coroutine using serial port."} {"_id": "q_7194", "text": "Add incoming data to buffer."} {"_id": "q_7195", "text": "Send off parsed telegram to handling callback."} {"_id": "q_7196", "text": "Parse telegram from string to dict.\n\n The telegram str type makes python 2.x integration easier.\n\n :param str telegram_data: full telegram from start ('/') to checksum\n ('!ABCD') including line endings in between the telegram's lines\n :rtype: dict\n :returns: Shortened example:\n {\n ..\n r'\\d-\\d:96\\.1\\.1.+?\\r\\n': , # EQUIPMENT_IDENTIFIER\n r'\\d-\\d:1\\.8\\.1.+?\\r\\n': , # ELECTRICITY_USED_TARIFF_1\n r'\\d-\\d:24\\.3\\.0.+?\\r\\n.+?\\r\\n': , # GAS_METER_READING\n ..\n }\n :raises ParseError:\n :raises InvalidChecksumError:"} {"_id": "q_7197", "text": "Loads config from string or dict"} {"_id": "q_7198", "text": "Converts the configuration dictionary into the corresponding configuration format\n\n :param files: whether to include \"additional files\" in the output or not;\n defaults to ``True``\n :returns: string with output"} {"_id": "q_7199", "text": 
"Returns a ``BytesIO`` instance representing an in-memory tar.gz archive\n containing the native router configuration.\n\n :returns: in-memory tar.gz archive, instance of ``BytesIO``"} {"_id": "q_7200", "text": "Adds a single file in tarfile instance.\n\n :param tar: tarfile instance\n :param name: string representing filename or path\n :param contents: string representing file contents\n :param mode: string representing file mode, defaults to 644\n :returns: None"} {"_id": "q_7201", "text": "Parses a native configuration and converts\n it to a NetJSON configuration dictionary"} {"_id": "q_7202", "text": "Merges ``list2`` on top of ``list1``.\n\n If both lists contain dictionaries which have keys specified\n in ``identifiers`` which have equal values, those dicts will\n be merged (dicts in ``list2`` will override dicts in ``list1``).\n The remaining elements will be summed in order to create a list\n which contains elements of both lists.\n\n :param list1: ``list`` from template\n :param list2: ``list`` from config\n :param identifiers: ``list`` or ``None``\n :returns: merged ``list``"} {"_id": "q_7203", "text": "Evaluates variables in ``data``\n\n :param data: data structure containing variables, may be\n ``str``, ``dict`` or ``list``\n :param context: ``dict`` containing variables\n :returns: modified data structure"} {"_id": "q_7204", "text": "Looks for a key in a dictionary, if found returns\n a deepcopied value, otherwise returns default value"} {"_id": "q_7205", "text": "Loops over item and performs type casting\n according to supplied schema fragment"} {"_id": "q_7206", "text": "generates install.sh and adds it to included files"} {"_id": "q_7207", "text": "generates tc_script.sh and adds it to included files"} {"_id": "q_7208", "text": "Renders configuration by using the jinja2 templating engine"} {"_id": "q_7209", "text": "converts NetJSON address to\n UCI intermediate data structure"} {"_id": "q_7210", "text": "converts NetJSON interface to\n UCI 
intermediate data structure"} {"_id": "q_7211", "text": "deletes NetJSON address keys"} {"_id": "q_7212", "text": "converts NetJSON bridge to\n UCI intermediate data structure"} {"_id": "q_7213", "text": "determines UCI interface \"proto\" option"} {"_id": "q_7214", "text": "determines UCI interface \"dns\" option"} {"_id": "q_7215", "text": "only for mac80211 driver"} {"_id": "q_7216", "text": "determines NetJSON protocol radio attribute"} {"_id": "q_7217", "text": "Returns a configuration dictionary representing an OpenVPN client configuration\n that is compatible with the passed server configuration.\n\n :param host: remote VPN server\n :param server: dictionary representing a single OpenVPN server configuration\n :param ca_path: optional string representing path to CA, will consequently add\n a file in the resulting configuration dictionary\n :param ca_contents: optional string representing contents of CA file\n :param cert_path: optional string representing path to certificate, will consequently add\n a file in the resulting configuration dictionary\n :param cert_contents: optional string representing contents of cert file\n :param key_path: optional string representing path to key, will consequently add\n a file in the resulting configuration dictionary\n :param key_contents: optional string representing contents of key file\n :returns: dictionary representing a single OpenVPN client configuration"} {"_id": "q_7218", "text": "returns a list of NetJSON extra files for automatically generated clients\n produces side effects in ``client`` dictionary"} {"_id": "q_7219", "text": "parse requirements.txt, ignore links, exclude comments"} {"_id": "q_7220", "text": "Get all facts of this node. Additional arguments may also be\n specified that will be passed to the query function."} {"_id": "q_7221", "text": "Get a single fact from this node."} {"_id": "q_7222", "text": "Get all resources of this node or all resources of the specified\n type. 
Additional arguments may also be specified that will be passed\n to the query function."} {"_id": "q_7223", "text": "Get all reports for this node. Additional arguments may also be\n specified that will be passed to the query function."} {"_id": "q_7224", "text": "A base_url that will be used to construct the final\n URL we're going to query against.\n\n :returns: A URL of the form: ``proto://host:port``.\n :rtype: :obj:`string`"} {"_id": "q_7225", "text": "The complete URL we will end up querying. Depending on the\n endpoint we pass in this will result in different URLs with\n different prefixes.\n\n :param endpoint: The PuppetDB API endpoint we want to query.\n :type endpoint: :obj:`string`\n :param path: An additional path if we don't wish to query the\\\n bare endpoint.\n :type path: :obj:`string`\n\n :returns: A URL constructed from :func:`base_url` with the\\\n appropriate API version/prefix and the rest of the path added\\\n to it.\n :rtype: :obj:`string`"} {"_id": "q_7226", "text": "Query for nodes by either name or query. If both aren't\n provided this will return a list of all nodes. 
This method\n also fetches the node's status and event counts of the latest\n report from puppetdb.\n\n :param with_status: (optional) include the node status in the\\\n returned nodes\n :type with_status: :bool:\n :param unreported: (optional) amount of hours when a node gets\n marked as unreported\n :type unreported: :obj:`None` or integer\n :param \\*\\*kwargs: The rest of the keyword arguments are passed\n to the _query function\n\n :returns: A generator yielding Nodes.\n :rtype: :class:`pypuppetdb.types.Node`"} {"_id": "q_7227", "text": "Gets a single node from PuppetDB.\n\n :param name: The name of the node search.\n :type name: :obj:`string`\n\n :return: An instance of Node\n :rtype: :class:`pypuppetdb.types.Node`"} {"_id": "q_7228", "text": "Get the available catalog for a given node.\n\n :param node: (Required) The name of the PuppetDB node.\n :type: :obj:`string`\n\n :returns: An instance of Catalog\n :rtype: :class:`pypuppetdb.types.Catalog`"} {"_id": "q_7229", "text": "Connect with PuppetDB. 
This will return an object allowing you\n to query the API through its methods.\n\n :param host: (Default: 'localhost') Hostname or IP of PuppetDB.\n :type host: :obj:`string`\n\n :param port: (Default: '8080') Port on which to talk to PuppetDB.\n :type port: :obj:`int`\n\n :param ssl_verify: (optional) Verify PuppetDB server certificate.\n :type ssl_verify: :obj:`bool` or :obj:`string` True, False or filesystem \\\n path to CA certificate.\n\n :param ssl_key: (optional) Path to our client secret key.\n :type ssl_key: :obj:`None` or :obj:`string` representing a filesystem\\\n path.\n\n :param ssl_cert: (optional) Path to our client certificate.\n :type ssl_cert: :obj:`None` or :obj:`string` representing a filesystem\\\n path.\n\n :param timeout: (Default: 10) Number of seconds to wait for a response.\n :type timeout: :obj:`int`\n\n :param protocol: (optional) Explicitly specify the protocol to be used\n (especially handy when using HTTPS with ssl_verify=False and\n without certs)\n :type protocol: :obj:`None` or :obj:`string`\n\n :param url_path: (Default: '/') The URL path where PuppetDB is served\n :type url_path: :obj:`None` or :obj:`string`\n\n :param username: (optional) The username to use for HTTP basic\n authentication\n :type username: :obj:`None` or :obj:`string`\n\n :param password: (optional) The password to use for HTTP basic\n authentication\n :type password: :obj:`None` or :obj:`string`\n\n :param token: (optional) The x-auth token to use for X-Authentication\n :type token: :obj:`None` or :obj:`string`"} {"_id": "q_7230", "text": "The Master has been started from the command line. 
Execute ad-hoc tests if desired."} {"_id": "q_7231", "text": "Direct operate a set of commands\n\n :param command_set: set of command headers\n :param callback: callback that will be invoked upon completion or failure\n :param config: optional configuration that controls normal callbacks and allows the user to be specified for SA"} {"_id": "q_7232", "text": "Select and operate a single command\n\n :param command: command to operate\n :param index: index of the command\n :param callback: callback that will be invoked upon completion or failure\n :param config: optional configuration that controls normal callbacks and allows the user to be specified for SA"} {"_id": "q_7233", "text": "Select and operate a set of commands\n\n :param command_set: set of command headers\n :param callback: callback that will be invoked upon completion or failure\n :param config: optional configuration that controls normal callbacks and allows the user to be specified for SA"} {"_id": "q_7234", "text": "The Outstation has been started from the command line. Execute ad-hoc tests if desired."} {"_id": "q_7235", "text": "The Master sent an Operate command to the Outstation. Handle it.\n\n :param command: ControlRelayOutputBlock,\n AnalogOutputInt16, AnalogOutputInt32, AnalogOutputFloat32, or AnalogOutputDouble64.\n :param index: int\n :param op_type: OperateType\n :return: CommandStatus"} {"_id": "q_7236", "text": "Create Bloomberg connection\n\n Returns:\n (Bloomberg connection, if connection is new)"} {"_id": "q_7237", "text": "Stop and destroy Bloomberg connection"} {"_id": "q_7238", "text": "Parse markdown as description"} {"_id": "q_7239", "text": "Standardize earning outputs and add percentages for each block\n\n Args:\n data: earning data block\n header: earning headers\n\n Returns:\n pd.DataFrame\n\n Examples:\n >>> format_earning(\n ... data=pd.read_pickle('xbbg/tests/data/sample_earning.pkl'),\n ... header=pd.read_pickle('xbbg/tests/data/sample_earning_header.pkl')\n ... 
).round(2)\n level fy2017 fy2017_pct\n Asia-Pacific 1.0 3540.0 66.43\n \u00a0\u00a0\u00a0China 2.0 1747.0 49.35\n \u00a0\u00a0\u00a0Japan 2.0 1242.0 35.08\n \u00a0\u00a0\u00a0Singapore 2.0 551.0 15.56\n United States 1.0 1364.0 25.60\n Europe 1.0 263.0 4.94\n Other Countries 1.0 162.0 3.04"} {"_id": "q_7240", "text": "Format `pdblp` outputs to column-based results\n\n Args:\n data: `pdblp` result\n source: `bdp` or `bds`\n col_maps: rename columns with these mappings\n\n Returns:\n pd.DataFrame\n\n Examples:\n >>> format_output(\n ... data=pd.read_pickle('xbbg/tests/data/sample_bdp.pkl'),\n ... source='bdp'\n ... ).reset_index()\n ticker name\n 0 QQQ US Equity INVESCO QQQ TRUST SERIES 1\n 1 SPY US Equity SPDR S&P 500 ETF TRUST\n >>> format_output(\n ... data=pd.read_pickle('xbbg/tests/data/sample_dvd.pkl'),\n ... source='bds', col_maps={'Dividend Frequency': 'dvd_freq'}\n ... ).loc[:, ['ex_date', 'dividend_amount', 'dvd_freq']].reset_index()\n ticker ex_date dividend_amount dvd_freq\n 0 C US Equity 2018-02-02 0.32 Quarter"} {"_id": "q_7241", "text": "Format intraday data\n\n Args:\n data: pd.DataFrame from bdib\n ticker: ticker\n\n Returns:\n pd.DataFrame\n\n Examples:\n >>> format_intraday(\n ... data=pd.read_parquet('xbbg/tests/data/sample_bdib.parq'),\n ... ticker='SPY US Equity',\n ... ).xs('close', axis=1, level=1, drop_level=False)\n ticker SPY US Equity\n field close\n 2018-12-28 09:30:00-05:00 249.67\n 2018-12-28 09:31:00-05:00 249.54\n 2018-12-28 09:32:00-05:00 249.22\n 2018-12-28 09:33:00-05:00 249.01\n 2018-12-28 09:34:00-05:00 248.86\n >>> format_intraday(\n ... data=pd.read_parquet('xbbg/tests/data/sample_bdib.parq'),\n ... ticker='SPY US Equity', price_only=True\n ... 
)\n ticker SPY US Equity\n 2018-12-28 09:30:00-05:00 249.67\n 2018-12-28 09:31:00-05:00 249.54\n 2018-12-28 09:32:00-05:00 249.22\n 2018-12-28 09:33:00-05:00 249.01\n 2018-12-28 09:34:00-05:00 248.86"} {"_id": "q_7242", "text": "Logging info for given tickers and fields\n\n Args:\n tickers: tickers\n flds: fields\n\n Returns:\n str\n\n Examples:\n >>> print(info_qry(\n ... tickers=['NVDA US Equity'], flds=['Name', 'Security_Name']\n ... ))\n tickers: ['NVDA US Equity']\n fields: ['Name', 'Security_Name']"} {"_id": "q_7243", "text": "Bloomberg historical data\n\n Args:\n tickers: ticker(s)\n flds: field(s)\n start_date: start date\n end_date: end date - default today\n adjust: `all`, `dvd`, `normal`, `abn` (=abnormal), `split`, `-` or None\n exact match of above words will adjust for corresponding events\n Case 0: `-` no adjustment for dividend or split\n Case 1: `dvd` or `normal|abn` will adjust for all dividends except splits\n Case 2: `split` will adjust for splits and ignore all dividends\n Case 3: `all` == `dvd|split` == adjust for all\n Case 4: None == Bloomberg default OR use kwargs\n **kwargs: overrides\n\n Returns:\n pd.DataFrame\n\n Examples:\n >>> res = bdh(\n ... tickers='VIX Index', flds=['High', 'Low', 'Last_Price'],\n ... start_date='2018-02-05', end_date='2018-02-07',\n ... ).round(2).transpose()\n >>> res.index.name = None\n >>> res.columns.name = None\n >>> res\n 2018-02-05 2018-02-06 2018-02-07\n VIX Index High 38.80 50.30 31.64\n Low 16.80 22.42 21.17\n Last_Price 37.32 29.98 27.73\n >>> bdh(\n ... tickers='AAPL US Equity', flds='Px_Last',\n ... start_date='20140605', end_date='20140610', adjust='-'\n ... ).round(2)\n ticker AAPL US Equity\n field Px_Last\n 2014-06-05 647.35\n 2014-06-06 645.57\n 2014-06-09 93.70\n 2014-06-10 94.25\n >>> bdh(\n ... tickers='AAPL US Equity', flds='Px_Last',\n ... start_date='20140606', end_date='20140609',\n ... CshAdjNormal=False, CshAdjAbnormal=False, CapChg=False,\n ... 
).round(2)\n ticker AAPL US Equity\n field Px_Last\n 2014-06-06 645.57\n 2014-06-09 93.70"} {"_id": "q_7244", "text": "Bloomberg intraday bar data within market session\n\n Args:\n ticker: ticker\n dt: date\n session: examples include\n day_open_30, am_normal_30_30, day_close_30, allday_exact_0930_1000\n **kwargs:\n ref: reference ticker or exchange for timezone\n keep_tz: if keep tz if reference ticker / exchange is given\n start_time: start time\n end_time: end time\n typ: [TRADE, BID, ASK, BID_BEST, ASK_BEST, BEST_BID, BEST_ASK]\n\n Returns:\n pd.DataFrame"} {"_id": "q_7245", "text": "Earning exposures by Geo or Products\n\n Args:\n ticker: ticker name\n by: [G(eo), P(roduct)]\n typ: type of earning, start with `PG_` in Bloomberg FLDS - default `Revenue`\n ccy: currency of earnings\n level: hierarchy level of earnings\n\n Returns:\n pd.DataFrame\n\n Examples:\n >>> data = earning('AMD US Equity', Eqy_Fund_Year=2017, Number_Of_Periods=1)\n >>> data.round(2)\n level fy2017 fy2017_pct\n Asia-Pacific 1.0 3540.0 66.43\n \u00a0\u00a0\u00a0China 2.0 1747.0 49.35\n \u00a0\u00a0\u00a0Japan 2.0 1242.0 35.08\n \u00a0\u00a0\u00a0Singapore 2.0 551.0 15.56\n United States 1.0 1364.0 25.60\n Europe 1.0 263.0 4.94\n Other Countries 1.0 162.0 3.04"} {"_id": "q_7246", "text": "Active futures contract\n\n Args:\n ticker: futures ticker, i.e., ESA Index, Z A Index, CLA Comdty, etc.\n dt: date\n\n Returns:\n str: ticker name"} {"_id": "q_7247", "text": "Get proper ticker from generic ticker\n\n Args:\n gen_ticker: generic ticker\n dt: date\n freq: futures contract frequency\n log: level of logs\n\n Returns:\n str: exact futures ticker"} {"_id": "q_7248", "text": "Check exchange hours vs local hours\n\n Args:\n tickers: list of tickers\n tz_exch: exchange timezone\n tz_loc: local timezone\n\n Returns:\n Local and exchange hours"} {"_id": "q_7249", "text": "Data file location for Bloomberg historical data\n\n Args:\n ticker: ticker name\n dt: date\n typ: [TRADE, BID, ASK, BID_BEST, 
ASK_BEST, BEST_BID, BEST_ASK]\n\n Returns:\n file location\n\n Examples:\n >>> os.environ['BBG_ROOT'] = ''\n >>> hist_file(ticker='ES1 Index', dt='2018-08-01') == ''\n True\n >>> os.environ['BBG_ROOT'] = '/data/bbg'\n >>> hist_file(ticker='ES1 Index', dt='2018-08-01')\n '/data/bbg/Index/ES1 Index/TRADE/2018-08-01.parq'"} {"_id": "q_7250", "text": "Data file location for Bloomberg reference data\n\n Args:\n ticker: ticker name\n fld: field\n has_date: whether add current date to data file\n cache: if has_date is True, whether to load file from latest cached\n ext: file extension\n **kwargs: other overrides passed to ref function\n\n Returns:\n file location\n\n Examples:\n >>> import shutil\n >>>\n >>> os.environ['BBG_ROOT'] = ''\n >>> ref_file('BLT LN Equity', fld='Crncy') == ''\n True\n >>> os.environ['BBG_ROOT'] = '/data/bbg'\n >>> ref_file('BLT LN Equity', fld='Crncy', cache=True)\n '/data/bbg/Equity/BLT LN Equity/Crncy/ovrd=None.parq'\n >>> ref_file('BLT LN Equity', fld='Crncy')\n ''\n >>> cur_dt = utils.cur_time(tz=utils.DEFAULT_TZ)\n >>> ref_file(\n ... 'BLT LN Equity', fld='DVD_Hist_All', has_date=True, cache=True,\n ... ).replace(cur_dt, '[cur_date]')\n '/data/bbg/Equity/BLT LN Equity/DVD_Hist_All/asof=[cur_date], ovrd=None.parq'\n >>> ref_file(\n ... 'BLT LN Equity', fld='DVD_Hist_All', has_date=True,\n ... cache=True, DVD_Start_Dt='20180101',\n ... ).replace(cur_dt, '[cur_date]')[:-5]\n '/data/bbg/Equity/BLT LN Equity/DVD_Hist_All/asof=[cur_date], DVD_Start_Dt=20180101'\n >>> sample = 'asof=2018-11-02, DVD_Start_Dt=20180101, DVD_End_Dt=20180501.pkl'\n >>> root_path = 'xbbg/tests/data'\n >>> sub_path = f'{root_path}/Equity/AAPL US Equity/DVD_Hist_All'\n >>> os.environ['BBG_ROOT'] = root_path\n >>> for tmp_file in files.all_files(sub_path): os.remove(tmp_file)\n >>> files.create_folder(sub_path)\n >>> sample in shutil.copy(f'{root_path}/{sample}', sub_path)\n True\n >>> new_file = ref_file(\n ... 
'AAPL US Equity', 'DVD_Hist_All', DVD_Start_Dt='20180101',\n ... has_date=True, cache=True, ext='pkl'\n ... )\n >>> new_file.split('/')[-1] == f'asof={cur_dt}, DVD_Start_Dt=20180101.pkl'\n True\n >>> old_file = 'asof=2018-11-02, DVD_Start_Dt=20180101, DVD_End_Dt=20180501.pkl'\n >>> old_full = '/'.join(new_file.split('/')[:-1] + [old_file])\n >>> updated_file = old_full.replace('2018-11-02', cur_dt)\n >>> updated_file in shutil.copy(old_full, updated_file)\n True\n >>> exist_file = ref_file(\n ... 'AAPL US Equity', 'DVD_Hist_All', DVD_Start_Dt='20180101',\n ... has_date=True, cache=True, ext='pkl'\n ... )\n >>> exist_file == updated_file\n False\n >>> exist_file = ref_file(\n ... 'AAPL US Equity', 'DVD_Hist_All', DVD_Start_Dt='20180101',\n ... DVD_End_Dt='20180501', has_date=True, cache=True, ext='pkl'\n ... )\n >>> exist_file == updated_file\n True"} {"_id": "q_7251", "text": "Check whether data is done for the day and save\n\n Args:\n data: data\n ticker: ticker\n dt: date\n typ: [TRADE, BID, ASK, BID_BEST, ASK_BEST, BEST_BID, BEST_ASK]\n\n Examples:\n >>> os.environ['BBG_ROOT'] = 'xbbg/tests/data'\n >>> sample = pd.read_parquet('xbbg/tests/data/aapl.parq')\n >>> save_intraday(sample, 'AAPL US Equity', '2018-11-02')\n >>> # Invalid exchange\n >>> save_intraday(sample, 'AAPL XX Equity', '2018-11-02')\n >>> # Invalid empty data\n >>> save_intraday(pd.DataFrame(), 'AAPL US Equity', '2018-11-02')\n >>> # Invalid date - too close\n >>> cur_dt = utils.cur_time()\n >>> save_intraday(sample, 'AAPL US Equity', cur_dt)"} {"_id": "q_7252", "text": "Exchange info for given ticker\n\n Args:\n ticker: ticker or exchange\n\n Returns:\n pd.Series\n\n Examples:\n >>> exch_info('SPY US Equity')\n tz America/New_York\n allday [04:00, 20:00]\n day [09:30, 16:00]\n pre [04:00, 09:30]\n post [16:01, 20:00]\n dtype: object\n >>> exch_info('ES1 Index')\n tz America/New_York\n allday [18:00, 17:00]\n day [08:00, 17:00]\n dtype: object\n >>> exch_info('Z 1 Index')\n tz Europe/London\n 
allday [01:00, 21:00]\n day [01:00, 21:00]\n dtype: object\n >>> exch_info('TESTTICKER Corp').empty\n True\n >>> exch_info('US')\n tz America/New_York\n allday [04:00, 20:00]\n day [09:30, 16:00]\n pre [04:00, 09:30]\n post [16:01, 20:00]\n dtype: object"} {"_id": "q_7253", "text": "Get info for given market\n\n Args:\n ticker: Bloomberg full ticker\n\n Returns:\n dict\n\n Examples:\n >>> info = market_info('SHCOMP Index')\n >>> info['exch']\n 'EquityChina'\n >>> info = market_info('ICICIC=1 IS Equity')\n >>> info['freq'], info['is_fut']\n ('M', True)\n >>> info = market_info('INT1 Curncy')\n >>> info['freq'], info['is_fut']\n ('M', True)\n >>> info = market_info('CL1 Comdty')\n >>> info['freq'], info['is_fut']\n ('M', True)\n >>> # Wrong tickers\n >>> market_info('C XX Equity')\n {}\n >>> market_info('XXX Comdty')\n {}\n >>> market_info('Bond_ISIN Corp')\n {}\n >>> market_info('XYZ Index')\n {}\n >>> market_info('XYZ Curncy')\n {}"} {"_id": "q_7254", "text": "Currency pair info\n\n Args:\n local: local currency\n base: base currency\n\n Returns:\n CurrencyPair\n\n Examples:\n >>> ccy_pair(local='HKD', base='USD')\n CurrencyPair(ticker='HKD Curncy', factor=1.0, power=1)\n >>> ccy_pair(local='GBp')\n CurrencyPair(ticker='GBP Curncy', factor=100, power=-1)\n >>> ccy_pair(local='USD', base='GBp')\n CurrencyPair(ticker='GBP Curncy', factor=0.01, power=1)\n >>> ccy_pair(local='XYZ', base='USD')\n CurrencyPair(ticker='', factor=1.0, power=1)\n >>> ccy_pair(local='GBP', base='GBp')\n CurrencyPair(ticker='', factor=0.01, power=1)\n >>> ccy_pair(local='GBp', base='GBP')\n CurrencyPair(ticker='', factor=100.0, power=1)"} {"_id": "q_7255", "text": "Market close time for ticker\n\n Args:\n ticker: ticker name\n dt: date\n timing: [EOD (default), BOD]\n tz: conversion to timezone\n\n Returns:\n str: date & time\n\n Examples:\n >>> market_timing('7267 JT Equity', dt='2018-09-10')\n '2018-09-10 14:58'\n >>> market_timing('7267 JT Equity', dt='2018-09-10', 
tz=timezone.TimeZone.NY)\n '2018-09-10 01:58:00-04:00'\n >>> market_timing('7267 JT Equity', dt='2018-01-10', tz='NY')\n '2018-01-10 00:58:00-05:00'\n >>> market_timing('7267 JT Equity', dt='2018-09-10', tz='SPX Index')\n '2018-09-10 01:58:00-04:00'\n >>> market_timing('8035 JT Equity', dt='2018-09-10', timing='BOD')\n '2018-09-10 09:01'\n >>> market_timing('Z 1 Index', dt='2018-09-10', timing='FINISHED')\n '2018-09-10 21:00'\n >>> market_timing('TESTTICKER Corp', dt='2018-09-10')\n ''"} {"_id": "q_7256", "text": "Load parameters for assets\n\n Args:\n cat: category\n\n Returns:\n dict\n\n Examples:\n >>> import pandas as pd\n >>>\n >>> assets = load_info(cat='assets')\n >>> all(cat in assets for cat in ['Equity', 'Index', 'Curncy', 'Corp'])\n True\n >>> os.environ['BBG_PATH'] = ''\n >>> exch = load_info(cat='exch')\n >>> pd.Series(exch['EquityUS']).allday\n [400, 2000]\n >>> test_root = f'{PKG_PATH}/tests'\n >>> os.environ['BBG_PATH'] = test_root\n >>> ovrd_exch = load_info(cat='exch')\n >>> # Somehow os.environ is not set properly in doctest environment\n >>> ovrd_exch.update(_load_yaml_(f'{test_root}/markets/exch.yml'))\n >>> pd.Series(ovrd_exch['EquityUS']).allday\n [300, 2100]"} {"_id": "q_7257", "text": "Convert YAML input to hours\n\n Args:\n num: number in YAML file, e.g., 900, 1700, etc.\n\n Returns:\n str\n\n Examples:\n >>> to_hour(900)\n '09:00'\n >>> to_hour(1700)\n '17:00'"} {"_id": "q_7258", "text": "Make folder as well as all parent folders if not exists\n\n Args:\n path_name: full path name\n is_file: whether input is name of file"} {"_id": "q_7259", "text": "Search all files with criteria\n Returned list will be sorted by last modified\n\n Args:\n path_name: full path name\n keyword: keyword to search\n ext: file extensions, split by ','\n full_path: whether return full path (default True)\n has_date: whether has date in file name (default False)\n date_fmt: date format to check for has_date parameter\n\n Returns:\n list: all file names with 
criteria fulfilled"} {"_id": "q_7260", "text": "Search all folders with criteria\n Returned list will be sorted by last modified\n\n Args:\n path_name: full path name\n keyword: keyword to search\n has_date: whether has date in file name (default False)\n date_fmt: date format to check for has_date parameter\n\n Returns:\n list: all folder names with criteria fulfilled"} {"_id": "q_7261", "text": "Sort files or folders by modified time\n\n Args:\n files_or_folders: list of files or folders\n\n Returns:\n list"} {"_id": "q_7262", "text": "Filter files or dates by date patterns\n\n Args:\n files_or_folders: list of files or folders\n date_fmt: date format\n\n Returns:\n list"} {"_id": "q_7263", "text": "File modified time in python\n\n Args:\n file_name: file name\n\n Returns:\n pd.Timestamp"} {"_id": "q_7264", "text": "Get interval from defined session\n\n Args:\n ticker: ticker\n session: session\n\n Returns:\n Session of start_time and end_time\n\n Examples:\n >>> get_interval('005490 KS Equity', 'day_open_30')\n Session(start_time='09:00', end_time='09:30')\n >>> get_interval('005490 KS Equity', 'day_normal_30_20')\n Session(start_time='09:31', end_time='15:00')\n >>> get_interval('005490 KS Equity', 'day_close_20')\n Session(start_time='15:01', end_time='15:20')\n >>> get_interval('700 HK Equity', 'am_open_30')\n Session(start_time='09:30', end_time='10:00')\n >>> get_interval('700 HK Equity', 'am_normal_30_30')\n Session(start_time='10:01', end_time='11:30')\n >>> get_interval('700 HK Equity', 'am_close_30')\n Session(start_time='11:31', end_time='12:00')\n >>> get_interval('ES1 Index', 'day_exact_2130_2230')\n Session(start_time=None, end_time=None)\n >>> get_interval('ES1 Index', 'allday_exact_2130_2230')\n Session(start_time='21:30', end_time='22:30')\n >>> get_interval('ES1 Index', 'allday_exact_2130_0230')\n Session(start_time='21:30', end_time='02:30')\n >>> get_interval('AMLP US', 'day_open_30')\n Session(start_time=None, end_time=None)\n >>> 
get_interval('7974 JP Equity', 'day_normal_180_300') is SessNA\n True\n >>> get_interval('Z 1 Index', 'allday_normal_30_30')\n Session(start_time='01:31', end_time='20:30')\n >>> get_interval('GBP Curncy', 'day')\n Session(start_time='17:02', end_time='17:00')"} {"_id": "q_7265", "text": "Shift start time by mins\n\n Args:\n start_time: start time in terms of HH:MM string\n mins: number of minutes (+ / -)\n\n Returns:\n end time in terms of HH:MM string"} {"_id": "q_7266", "text": "Time intervals for market open\n\n Args:\n session: [allday, day, am, pm, night]\n mins: minutes after open\n\n Returns:\n Session of start_time and end_time"} {"_id": "q_7267", "text": "Time intervals for market close\n\n Args:\n session: [allday, day, am, pm, night]\n mins: minutes before close\n\n Returns:\n Session of start_time and end_time"} {"_id": "q_7268", "text": "Explicitly specify start time and end time\n\n Args:\n session: predefined session\n start_time: start time in terms of HHMM string\n end_time: end time in terms of HHMM string\n\n Returns:\n Session of start_time and end_time"} {"_id": "q_7269", "text": "Convert to tz\n\n Args:\n dt: date time\n to_tz: to tz\n from_tz: from tz - will be ignored if tz from dt is given\n\n Returns:\n str: date & time\n\n Examples:\n >>> dt_1 = pd.Timestamp('2018-09-10 16:00', tz='Asia/Hong_Kong')\n >>> tz_convert(dt_1, to_tz='NY')\n '2018-09-10 04:00:00-04:00'\n >>> dt_2 = pd.Timestamp('2018-01-10 16:00')\n >>> tz_convert(dt_2, to_tz='HK', from_tz='NY')\n '2018-01-11 05:00:00+08:00'\n >>> dt_3 = '2018-09-10 15:00'\n >>> tz_convert(dt_3, to_tz='NY', from_tz='JP')\n '2018-09-10 02:00:00-04:00'"} {"_id": "q_7270", "text": "Full information for missing query"} {"_id": "q_7271", "text": "Check number of trials for missing values\n\n Returns:\n int: number of trials already tried"} {"_id": "q_7272", "text": "Decorator for public views that do not require authentication\n Sets an attribute on the function STRONGHOLD_IS_PUBLIC to True"} {"_id": 
"q_7273", "text": "Get the version of the package from the given file by\n executing it and extracting the given `name`."} {"_id": "q_7274", "text": "Find all of the packages."} {"_id": "q_7275", "text": "Echo a command before running it. Defaults to repo as cwd"} {"_id": "q_7276", "text": "Return a Command that checks that certain files exist.\n\n Raises a ValueError if any of the files are missing.\n\n Note: The check is skipped if the `--skip-npm` flag is used."} {"_id": "q_7277", "text": "Wrap a setup command\n\n Parameters\n ----------\n cmds: list(str)\n The names of the other commands to run prior to the command.\n strict: boolean, optional\n Whether to raise errors when a pre-command fails."} {"_id": "q_7278", "text": "Expand data file specs into valid data files metadata.\n\n Parameters\n ----------\n data_specs: list of tuples\n See [createcmdclass] for description.\n existing: list of tuples\n The existing distribution data_files metadata.\n\n Returns\n -------\n A valid list of data_files items."} {"_id": "q_7279", "text": "Translate and compile a glob pattern to a regular expression matcher."} {"_id": "q_7280", "text": "Iterate over all the parts of a path.\n\n Splits path recursively with os.path.split()."} {"_id": "q_7281", "text": "Join translated glob pattern parts.\n\n This is different from a simple join, as care need to be taken\n to allow ** to match ZERO or more directories."} {"_id": "q_7282", "text": "Translate a glob PATTERN PART to a regular expression."} {"_id": "q_7283", "text": "Send DDL to create the specified `table`\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None"} {"_id": "q_7284", "text": "Send DDL to create the specified `table` indexes\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None"} 
{"_id": "q_7285", "text": "Send DDL to create the specified `table` triggers\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None"} {"_id": "q_7286", "text": "Write the contents of `table`\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n - `reader`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader` object that allows reading from the data source.\n\n Returns None"} {"_id": "q_7287", "text": "Write TRIGGERs existing on `table` to the output file\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None"} {"_id": "q_7288", "text": "Utility for sending a predefined request and printing response as well\n as storing messages in a list, useful for testing\n\n Parameters\n ----------\n session: blpapi.session.Session\n request: blpapi.request.Request\n Request to be sent\n\n Returns\n -------\n List of all messages received"} {"_id": "q_7289", "text": "Initialize blpapi.Session services"} {"_id": "q_7290", "text": "Get Open, High, Low, Close, Volume, and numEvents for a ticker.\n Return pandas DataFrame\n\n Parameters\n ----------\n ticker: string\n String corresponding to ticker\n start_datetime: string\n UTC datetime in format YYYY-mm-ddTHH:MM:SS\n end_datetime: string\n UTC datetime in format YYYY-mm-ddTHH:MM:SS\n event_type: string {TRADE, BID, ASK, BID_BEST, ASK_BEST, BEST_BID,\n BEST_ASK}\n Requested data event type\n interval: int {1... 1440}\n Length of time bars\n elms: list of tuples\n List of tuples where each tuple corresponds to the other elements\n to be set. 
Refer to the IntradayBarRequest section in the\n 'Services & schemas reference guide' for more info on these values"} {"_id": "q_7291", "text": "Enqueue task with specified data."} {"_id": "q_7292", "text": "This method is a good one to extend if you want to create a queue which always applies an extra predicate."} {"_id": "q_7293", "text": "Designed to be passed as the default kwarg in simplejson.dumps. Serializes dates and datetimes to ISO strings."} {"_id": "q_7294", "text": "Returns a new connection to the database."} {"_id": "q_7295", "text": "Run a set of InsertWorkers and record their performance."} {"_id": "q_7296", "text": "Used for development only"} {"_id": "q_7297", "text": "Returns the number of connections cached by the pool."} {"_id": "q_7298", "text": "OperationalErrors are emitted by the _mysql library for\n almost every error code emitted by MySQL. Because of this we\n verify that the error is actually a connection error before\n terminating the connection and firing off a PoolConnectionException"} {"_id": "q_7299", "text": "Build a simple expression ready to be added onto another query.\n\n >>> simple_expression(joiner=' AND ', name='bob', role='admin')\n \"`name`=%(_QB_name)s AND `role`=%(_QB_role)s\", { '_QB_name': 'bob', '_QB_role': 'admin' }"} {"_id": "q_7300", "text": "Build an update query.\n\n >>> update('foo_table', a=5, b=2)\n \"UPDATE `foo_table` SET `a`=%(_QB_a)s, `b`=%(_QB_b)s\", { '_QB_a': 5, '_QB_b': 2 }"} {"_id": "q_7301", "text": "Connect to the database specified"} {"_id": "q_7302", "text": "Start a step."} {"_id": "q_7303", "text": "Stop a step."} {"_id": "q_7304", "text": "load steps -> basically load all the datetime isoformats into datetimes"} {"_id": "q_7305", "text": "Assemble one EVM instruction from its textual representation.\n\n :param asmcode: assembly code for one instruction\n :type asmcode: str\n :param pc: program counter of the instruction (optional)\n :type pc: int\n :param fork: fork name (optional)\n :type fork: 
str\n :return: An Instruction object\n :rtype: Instruction\n\n Example use::\n\n >>> print assemble_one('LT')"} {"_id": "q_7306", "text": "Assemble a sequence of textual representations of EVM instructions\n\n :param asmcode: assembly code for any number of instructions\n :type asmcode: str\n :param pc: program counter of the first instruction (optional)\n :type pc: int\n :param fork: fork name (optional)\n :type fork: str\n :return: A generator of Instruction objects\n :rtype: generator[Instructions]\n\n Example use::\n\n >>> assemble_all('''PUSH1 0x60\\n \\\n PUSH1 0x40\\n \\\n MSTORE\\n \\\n PUSH1 0x2\\n \\\n PUSH2 0x108\\n \\\n PUSH1 0x0\\n \\\n POP\\n \\\n SSTORE\\n \\\n PUSH1 0x40\\n \\\n MLOAD\\n \\\n ''')"} {"_id": "q_7307", "text": "Disassemble a single instruction from a bytecode\n\n :param bytecode: the bytecode stream\n :type bytecode: str | bytes | bytearray | iterator\n :param pc: program counter of the instruction (optional)\n :type pc: int\n :param fork: fork name (optional)\n :type fork: str\n :return: an Instruction object\n :rtype: Instruction\n\n Example use::\n\n >>> print disassemble_one('\\x60\\x10')"} {"_id": "q_7308", "text": "Disassemble all instructions in bytecode\n\n :param bytecode: an evm bytecode (binary)\n :type bytecode: str | bytes | bytearray | iterator\n :param pc: program counter of the first instruction (optional)\n :type pc: int\n :param fork: fork name (optional)\n :type fork: str\n :return: A generator of Instruction objects\n :rtype: list[Instruction]\n\n Example use::\n\n >>> for inst in disassemble_all(bytecode):\n ... 
print(inst)\n\n ...\n PUSH1 0x60\n PUSH1 0x40\n MSTORE\n PUSH1 0x2\n PUSH2 0x108\n PUSH1 0x0\n POP\n SSTORE\n PUSH1 0x40\n MLOAD"} {"_id": "q_7309", "text": "Convert block number to fork name.\n\n :param block_number: block number\n :type block_number: int\n :return: fork name\n :rtype: str\n\n Example use::\n\n >>> block_to_fork(0)\n ...\n \"frontier\"\n >>> block_to_fork(4370000)\n ...\n \"byzantium\"\n >>> block_to_fork(4370001)\n ...\n \"byzantium\""} {"_id": "q_7310", "text": "Disconnects from the websocket connection and joins the Thread.\n\n :return:"} {"_id": "q_7311", "text": "Issues a reconnection by setting the reconnect_required event.\n\n :return:"} {"_id": "q_7312", "text": "Creates a websocket connection.\n\n :return:"} {"_id": "q_7313", "text": "Handles and passes received data to the appropriate handlers.\n\n :return:"} {"_id": "q_7314", "text": "Stops ping, pong and connection timers.\n\n :return:"} {"_id": "q_7315", "text": "Sends a ping message to the API and starts pong timers.\n\n :return:"} {"_id": "q_7316", "text": "Sends the given Payload to the API via the websocket connection.\n\n :param kwargs: payload parameters as key=value pairs\n :return:"} {"_id": "q_7317", "text": "Unpauses the connection.\n\n Send a message up to the client that it should re-subscribe to all\n channels.\n\n :return:"} {"_id": "q_7318", "text": "Distributes system messages to the appropriate handler.\n\n System messages include everything that arrives as a dict,\n or a list containing a heartbeat.\n\n :param data:\n :param ts:\n :return:"} {"_id": "q_7319", "text": "Handles INFO messages from the API and issues relevant actions.\n\n :param data:\n :param ts:"} {"_id": "q_7320", "text": "Handles data messages by passing them up to the client.\n\n :param data:\n :param ts:\n :return:"} {"_id": "q_7321", "text": "Resubscribes to all channels found in self.channel_configs.\n\n :param soft: if True, unsubscribes first.\n :return: None"} {"_id": "q_7322", "text": "Handles 
authentication responses.\n\n :param dtype:\n :param data:\n :param ts:\n :return:"} {"_id": "q_7323", "text": "Handles configuration messages.\n\n :param dtype:\n :param data:\n :param ts:\n :return:"} {"_id": "q_7324", "text": "Reset the client.\n\n :return:"} {"_id": "q_7325", "text": "Return a queue containing all received candles data.\n\n :param pair: str, Symbol pair to request data for\n :param timeframe: str\n :return: Queue()"} {"_id": "q_7326", "text": "Send configuration to websocket server\n\n :param decimals_as_strings: bool, turn on/off decimals as strings\n :param ts_as_dates: bool, decide to request timestamps as dates instead\n :param sequencing: bool, turn on sequencing\n\t:param ts: bool, request the timestamp to be appended to every array\n sent by the server\n :param kwargs:\n :return:"} {"_id": "q_7327", "text": "Unsubscribe from the passed pair's ticker channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:"} {"_id": "q_7328", "text": "Subscribe to the passed pair's order book channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:"} {"_id": "q_7329", "text": "Unsubscribe from the passed pair's order book channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:"} {"_id": "q_7330", "text": "Subscribe to the passed pair's raw order book channel.\n\n :param pair: str, Symbol pair to request data for\n :param prec:\n :param kwargs:\n :return:"} {"_id": "q_7331", "text": "Unsubscribe from the passed pair's raw order book channel.\n\n :param pair: str, Symbol pair to request data for\n :param prec:\n :param kwargs:\n :return:"} {"_id": "q_7332", "text": "Subscribe to the passed pair's trades channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:"} {"_id": "q_7333", "text": "Unsubscribe from the passed pair's trades channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:"} {"_id": 
"q_7334", "text": "Authenticate with the Bitfinex API.\n\n :return:"} {"_id": "q_7335", "text": "Internal callback for device command messages, parses source device from topic string and\n passes the information on to the registered device command callback"} {"_id": "q_7336", "text": "Internal callback for gateway command messages, parses source device from topic string and\n passes the information on to the registered device command callback"} {"_id": "q_7337", "text": "Internal callback for gateway notification messages, parses source device from topic string and\n passes the information on to the registered device command callback"} {"_id": "q_7338", "text": "Register one or more new device types, each request can contain a maximum of 512KB."} {"_id": "q_7339", "text": "Publish an event to Watson IoT Platform.\n\n # Parameters\n event (string): Name of this event\n msgFormat (string): Format of the data for this event\n data (dict): Data for this event\n qos (int): MQTT quality of service level to use (`0`, `1`, or `2`)\n on_publish(function): A function that will be called when receipt \n of the publication is confirmed. \n \n # Callback and QoS\n The use of the optional #on_publish function has different implications depending \n on the level of qos used to publish the event: \n \n - qos 0: the client has asynchronously begun to send the event\n - qos 1 and 2: the client has confirmation of delivery from the platform"} {"_id": "q_7340", "text": "Update an existing device"} {"_id": "q_7341", "text": "Iterate through all Connectors"} {"_id": "q_7342", "text": "List all device management extension packages"} {"_id": "q_7343", "text": "Create a new device management extension package\n In case of failure it throws APIException"} {"_id": "q_7344", "text": "Update a schema. 
Throws APIException on failure."} {"_id": "q_7345", "text": "Disconnect the client from IBM Watson IoT Platform"} {"_id": "q_7346", "text": "Called when the broker responds to our connection request.\n\n The value of rc determines success or not:\n 0: Connection successful\n 1: Connection refused - incorrect protocol version\n 2: Connection refused - invalid client identifier\n 3: Connection refused - server unavailable\n 4: Connection refused - bad username or password\n 5: Connection refused - not authorised\n 6-255: Currently unused."} {"_id": "q_7347", "text": "Subscribe to device event messages\n\n # Parameters\n typeId (string): typeId for the subscription, optional. Defaults to all device types (MQTT `+` wildcard)\n deviceId (string): deviceId for the subscription, optional. Defaults to all devices (MQTT `+` wildcard)\n eventId (string): eventId for the subscription, optional. Defaults to all events (MQTT `+` wildcard)\n msgFormat (string): msgFormat for the subscription, optional. Defaults to all formats (MQTT `+` wildcard)\n qos (int): MQTT quality of service level to use (`0`, `1`, or `2`)\n\n # Returns\n int: If the subscription was successful then the return Message ID (mid) for the subscribe request\n will be returned. 
The mid value can be used to track the subscribe request by checking against\n the mid argument if you register a subscriptionCallback method.\n If the subscription fails then the return value will be `0`"} {"_id": "q_7348", "text": "Publish a command to a device\n\n # Parameters\n typeId (string) : The type of the device this command is to be published to\n deviceId (string): The id of the device this command is to be published to\n command (string) : The name of the command\n msgFormat (string) : The format of the command payload\n data (dict) : The command data\n qos (int) : The equivalent MQTT semantics of quality of service using the same constants (optional, defaults to `0`)\n on_publish (function) : A function that will be called when receipt of the publication is confirmed. This has\n different implications depending on the qos:\n - qos 0 : the client has asynchronously begun to send the event\n - qos 1 and 2 : the client has confirmation of delivery from WIoTP"} {"_id": "q_7349", "text": "Internal callback for messages that have not been handled by any of the specific internal callbacks, these\n messages are not passed on to any user provided callback"} {"_id": "q_7350", "text": "Internal callback for device event messages, parses source device from topic string and\n passes the information on to the registered device event callback"} {"_id": "q_7351", "text": "Internal callback for device status messages, parses source device from topic string and\n passes the information on to the registered device status callback"} {"_id": "q_7352", "text": "Internal callback for application command messages, parses source application from topic string and\n passes the information on to the registered application status callback"} {"_id": "q_7353", "text": "Retrieves the last cached message for specified event from a specific device."} {"_id": "q_7354", "text": "Retrieves a list of the last cached message for all events from a specific device."} {"_id": "q_7355", "text": 
"Initiates a device management request, such as reboot.\n In case of failure it throws APIException"} {"_id": "q_7356", "text": "Force a flush of the index to storage. Renders index\n inaccessible."} {"_id": "q_7357", "text": "Returns the ``k``-nearest objects to the given coordinates.\n\n :param coordinates: sequence or array\n This may be an object that satisfies the numpy array\n protocol, providing the index's dimension * 2 coordinate\n pairs representing the `mink` and `maxk` coordinates in\n each dimension defining the bounds of the query window.\n\n :param num_results: integer\n The number of results to return nearest to the given coordinates.\n If two index entries are equidistant, *both* are returned.\n This property means that :attr:`num_results` may return more\n items than specified\n\n :param objects: True / False / 'raw'\n If True, the nearest method will return index objects that\n were pickled when they were stored with each index entry, as\n well as the id and bounds of the index entries.\n If 'raw', it will return the object as entered into the database\n without the :class:`rtree.index.Item` wrapper.\n\n Example of finding the three items nearest to this one::\n\n >>> from rtree import index\n >>> idx = index.Index()\n >>> idx.insert(4321, (34.37, 26.73, 49.37, 41.73), obj=42)\n >>> hits = idx.nearest((0, 0, 10, 10), 3, objects=True)"} {"_id": "q_7358", "text": "Deletes items from the index with the given ``'id'`` within the\n specified coordinates.\n\n :param id: long integer\n A long integer that is the identifier for this index entry. IDs\n need not be unique to be inserted into the index, and it is up\n to the user to ensure they are unique if this is a requirement.\n\n :param coordinates: sequence or array\n Dimension * 2 coordinate pairs, representing the min\n and max coordinates in each dimension of the item to be\n deleted from the index. 
Their ordering will depend on the\n index's :attr:`interleaved` data member.\n These are not the coordinates of a space containing the\n item, but those of the item itself. Together with the\n id parameter, they determine which item will be deleted.\n This may be an object that satisfies the numpy array protocol.\n\n Example::\n\n >>> from rtree import index\n >>> idx = index.Index()\n >>> idx.delete(4321,\n ... (34.3776829412, 26.7375853734, 49.3776829412,\n ... 41.7375853734))"} {"_id": "q_7359", "text": "Must be overridden. Must return a string with the loaded data."} {"_id": "q_7360", "text": "Deletes the item from the container within the specified\n coordinates.\n\n :param obj: object\n Any object.\n\n :param coordinates: sequence or array\n Dimension * 2 coordinate pairs, representing the min\n and max coordinates in each dimension of the item to be\n deleted from the index. Their ordering will depend on the\n index's :attr:`interleaved` data member.\n These are not the coordinates of a space containing the\n item, but those of the item itself. Together with the\n id parameter, they determine which item will be deleted.\n This may be an object that satisfies the numpy array protocol.\n\n Example::\n\n >>> from rtree import index\n >>> idx = index.RtreeContainer()\n >>> idx.delete(object(),\n ... (34.3776829412, 26.7375853734, 49.3776829412,\n ... 
41.7375853734))\n Traceback (most recent call last):\n ...\n IndexError: object is not in the index"} {"_id": "q_7361", "text": "Define delay adjustment policy"} {"_id": "q_7362", "text": "Convert string into camel case.\n\n Args:\n string: String to convert.\n\n Returns:\n string: Camel case string."} {"_id": "q_7363", "text": "Convert string into capital case.\n First letters will be uppercase.\n\n Args:\n string: String to convert.\n\n Returns:\n string: Capital case string."} {"_id": "q_7364", "text": "Convert string into spinal case.\n Join punctuation with hyphen.\n\n Args:\n string: String to convert.\n\n Returns:\n string: Spinal cased string."} {"_id": "q_7365", "text": "Convert string into sentence case.\n First letter capped and punctuation is joined with space.\n\n Args:\n string: String to convert.\n\n Returns:\n string: Sentence cased string."} {"_id": "q_7366", "text": "Convert string into snake case.\n Join punctuation with underscore\n\n Args:\n string: String to convert.\n\n Returns:\n string: Snake cased string."} {"_id": "q_7367", "text": "Attempt an import of the specified application"} {"_id": "q_7368", "text": "Initializes the Flask application with Common."} {"_id": "q_7369", "text": "Return a PIL Image instance cropped from `image`.\n\n Image has an aspect ratio (provided by dividing `width` / `height`),\n sized down to `width`x`height`. 
Any 'excess pixels' are trimmed away\n with respect to the pixel of `image` that corresponds to `ppoi` (Primary\n Point of Interest).\n\n `image`: A PIL Image instance\n `width`: Integer, width of the image to return (in pixels)\n `height`: Integer, height of the image to return (in pixels)\n `ppoi`: A 2-tuple of floats with values greater than 0 and less than 1\n These values are converted into a cartesian coordinate that\n signifies the 'center pixel' which the crop will center on\n (to trim the excess from the 'long side').\n\n Determines whether to trim away pixels from either the left/right or\n top/bottom sides by comparing the aspect ratio of `image` vs the\n aspect ratio of `width`x`height`.\n\n Will trim from the left/right sides if the aspect ratio of `image`\n is greater-than-or-equal-to the aspect ratio of `width`x`height`.\n\n Will trim from the top/bottom sides if the aspect ratio of `image`\n is less-than the aspect ratio of `width`x`height`.\n\n Similar to Kevin Cazabon's ImageOps.fit method but uses the\n ppoi value as an absolute centerpoint (as opposed to a\n percentage to trim off the 'long sides')."} {"_id": "q_7370", "text": "Return a BytesIO instance of `image` that fits in a bounding box.\n\n Bounding box dimensions are `width`x`height`."} {"_id": "q_7371", "text": "Return a BytesIO instance of `image` with inverted colors."} {"_id": "q_7372", "text": "Ensure data is prepped properly before handing off to ImageField."} {"_id": "q_7373", "text": "Process the field's placeholder image.\n\n Ensures the placeholder image has been saved to the same storage class\n as the field in a top level folder with a name specified by\n settings.VERSATILEIMAGEFIELD_SETTINGS['placeholder_directory_name']\n\n This should be called by the VersatileImageFileDescriptor __get__.\n If self.placeholder_image_name is already set it just returns right away."} {"_id": "q_7374", "text": "Return field's value just before saving."} {"_id": "q_7375", "text": "Update field's 
ppoi field, if defined.\n\n This method is hooked up to this field's pre_save method to update\n the ppoi immediately before the model instance (`instance`)\n it is associated with is saved.\n\n This field's ppoi can be forced to update with force=True,\n which is how VersatileImageField.pre_save calls this method."} {"_id": "q_7376", "text": "Handle data sent from MultiValueField forms that set ppoi values.\n\n `instance`: The model instance that is being altered via a form\n `data`: The data sent from the form to this field which can be either:\n * `None`: This is unset data from an optional field\n * A two-position tuple: (image_form_data, ppoi_data)\n * `image_form_data` options:\n * `None` the file for this field is unchanged\n * `False` unassign the file from the field\n * `ppoi_data` data structure:\n * `%(x_coordinate)sx%(y_coordinate)s`: The ppoi data to\n assign to the unchanged file"} {"_id": "q_7377", "text": "Unregister the FilteredImage subclass currently assigned to attr_name.\n\n If a FilteredImage subclass isn't already registered to filters.\n `attr_name`, NotRegistered will raise."} {"_id": "q_7378", "text": "Return the appropriate URL.\n\n URL is constructed based on these field conditions:\n * If empty (not `self.name`) and a placeholder is defined, the\n URL to the placeholder is returned.\n * Otherwise, defaults to vanilla ImageFieldFile behavior."} {"_id": "q_7379", "text": "Return the location where filtered images are stored."} {"_id": "q_7380", "text": "Return the location where sized images are stored."} {"_id": "q_7381", "text": "Return the location where filtered + sized images are stored."} {"_id": "q_7382", "text": "Preprocess an image.\n\n An API hook for image pre-processing. Calls any image format specific\n pre-processors (if defined). I.E. 
If `image_format` is 'JPEG', this\n method will look for a method named `preprocess_JPEG`, if found\n `image` will be passed to it.\n\n Arguments:\n * `image`: a PIL Image instance\n * `image_format`: str, a valid PIL format (i.e. 'JPEG' or 'GIF')\n\n Subclasses should return a 2-tuple:\n * [0]: A PIL Image instance.\n * [1]: A dictionary of additional keyword arguments to be used\n when the instance is saved. If no additional keyword\n arguments, return an empty dict ({})."} {"_id": "q_7383", "text": "Receive a PIL Image instance of a JPEG and return a 2-tuple.\n\n Returns:\n * [0]: Image instance, converted to RGB\n * [1]: Dict with a quality key (mapped to the value of `QUAL` as\n defined by the `VERSATILEIMAGEFIELD_JPEG_RESIZE_QUALITY`\n setting)"} {"_id": "q_7384", "text": "Return a PIL Image instance stored at `path_to_image`."} {"_id": "q_7385", "text": "Return PPOI value as a string."} {"_id": "q_7386", "text": "Create a resized image.\n\n `path_to_image`: The path to the image within the media directory to\n resize. If `None`, the\n VERSATILEIMAGEFIELD_PLACEHOLDER_IMAGE will be used.\n `save_path_on_storage`: Where on self.storage to save the resized image\n `width`: Width of resized image (int)\n `height`: Desired height of resized image (int)\n `filename_key`: A string that will be used in the sized image filename\n to signify what operation was done to it.\n Examples: 'crop' or 'scale'"} {"_id": "q_7387", "text": "Return a `path_to_image` location on `storage` as dictated by `width`, `height`\n and `filename_key`"} {"_id": "q_7388", "text": "Return the 'filtered path'"} {"_id": "q_7389", "text": "Validate a list of size keys.\n\n `sizes`: An iterable of 2-tuples, both strings. Example:\n [\n ('large', 'url'),\n ('medium', 'crop__400x400'),\n ('small', 'thumbnail__100x100')\n ]"} {"_id": "q_7390", "text": "Build a URL from `image_key`."} {"_id": "q_7391", "text": "Takes a raw `Instruction` and translates it into a human readable text\n representation. 
As of writing, the text representation for WASM is not yet\n standardized, so we just emit some generic format."} {"_id": "q_7392", "text": "Takes a `FunctionBody` and optionally a `FunctionType`, yielding the string \n representation of the function line by line. The function type is required\n for formatting function parameter and return value information."} {"_id": "q_7393", "text": "Decodes raw bytecode, yielding `Instruction`s."} {"_id": "q_7394", "text": "Deprecates a function, printing a warning on the first usage."} {"_id": "q_7395", "text": "Checks the validity of the input.\n\n In case of an invalid input throws ValueError."} {"_id": "q_7396", "text": "Helper method that returns the index of the string based on node's\n starting index"} {"_id": "q_7397", "text": "Returns the Longest Common Substring of Strings provided in stringIdxs.\n If stringIdxs is not provided, the LCS of all strings is returned.\n\n :param stringIdxs: Optional: List of indexes of strings."} {"_id": "q_7398", "text": "Helper method returns the starting indexes of strings in GST"} {"_id": "q_7399", "text": "Helper method, returns the edge label between a node and its parent"} {"_id": "q_7400", "text": "Generator of unique terminal symbols used for building the Generalized Suffix Tree.\n Unicode Private Use Area U+E000..U+F8FF is used to ensure that terminal symbols\n are not part of the input string."} {"_id": "q_7401", "text": "connect to the server"} {"_id": "q_7402", "text": "Read a response from the AGI and parse it.\n\n :return dict: The AGI response parsed into a dict."} {"_id": "q_7403", "text": "Parse AGI results using regular expressions.\n\n AGI Result examples::\n\n 100 result=0 Trying...\n\n 200 result=0\n\n 200 result=-1\n\n 200 result=132456\n\n 200 result= (timeout)\n\n 510 Invalid or unknown command\n\n 520-Invalid command syntax. 
Proper usage follows:\n int() argument must be a string, a bytes-like object or a number, not\n 'NoneType'\n\n HANGUP"} {"_id": "q_7404", "text": "Mostly used for unit testing. Allow to use a static uuid and reset\n all counter"} {"_id": "q_7405", "text": "Mostly used for debugging"} {"_id": "q_7406", "text": "Returns data from a package directory.\n 'path' should be an absolute path."} {"_id": "q_7407", "text": "Create a graph of constraints for both must- and cannot-links"} {"_id": "q_7408", "text": "Translates a regular Scikit-Learn estimator or pipeline to a PMML pipeline.\n\n\tParameters:\n\t----------\n\tobj: BaseEstimator\n\t\tThe object.\n\n\tactive_fields: list of strings, optional\n\t\tFeature names. If missing, \"x1\", \"x2\", .., \"xn\" are assumed.\n\n\ttarget_fields: list of strings, optional\n\t\tLabel name(s). If missing, \"y\" is assumed."} {"_id": "q_7409", "text": "Converts a fitted Scikit-Learn pipeline to PMML.\n\n\tParameters:\n\t----------\n\tpipeline: PMMLPipeline\n\t\tThe pipeline.\n\n\tpmml: string\n\t\tThe path to where the PMML document should be stored.\n\n\tuser_classpath: list of strings, optional\n\t\tThe paths to JAR files that provide custom Transformer, Selector and/or Estimator converter classes.\n\t\tThe JPMML-SkLearn classpath is constructed by appending user JAR files to package JAR files.\n\n\twith_repr: boolean, optional\n\t\tIf true, insert the string representation of pipeline into the PMML document.\n\n\tdebug: boolean, optional\n\t\tIf true, print information about the conversion process.\n\n\tjava_encoding: string, optional\n\t\tThe character encoding to use for decoding Java output and error byte streams."} {"_id": "q_7410", "text": "Returns an instance of the formset"} {"_id": "q_7411", "text": "If the formset is valid, save the associated models."} {"_id": "q_7412", "text": "Handles POST requests, instantiating a formset instance with the passed\n POST variables and then checked for validity."} {"_id": "q_7413", 
"text": "Overrides construct_formset to attach the model class as\n an attribute of the returned formset instance."} {"_id": "q_7414", "text": "Returns the inline formset instances"} {"_id": "q_7415", "text": "Handles GET requests and instantiates a blank version of the form and formsets."} {"_id": "q_7416", "text": "Handles POST requests, instantiating a form and formset instances with the passed\n POST variables and then checked for validity."} {"_id": "q_7417", "text": "If `inlines_names` has been defined, add each formset to the context under\n its corresponding entry in `inlines_names`"} {"_id": "q_7418", "text": "Returns the start date for a model instance"} {"_id": "q_7419", "text": "Returns an integer representing the first day of the week.\n\n 0 represents Monday, 6 represents Sunday."} {"_id": "q_7420", "text": "Returns a queryset of models for the month requested"} {"_id": "q_7421", "text": "Injects variables necessary for rendering the calendar into the context.\n\n Variables added are: `calendar`, `weekdays`, `month`, `next_month` and `previous_month`."} {"_id": "q_7422", "text": "Get primary key properties for a SQLAlchemy model.\n\n :param model: SQLAlchemy model class"} {"_id": "q_7423", "text": "Deserialize a serialized value to a model instance.\n\n If the parent schema is transient, create a new (transient) instance.\n Otherwise, attempt to find an existing instance in the database.\n :param value: The value to deserialize."} {"_id": "q_7424", "text": "Deserialize data to internal representation.\n\n :param session: Optional SQLAlchemy session.\n :param instance: Optional existing instance to modify.\n :param transient: Optional switch to allow transient instantiation."} {"_id": "q_7425", "text": "Deletes old stellar tables that are not used anymore"} {"_id": "q_7426", "text": "Takes a snapshot of the database"} {"_id": "q_7427", "text": "Returns a list of snapshots"} {"_id": "q_7428", "text": "Removes a snapshot"} {"_id": "q_7429", "text": 
"Renames a snapshot"} {"_id": "q_7430", "text": "Replaces a snapshot"} {"_id": "q_7431", "text": "Updates indexes after each epoch for shuffling"} {"_id": "q_7432", "text": "Defines the default function for cleaning text.\n\n This function operates over a list."} {"_id": "q_7433", "text": "Apply function to list of elements.\n\n Automatically determines the chunk size."} {"_id": "q_7434", "text": "Analyze document length statistics for padding strategy"} {"_id": "q_7435", "text": "Return a new Colorful object with the given color config."} {"_id": "q_7436", "text": "Parse the given rgb.txt file into a Python dict.\n\n See https://en.wikipedia.org/wiki/X11_color_names for more information\n\n :param str path: the path to the X11 rgb.txt file"} {"_id": "q_7437", "text": "Sanitze the given color palette so it can\n be safely used by Colorful.\n\n It will convert colors specified in hex RGB to\n a RGB channel triplet."} {"_id": "q_7438", "text": "Detect what color palettes are supported.\n It'll return a valid color mode to use\n with colorful.\n\n :param dict env: the environment dict like returned by ``os.envion``"} {"_id": "q_7439", "text": "Convert the given hex string to a\n valid RGB channel triplet."} {"_id": "q_7440", "text": "Check if the given hex value is a valid RGB color\n\n It should match the format: [0-9a-fA-F]\n and be of length 3 or 6."} {"_id": "q_7441", "text": "Translate the given color name to a valid\n ANSI escape code.\n\n :parma str colorname: the name of the color to resolve\n :parma str offset: the offset for the color code\n :param int colormode: the color mode to use. 
See ``translate_rgb_to_ansi_code``\n :param dict colorpalette: the color palette to use for the color name mapping\n\n :returns str: the color as ANSI escape code\n\n :raises ColorfulError: if the given color name is invalid"} {"_id": "q_7442", "text": "Resolve the given modifier name to a valid\n ANSI escape code.\n\n :param str modifiername: the name of the modifier to resolve\n :param int colormode: the color mode to use. See ``translate_rgb_to_ansi_code``\n\n :returns str: the ANSI escape code for the modifier\n\n :raises ColorfulError: if the given modifier name is invalid"} {"_id": "q_7443", "text": "Translate the given style to an ANSI escape code\n sequence.\n\n ``style`` examples are:\n\n * green\n * bold\n * red_on_black\n * bold_green\n * italic_yellow_on_cyan\n\n :param str style: the style to translate\n :param int colormode: the color mode to use. See ``translate_rgb_to_ansi_code``\n :param dict colorpalette: the color palette to use for the color name mapping"} {"_id": "q_7444", "text": "Style the given string according to the given\n ANSI style string.\n\n :param str string: the string to style\n :param tuple ansi_style: the styling string returned by ``translate_style``\n :param int colormode: the color mode to use. 
See ``translate_rgb_to_ansi_code``\n\n :returns: a string containing proper ANSI sequence"} {"_id": "q_7445", "text": "Use a predefined style as color palette\n\n :param str style_name: the name of the style"} {"_id": "q_7446", "text": "Format the given string with the given ``args`` and ``kwargs``.\n The string can contain references to ``c`` which is provided by\n this colorful object.\n\n :param str string: the string to format"} {"_id": "q_7447", "text": "Get data from the USB device."} {"_id": "q_7448", "text": "Get device humidity reading.\n\n Params:\n - sensors: optional list of sensors to get a reading for, examples:\n [0,] - get reading for sensor 0\n [0, 1,] - get reading for sensors 0 and 1\n None - get readings for all sensors"} {"_id": "q_7449", "text": "Read data from device."} {"_id": "q_7450", "text": "Update, rolling back on failure."} {"_id": "q_7451", "text": "Create a new temporary file and write some initial text to it.\n\n :param text: the text to write to the temp file\n :type text: str\n :returns: the file name of the newly created temp file\n :rtype: str"} {"_id": "q_7452", "text": "Get a list of contacts from one or more address books.\n\n :param address_books: the address books to search\n :type address_books: list(address_book.AddressBook)\n :param query: a search query to select contacts\n :type query: str\n :param method: the search method, one of \"all\", \"name\" or \"uid\"\n :type method: str\n :param reverse: reverse the order of the returned contacts\n :type reverse: bool\n :param group: group results by address book\n :type group: bool\n :param sort: the field to use for sorting, one of \"first_name\", \"last_name\"\n :type sort: str\n :returns: contacts from the address_books that match the query\n :rtype: list(CarddavObject)"} {"_id": "q_7453", "text": "Merge the parsed arguments from argparse into the config object.\n\n :param args: the parsed command line arguments\n :type args: argparse.Namespace\n :param config: the parsed 
config file\n :type config: config.Config\n :returns: the merged config object\n :rtype: config.Config"} {"_id": "q_7454", "text": "Load all address books with the given names from the config.\n\n :param names: the address books to load\n :type names: list(str)\n :param config: the config instance to use when looking up address books\n :type config: config.Config\n :param search_queries: a mapping of address book names to search queries\n :type search_queries: dict\n :yields: the loaded address books\n :ytype: addressbook.AddressBook"} {"_id": "q_7455", "text": "Prepare the search query string from the given command line args.\n\n Each address book can get a search query string to filter vcards before\n loading them. Depending on whether the address book is used for\n source or target searches, different regexes have to be combined into one\n search string.\n\n :param args: the parsed command line\n :type args: argparse.Namespace\n :returns: a dict mapping abook names to their loading queries; if the query\n is None it means that all cards should be loaded\n :rtype: dict(str:str or None)"} {"_id": "q_7456", "text": "Print a phone application friendly contact table.\n\n :param search_terms: used as search term to filter the contacts before\n printing\n :type search_terms: str\n :param vcard_list: the vcards to search for matching entries which should\n be printed\n :type vcard_list: list of carddav_object.CarddavObject\n :param parsable: machine readable output: columns divided by tabulator (\\t)\n :type parsable: bool\n :returns: None\n :rtype: None"} {"_id": "q_7457", "text": "Print a user friendly contacts table.\n\n :param vcard_list: the vcards to print\n :type vcard_list: list of carddav_object.CarddavObject\n :param parsable: machine readable output: columns divided by tabulator (\\t)\n :type parsable: bool\n :returns: None\n :rtype: None"} {"_id": "q_7458", "text": "Modify a contact in an external editor.\n\n :param selected_vcard: the contact to 
modify\n :type selected_vcard: carddav_object.CarddavObject\n :param input_from_stdin_or_file: new data from stdin (or a file) that\n should be incorporated into the contact; this should be a yaml\n formatted string\n :type input_from_stdin_or_file: str\n :param open_editor: whether to open the new contact in the editor after\n creation\n :type open_editor: bool\n :returns: None\n :rtype: None"} {"_id": "q_7459", "text": "Remove a contact from the addressbook.\n\n :param selected_vcard: the contact to delete\n :type selected_vcard: carddav_object.CarddavObject\n :param force: delete without confirmation\n :type force: bool\n :returns: None\n :rtype: None"} {"_id": "q_7460", "text": "Open the vcard file for a contact in an external editor.\n\n :param selected_vcard: the contact to edit\n :type selected_vcard: carddav_object.CarddavObject\n :param editor: the editor command to use\n :type editor: str\n :returns: None\n :rtype: None"} {"_id": "q_7461", "text": "Merge two contacts into one.\n\n :param vcard_list: the vcards from which to choose contacts for merging\n :type vcard_list: list of carddav_object.CarddavObject\n :param selected_address_books: the addressbooks to use to find the target\n contact\n :type selected_address_books: list(addressbook.AddressBook)\n :param search_terms: the search terms to find the target contact\n :type search_terms: str\n :param target_uid: the uid of the target contact or empty\n :type target_uid: str\n :returns: None\n :rtype: None"} {"_id": "q_7462", "text": "Find the name of the action for the supplied alias. If no action is\n associated with the given alias, None is returned.\n\n :param alias: the alias to look up\n :type alias: str\n :returns: the name of the corresponding action or None\n :rtype: str or NoneType"} {"_id": "q_7463", "text": "Convert the named field to bool.\n\n The current value should be one of the strings \"yes\" or \"no\". It will\n be replaced with its boolean counterpart. 
If the field is not present\n in the config object, the default value is used.\n\n :param config: the config section where to set the option\n :type config: configobj.ConfigObj\n :param name: the name of the option to convert\n :type name: str\n :param default: the default value to use if the option was not\n previously set\n :type default: bool\n :returns: None"} {"_id": "q_7464", "text": "Use this if you want to create a new contact from user input."} {"_id": "q_7465", "text": "Get some part of the \"N\" entry in the vCard as a list\n\n :param part: the name to get e.g. \"prefix\" or \"given\"\n :type part: str\n :returns: a list of entries for this name part\n :rtype: list(str)"} {"_id": "q_7466", "text": "categories variable must be a list"} {"_id": "q_7467", "text": "Parse type value of phone numbers, email and post addresses.\n\n :param types: list of type values\n :type types: list(str)\n :param value: the corresponding label, required for more verbose\n exceptions\n :type value: str\n :param supported_types: all allowed standard types\n :type supported_types: list(str)\n :returns: tuple of standard and custom types and pref integer\n :rtype: tuple(list(str), list(str), int)"} {"_id": "q_7468", "text": "Converts a list to a string recursively so that nested lists are supported\n\n :param input: a list of strings and lists of strings (and so on recursively)\n :type input: list\n :param delimiter: the delimiter to use when joining the items\n :type delimiter: str\n :returns: the recursively joined list\n :rtype: str"} {"_id": "q_7469", "text": "Convert string to date object.\n\n :param input: the date string to parse\n :type input: str\n :returns: the parsed datetime object\n :rtype: datetime.datetime"} {"_id": "q_7470", "text": "Calculate the minimum length of initial substrings of uid1 and uid2\n for them to be different.\n\n :param uid1: first uid to compare\n :type uid1: str\n :param uid2: second uid to compare\n :type uid2: str\n :returns: the length of the 
shortest unequal initial substrings\n :rtype: int"} {"_id": "q_7471", "text": "Search in all fields for contacts matching query.\n\n :param query: the query to search for\n :type query: str\n :yields: all found contacts\n :rtype: generator(carddav_object.CarddavObject)"} {"_id": "q_7472", "text": "Search in the name field for contacts matching query.\n\n :param query: the query to search for\n :type query: str\n :yields: all found contacts\n :rtype: generator(carddav_object.CarddavObject)"} {"_id": "q_7473", "text": "Search for contacts with a matching uid.\n\n :param query: the query to search for\n :type query: str\n :yields: all found contacts\n :rtype: generator(carddav_object.CarddavObject)"} {"_id": "q_7474", "text": "Search this address book for contacts matching the query.\n\n The method can be one of \"all\", \"name\" and \"uid\". The backend for this\n address book might be load()ed if needed.\n\n :param query: the query to search for\n :type query: str\n :param method: the type of fields to use when searching\n :type method: str\n :returns: all found contacts\n :rtype: list(carddav_object.CarddavObject)"} {"_id": "q_7475", "text": "Create a dictionary of shortened UIDs for all contacts.\n\n All arguments are only used if the address book is not yet initialized\n and will just be handed to self.load().\n\n :param query: see self.load()\n :type query: str\n :returns: the contacts mapped by the shortest unique prefix of their UID\n :rtype: dict(str: CarddavObject)"} {"_id": "q_7476", "text": "Get the shortened UID for the given UID.\n\n :param uid: the full UID to shorten\n :type uid: str\n :returns: the shortened uid or the empty string\n :rtype: str"} {"_id": "q_7477", "text": "Load all vcard files in this address book from disk.\n\n If a search string is given, only files whose contents match it will\n be loaded.\n\n :param query: a regular expression to limit the results\n :type query: str\n :param search_in_source_files: apply search regexp directly on the 
.vcf files to speed up parsing (less accurate)\n :type search_in_source_files: bool\n :returns: the number of successfully loaded cards and the number of\n errors\n :rtype: int, int\n :throws: AddressBookParseError"} {"_id": "q_7478", "text": "Create the JSON for configuring arthur to collect data\n\n https://github.com/grimoirelab/arthur#adding-tasks\n Sample for git:\n\n {\n \"tasks\": [\n {\n \"task_id\": \"arthur.git\",\n \"backend\": \"git\",\n \"backend_args\": {\n \"gitpath\": \"/tmp/arthur_git/\",\n \"uri\": \"https://github.com/grimoirelab/arthur.git\"\n },\n \"category\": \"commit\",\n \"archive_args\": {\n \"archive_path\": '/tmp/test_archives',\n \"fetch_from_archive\": false,\n \"archive_after\": None\n },\n \"scheduler_args\": {\n \"delay\": 10\n }\n }\n ]\n }"} {"_id": "q_7479", "text": "Return the GitHub SHA for a file in the repository"} {"_id": "q_7480", "text": "Execute the merge identities phase\n\n :param config: a Mordred config object"} {"_id": "q_7481", "text": "Execute the panels phase\n\n :param config: a Mordred config object"} {"_id": "q_7482", "text": "Config logging level output"} {"_id": "q_7483", "text": "Get params to execute the micro-mordred"} {"_id": "q_7484", "text": "Upload a panel to Elasticsearch if it does not exist yet.\n\n If a list of data sources is specified, upload only those\n elements (visualizations, searches) that match that data source.\n\n :param panel_file: file name of panel (dashboard) to upload\n :param data_sources: list of data sources\n :param strict: only upload a dashboard if it is newer than the one already existing"} {"_id": "q_7485", "text": "Upload to Kibiter the title for the dashboard.\n\n The title is shown on top of the dashboard menu, and is usually\n the name of the project being dashboarded.\n This is done only for Kibiter 6.x.\n\n :param kibiter_major: major version of kibiter"} {"_id": "q_7486", "text": "Create the menu definition to access the panels in a dashboard.\n\n :param menu: 
dashboard menu to upload\n :param kibiter_major: major version of kibiter"} {"_id": "q_7487", "text": "Remove existing menu for dashboard, if any.\n\n Usually, we remove the menu before creating a new one.\n\n :param kibiter_major: major version of kibiter"} {"_id": "q_7488", "text": "Get the menu entries from the panel definition"} {"_id": "q_7489", "text": "Order the dashboard menu"} {"_id": "q_7490", "text": "Compose projects.json only for mbox, but using the mailing_lists lists\n\n change: 'https://dev.eclipse.org/mailman/listinfo/emft-dev'\n to: 'emfg-dev /home/bitergia/mboxes/emft-dev.mbox/emft-dev.mbox\n\n :param projects: projects.json\n :return: projects.json with mbox"} {"_id": "q_7491", "text": "Compose projects.json for git\n\n We need to replace '/c/' by '/gitroot/' for instance\n\n change: 'http://git.eclipse.org/c/xwt/org.eclipse.xwt.git'\n to: 'http://git.eclipse.org/gitroot/xwt/org.eclipse.xwt.git'\n\n :param projects: projects.json\n :param data: eclipse JSON\n :return: projects.json with git"} {"_id": "q_7492", "text": "Compose projects.json for mailing lists\n\n The upstream data has two different keys for mailing lists: 'mailing_lists' and 'dev_list'\n The key 'mailing_lists' is an array with mailing lists\n The key 'dev_list' is a dict with only one mailing list\n\n :param projects: projects.json\n :param data: eclipse JSON\n :return: projects.json with mailing_lists"} {"_id": "q_7493", "text": "Compose projects.json for github\n\n :param projects: projects.json\n :param data: eclipse JSON\n :return: projects.json with github"} {"_id": "q_7494", "text": "Compose the projects JSON file only with the project names\n\n :param projects: projects.json\n :param data: eclipse JSON with the origin format\n :return: projects.json with titles"} {"_id": "q_7495", "text": "Compose projects.json with all data sources\n\n :param projects: projects.json\n :param data: eclipse JSON\n :return: projects.json with all data sources"} {"_id": "q_7496", "text": "Execute 
autorefresh for areas of code study if configured"} {"_id": "q_7497", "text": "Execute the studies configured for the current backend"} {"_id": "q_7498", "text": "Retain the identities in SortingHat based on the `retention_time`\n value declared in the setup.cfg.\n\n :param retention_time: maximum number of minutes wrt the current date to retain the SortingHat data"} {"_id": "q_7499", "text": "return list with the repositories for a backend_section"} {"_id": "q_7500", "text": "Convert from eclipse projects format to grimoire projects json format"} {"_id": "q_7501", "text": "Change a param in the config"} {"_id": "q_7502", "text": "Get Elasticsearch version.\n\n Get the version of Elasticsearch. This is useful because\n Elasticsearch and Kibiter are paired (same major version for 5, 6).\n\n :param url: Elasticsearch url hosting Kibiter indices\n :returns: major version, as string"} {"_id": "q_7503", "text": "Start a task manager per backend to complete the tasks.\n\n :param task_cls: list of task classes to be executed\n :param big_delay: seconds before global tasks are executed, should be days usually\n :param small_delay: seconds before backend tasks are executed, should be minutes\n :param wait_for_threads: boolean to set when threads are infinite or\n should be synchronized in a meeting point"} {"_id": "q_7504", "text": "Tasks that should be done just one time"} {"_id": "q_7505", "text": "Validates the provided config to make sure all the required fields are \n there."} {"_id": "q_7506", "text": "Customize the message format based on the log level."} {"_id": "q_7507", "text": "Initialize the dictionary of architectures for assembling via keystone"} {"_id": "q_7508", "text": "Sys.out replacer, by default with stderr.\n\n Use it like this:\n with replace_print_with(fileobj):\n print \"hello\" # writes to the file\n print \"done\" # prints to stdout\n\n Args:\n fileobj: a file object to replace stdout.\n\n Yields:\n The printer."} {"_id": "q_7509", "text": 
"Compact a list of integers into a comma-separated string of intervals.\n\n Args:\n value_list: A list of sortable integers such as a list of numbers\n\n Returns:\n A compact string representation, such as \"1-5,8,12-15\""} {"_id": "q_7510", "text": "Get a storage client using the provided credentials or defaults."} {"_id": "q_7511", "text": "Load context from a text file in gcs.\n\n Args:\n gcs_file_path: The target file path; should have the 'gs://' prefix.\n credentials: Optional credential to be used to load the file from gcs.\n\n Returns:\n The content of the text file as a string."} {"_id": "q_7512", "text": "Check whether the file exists, in GCS.\n\n Args:\n gcs_file_path: The target file path; should have the 'gs://' prefix.\n credentials: Optional credential to be used to load the file from gcs.\n\n Returns:\n True if the file's there."} {"_id": "q_7513", "text": "True iff an object exists matching the input GCS pattern.\n\n The GCS pattern must be a full object reference or a \"simple pattern\" that\n conforms to the dsub input and output parameter restrictions:\n\n * No support for **, ? wildcards or [] character ranges\n * Wildcards may only appear in the file name\n\n Args:\n file_pattern: eg. 
'gs://foo/ba*'\n credentials: Optional credential to be used to load the file from gcs.\n\n Raises:\n ValueError: if file_pattern breaks the rules.\n\n Returns:\n True iff a file exists that matches that pattern."} {"_id": "q_7514", "text": "True if each output contains at least one file or no output specified."} {"_id": "q_7515", "text": "Return a dict object representing a pipeline input argument."} {"_id": "q_7516", "text": "Return a multi-line string of the full pipeline docker command."} {"_id": "q_7517", "text": "Builds pipeline args for execution.\n\n Args:\n project: string name of project.\n script: Body of the script to execute.\n job_params: dictionary of values for labels, envs, inputs, and outputs\n for this job.\n task_params: dictionary of values for labels, envs, inputs, and outputs\n for this task.\n reserved_labels: dictionary of reserved labels (e.g. task-id,\n task-attempt)\n preemptible: use a preemptible VM for the job\n logging_uri: path for job logging output.\n scopes: list of scopes.\n keep_alive: Seconds to keep VM alive on failure\n\n Returns:\n A nested dictionary with one entry under the key pipelineArgs containing\n the pipeline arguments."} {"_id": "q_7518", "text": "Convert the integer UTC time value into a local datetime."} {"_id": "q_7519", "text": "Returns a Pipeline object for the job."} {"_id": "q_7520", "text": "Kills the operations associated with the specified job or job.task.\n\n Args:\n user_ids: List of user ids who \"own\" the job(s) to cancel.\n job_ids: List of job_ids to cancel.\n task_ids: List of task-ids to cancel.\n labels: List of LabelParam, each must match the job(s) to be canceled.\n create_time_min: a timezone-aware datetime value for the earliest create\n time of a task, inclusive.\n create_time_max: a timezone-aware datetime value for the most recent\n create time of a task, inclusive.\n\n Returns:\n A list of tasks canceled and a list of error messages."} {"_id": "q_7521", "text": "Returns the most 
relevant status string and last updated date string.\n\n This string is meant for display only.\n\n Returns:\n A printable status string and date string."} {"_id": "q_7522", "text": "Create a task name from a job-id, task-id, and task-attempt.\n\n Task names are used internally by dsub as well as by the docker task runner.\n The name is formatted as \".[.task-attempt]\". Task names\n follow formatting conventions allowing them to be safely used as a docker\n name.\n\n Args:\n job_id: (str) the job ID.\n task_id: (str) the task ID.\n task_attempt: (int) the task attempt.\n\n Returns:\n a task name string."} {"_id": "q_7523", "text": "Rewrite string so that all characters are valid in a docker name suffix."} {"_id": "q_7524", "text": "Return a tuple for sorting 'most recent first'."} {"_id": "q_7525", "text": "Determine if the provided time is within the range, inclusive."} {"_id": "q_7526", "text": "Return a Task object with this task's info."} {"_id": "q_7527", "text": "Returns a command to delocalize logs.\n\n Args:\n logging_path: location of log files.\n user_project: name of the project to be billed for the request.\n\n Returns:\n eg. 
'gs://bucket/path/myfile' or 'gs://bucket/script-foobar-12'"} {"_id": "q_7528", "text": "The local dir for staging files for that particular task."} {"_id": "q_7529", "text": "Returns a command that will stage recursive inputs."} {"_id": "q_7530", "text": "Returns a directory or file path to be the target for \"gsutil cp\".\n\n If the filename contains a wildcard, then the target path must\n be a directory in order to ensure consistency whether the source pattern\n contains one or multiple files.\n\n\n Args:\n local_file_path: A full path terminating in a file or a file wildcard.\n\n Returns:\n The path to use as the \"gsutil cp\" target."} {"_id": "q_7531", "text": "Returns a command that will stage inputs."} {"_id": "q_7532", "text": "Get the dsub version out of the _dsub_version.py source file.\n\n Setup.py should not import dsub version from dsub directly since ambiguity in\n import order could lead to an old version of dsub setting the version number.\n Parsing the file directly is simpler than using import tools (whose interface\n varies between python 2.7, 3.4, and 3.5).\n\n Returns:\n string of dsub version.\n\n Raises:\n ValueError: if the version is not found."} {"_id": "q_7533", "text": "Return a dict with variables for the 'prepare' action."} {"_id": "q_7534", "text": "Return a dict with variables for the 'localization' action."} {"_id": "q_7535", "text": "Return a dict with variables for the 'delocalization' action."} {"_id": "q_7536", "text": "Returns a dictionary for the user container environment."} {"_id": "q_7537", "text": "Returns the status of this operation.\n\n Raises:\n ValueError: if the operation status cannot be determined.\n\n Returns:\n A printable status string (RUNNING, SUCCESS, CANCELED or FAILURE)."} {"_id": "q_7538", "text": "Returns the most relevant status string and failed action.\n\n This string is meant for display only.\n\n Returns:\n A printable status string and name of failed action (if any)."} {"_id": "q_7539", "text": 
"Rounds ram up to the nearest multiple of _MEMORY_MULTIPLE."} {"_id": "q_7540", "text": "Returns a custom machine type string."} {"_id": "q_7541", "text": "Build a VirtualMachine object for a Pipeline request.\n\n Args:\n network (dict): Network details for the pipeline to run in.\n machine_type (str): GCE Machine Type string for the pipeline.\n preemptible (bool): Use a preemptible VM for the job.\n service_account (dict): Service account configuration for the VM.\n boot_disk_size_gb (int): Boot disk size in GB.\n disks (list[dict]): List of disks to mount.\n accelerators (list[dict]): List of accelerators to attach to the VM.\n labels (dict[string, string]): Labels for the VM.\n cpu_platform (str): The CPU platform to request.\n nvidia_driver_version (str): The NVIDIA driver version to use when attaching\n an NVIDIA GPU accelerator.\n\n Returns:\n An object representing a VirtualMachine."} {"_id": "q_7542", "text": "Build an Action object for a Pipeline request.\n\n Args:\n name (str): An optional name for the container.\n image_uri (str): The URI to pull the container image from.\n commands (List[str]): commands and arguments to run inside the container.\n entrypoint (str): overrides the ENTRYPOINT specified in the container.\n environment (dict[str,str]): The environment to pass into the container.\n pid_namespace (str): The PID namespace to run the action inside.\n flags (str): Flags that control the execution of this action.\n port_mappings (dict[int, int]): A map of container to host port mappings for\n this container.\n mounts (List): A list of mounts to make available to the action.\n labels (dict[str]): Labels to associate with the action.\n\n Returns:\n An object representing an Action resource."} {"_id": "q_7543", "text": "Returns a provider for job submission requests."} {"_id": "q_7544", "text": "Add provider required arguments epilog message, parse, and validate."} {"_id": "q_7545", "text": "A string with the arguments to point dstat to the same 
provider+project."} {"_id": "q_7546", "text": "Returns a URI with placeholders replaced by metadata values."} {"_id": "q_7547", "text": "Inserts task metadata into the logging URI.\n\n The core behavior is inspired by the Google Pipelines API:\n (1) If the uri ends in \".log\", then that is the logging path.\n (2) Otherwise, the uri is treated as \"directory\" for logs and a filename\n needs to be automatically generated.\n\n For (1), if the job is a --tasks job, then the {task-id} is inserted\n before \".log\".\n\n For (2), the file name generated is {job-id}, or for --tasks jobs, it is\n {job-id}.{task-id}.\n\n In both cases .{task-attempt} is inserted before .log for --retries jobs.\n\n In addition, full task metadata substitution is supported. The URI\n may include substitution strings such as\n \"{job-id}\", \"{task-id}\", \"{job-name}\", \"{user-id}\", and \"{task-attempt}\".\n\n Args:\n uri: User-specified logging URI which may contain substitution fields.\n job_metadata: job-global metadata.\n task_metadata: task-specific metadata.\n\n Returns:\n The logging_uri formatted as described above."} {"_id": "q_7548", "text": "Validates google-v2 arguments."} {"_id": "q_7549", "text": "Extract job-global resource requirements from input args.\n\n Args:\n args: parsed command-line arguments\n\n Returns:\n Resources object containing the requested resources for the job"} {"_id": "q_7550", "text": "Print status info as we wait for those jobs.\n\n Blocks until either all of the listed jobs succeed,\n or one of them fails.\n\n Args:\n provider: job service provider\n job_ids: a set of job IDs (string) to wait for\n poll_interval: integer seconds to wait between iterations\n stop_on_failure: whether to stop waiting if one of the tasks fails.\n\n Returns:\n Empty list if there was no error,\n a list of error messages from the failed tasks otherwise."} {"_id": "q_7551", "text": "Wait for job and retry any tasks that fail.\n\n Stops retrying an individual task when: it 
succeeds, is canceled, or has been\n retried \"retries\" times.\n\n This function exits when there are no tasks running and there are no tasks\n eligible to be retried.\n\n Args:\n provider: job service provider\n job_id: a single job ID (string) to wait for\n poll_interval: integer seconds to wait between iterations\n retries: number of retries\n job_descriptor: job descriptor used to originally submit job\n\n Returns:\n Empty list if there was no error,\n a list containing an error message from a failed task otherwise."} {"_id": "q_7552", "text": "A list with, for each job, its dominant task.\n\n The dominant task is the one that exemplifies its job's\n status. It is either:\n - the first (FAILURE or CANCELED) task, or if none\n - the first RUNNING task, or if none\n - the first SUCCESS task.\n\n Args:\n tasks: a list of tasks to consider\n\n Returns:\n A list with, for each job, its dominant task."} {"_id": "q_7553", "text": "Waits until any of the listed jobs is not running.\n\n In particular, if any of the jobs sees one of its tasks fail,\n we count the whole job as failing (but do not terminate the remaining\n tasks ourselves).\n\n Args:\n provider: job service provider\n job_ids: a list of job IDs (string) to wait for\n poll_interval: integer seconds to wait between iterations\n\n Returns:\n A set of the jobIDs with still at least one running task."} {"_id": "q_7554", "text": "Validates that job and task argument names do not overlap."} {"_id": "q_7555", "text": "Helper function to return an appropriate set of mount parameters."} {"_id": "q_7556", "text": "Convenience function simplifies construction of the logging uri."} {"_id": "q_7557", "text": "Split a string into a pair, which can have one empty value.\n\n Args:\n pair_string: The string to be split.\n separator: The separator to be used for splitting.\n nullable_idx: The location to be set to null if the separator is not in the\n input string. 
Should be either 0 or 1.\n\n Returns:\n A list containing the pair.\n\n Raises:\n IndexError: If nullable_idx is not 0 or 1."} {"_id": "q_7558", "text": "Parses task parameters from a TSV.\n\n Args:\n tasks: Dict containing the path to a TSV file and task numbers to run\n variables, input, and output parameters as column headings. Subsequent\n lines specify parameter values, one row per job.\n retries: Number of retries allowed.\n input_file_param_util: Utility for producing InputFileParam objects.\n output_file_param_util: Utility for producing OutputFileParam objects.\n\n Returns:\n task_descriptors: an array of records, each containing the task-id,\n task-attempt, 'envs', 'inputs', 'outputs', 'labels' that defines the set of\n parameters for each task of the job.\n\n Raises:\n ValueError: If no job records were provided"} {"_id": "q_7559", "text": "Parse flags of key=value pairs and return a list of argclass.\n\n For pair variables, we need to:\n * split the input into name=value pairs (value optional)\n * Create the EnvParam object\n\n Args:\n labels: list of 'key' or 'key=value' strings.\n argclass: Container class for args, must instantiate with argclass(k, v).\n\n Returns:\n list of argclass objects."} {"_id": "q_7560", "text": "Convert the timeout duration to seconds.\n\n The value must be of the form \"\" where supported\n units are s, m, h, d, w (seconds, minutes, hours, days, weeks).\n\n Args:\n interval: A \"\" string.\n valid_units: A list of supported units.\n\n Returns:\n A string of the form \"s\" or None if timeout is empty."} {"_id": "q_7561", "text": "Produce a default variable name if none is specified."} {"_id": "q_7562", "text": "Find the file provider for a URI."} {"_id": "q_7563", "text": "Do basic validation of the uri, return the path and filename."} {"_id": "q_7564", "text": "Return a valid docker_path from a Google Persistent Disk url."} {"_id": "q_7565", "text": "Return a MountParam given a GCS bucket, disk image or local path."} 
{"_id": "q_7566", "text": "Turn the specified name and value into a valid Google label."} {"_id": "q_7567", "text": "For each task, ensure that each task param entry is not None."} {"_id": "q_7568", "text": "Return a new dict with any empty items removed.\n\n Note that this is not a deep check. If d contains a dictionary which\n itself contains empty items, those are never checked.\n\n This method exists to make to_serializable() functions cleaner.\n We could revisit this some day, but for now, the serialized objects are\n stripped of empty values to keep the output YAML more compact.\n\n Args:\n d: a dictionary\n required: list of required keys (for example, TaskDescriptors always emit\n the \"task-id\", even if None)\n\n Returns:\n A dictionary with empty items removed."} {"_id": "q_7569", "text": "Converts a task-id to the numeric task-id.\n\n Args:\n task_id: task-id in either task-n or n format\n\n Returns:\n n"} {"_id": "q_7570", "text": "Raise ValueError if the label is invalid."} {"_id": "q_7571", "text": "Populate a JobDescriptor from the local provider's original meta.yaml.\n\n The local job provider had the first incarnation of a YAML file for each\n task. 
That idea was extended here in the JobDescriptor and the local\n provider adopted the JobDescriptor.to_yaml() call to write its meta.yaml.\n\n The JobDescriptor.from_yaml() detects if it receives a local provider's\n \"v0\" meta.yaml and calls this function.\n\n Args:\n job: an object produced from decoding meta.yaml.\n\n Returns:\n A JobDescriptor populated as best we can from the old meta.yaml."} {"_id": "q_7572", "text": "Populate and return a JobDescriptor from a YAML string."} {"_id": "q_7573", "text": "Returns the task_descriptor corresponding to task_id."} {"_id": "q_7574", "text": "Return a dictionary of environment variables for the user container."} {"_id": "q_7575", "text": "Returns a dict combining the field for job and task params."} {"_id": "q_7576", "text": "Kill jobs or job tasks.\n\n This function separates ddel logic from flag parsing and user output. Users\n of ddel who intend to access the data programmatically should use this.\n\n Args:\n provider: an instantiated dsub provider.\n user_ids: a set of user ids who \"own\" the job(s) to delete.\n job_ids: a set of job ids to delete.\n task_ids: a set of task ids to delete.\n labels: a set of LabelParam, each must match the job(s) to be cancelled.\n create_time_min: a timezone-aware datetime value for the earliest create\n time of a task, inclusive.\n create_time_max: a timezone-aware datetime value for the most recent create\n time of a task, inclusive.\n\n Returns:\n list of job ids which were deleted."} {"_id": "q_7577", "text": "Return the value for the specified action."} {"_id": "q_7578", "text": "Return the environment for the operation."} {"_id": "q_7579", "text": "Return the image for the operation."} {"_id": "q_7580", "text": "Return all events of a particular type."} {"_id": "q_7581", "text": "Generate formatted jobs individually, in order of create-time.\n\n Args:\n provider: an instantiated dsub provider.\n statuses: a set of status strings that eligible jobs may match.\n user_ids: a 
set of user strings that eligible jobs may match.\n job_ids: a set of job-id strings eligible jobs may match.\n job_names: a set of job-name strings eligible jobs may match.\n task_ids: a set of task-id strings eligible tasks may match.\n task_attempts: a set of task-attempt strings eligible tasks may match.\n labels: set of LabelParam that all tasks must match.\n create_time_min: a timezone-aware datetime value for the earliest create\n time of a task, inclusive.\n create_time_max: a timezone-aware datetime value for the most recent create\n time of a task, inclusive.\n max_tasks: (int) maximum number of tasks to return per dstat job lookup.\n page_size: the page size to use for each query to the backend. May be\n ignored by some provider implementations.\n summary_output: (bool) summarize the job list.\n\n Yields:\n Individual task dictionaries with associated metadata"} {"_id": "q_7582", "text": "Returns a list of zones based on any wildcard input.\n\n This function is intended to provide an easy method for producing a list\n of desired zones for a pipeline to run in.\n\n The Pipelines API default zone list is \"any zone\". 
The problem with\n \"any zone\" is that it can lead to incurring Cloud Storage egress charges\n if the GCE zone selected is in a different region than the GCS bucket.\n See https://cloud.google.com/storage/pricing#network-egress.\n\n A user with a multi-region US bucket would want pipelines to run in\n a \"us-*\" zone.\n A user with a regional bucket in the US would want to restrict pipelines to\n run in a zone in that region.\n\n Rarely does the specific zone matter for a pipeline.\n\n This function allows for a simple short-hand such as:\n [ \"us-*\" ]\n [ \"us-central1-*\" ]\n These examples will expand out to the full list of US and us-central1 zones\n respectively.\n\n Args:\n input_list: list of zone names/patterns\n\n Returns:\n A list of zones, with any wildcard zone specifications expanded."} {"_id": "q_7583", "text": "Converts a datestamp from RFC3339 UTC to a datetime.\n\n Args:\n rfc3339_utc_string: a datetime string in RFC3339 UTC \"Zulu\" format\n\n Returns:\n A datetime."} {"_id": "q_7584", "text": "Returns the job-id or job-id.task-id for the operation."} {"_id": "q_7585", "text": "Cancel a batch of operations.\n\n Args:\n batch_fn: API-specific batch function.\n cancel_fn: API-specific cancel function.\n ops: A list of operations to cancel.\n\n Returns:\n A list of operations canceled and a list of error messages."} {"_id": "q_7586", "text": "Specific check for auth error codes.\n\n Return True if we should retry.\n\n False otherwise.\n Args:\n exception: An exception to test for transience.\n\n Returns:\n True if we should retry. 
False otherwise."} {"_id": "q_7587", "text": "Configures genomics API client.\n\n Args:\n api_name: Name of the Google API (for example: \"genomics\")\n api_version: Version of the API (for example: \"v2alpha1\")\n credentials: Credentials to be used for the gcloud API calls.\n\n Returns:\n A configured Google Genomics API client with appropriate credentials."} {"_id": "q_7588", "text": "Executes operation.\n\n Args:\n api: The base API object\n\n Returns:\n A response body object"} {"_id": "q_7589", "text": "Returns a type from a snippet of Python source. Should normally be\n something just like 'str' or 'Object'.\n\n arg_type the source to be evaluated\n T the default type\n arg context of where this type was extracted\n sig context from where the arg was extracted\n\n Returns a type or a Type"} {"_id": "q_7590", "text": "Returns a jsonified response with the specified HTTP status code.\n\n The positional and keyword arguments are passed directly to the\n :func:`flask.jsonify` function which creates the response."} {"_id": "q_7591", "text": "Performs the actual sending action and returns the result"} {"_id": "q_7592", "text": "Return the Exception data in a format for JSON-RPC"} {"_id": "q_7593", "text": "An `inspect.getargspec` with a relaxed sanity check to support Cython.\n\n Motivation:\n\n A Cython-compiled function is *not* an instance of Python's\n types.FunctionType. That is the sanity check the standard Py2\n library uses in `inspect.getargspec()`. So, an exception is raised\n when calling `argh.dispatch_command(cythonCompiledFunc)`. However,\n the CyFunctions do have perfectly usable `.func_code` and\n `.func_defaults` which is all `inspect.getargspec` needs.\n\n This function just copies `inspect.getargspec()` from the standard\n library but relaxes the test to a more duck-typing one of having\n both `.func_code` and `.func_defaults` attributes."} {"_id": "q_7594", "text": "Prompts user for input. 
Correctly handles prompt message encoding."} {"_id": "q_7595", "text": "Encodes given value so it can be written to given file object.\n\n Value may be Unicode, binary string or any other data type.\n\n The exact behaviour depends on the Python version:\n\n Python 3.x\n\n `sys.stdout` is a `_io.TextIOWrapper` instance that accepts `str`\n (unicode) and breaks on `bytes`.\n\n It is OK to simply assume that everything is Unicode unless special\n handling is introduced in the client code.\n\n Thus, no additional processing is performed.\n\n Python 2.x\n\n `sys.stdout` is a file-like object that accepts `str` (bytes)\n and breaks when `unicode` is passed to `sys.stdout.write()`.\n\n We can expect both Unicode and bytes. They need to be encoded so as\n to match the file object encoding.\n\n The output is binary if the object doesn't explicitly require Unicode."} {"_id": "q_7596", "text": "Adds types, actions, etc. to given argument specification.\n For example, ``default=3`` implies ``type=int``.\n\n :param arg: a :class:`argh.utils.Arg` instance"} {"_id": "q_7597", "text": "Declares an argument for given function. Does not register the function\n anywhere, nor does it modify the function in any way.\n\n The signature of the decorator matches that of\n :meth:`argparse.ArgumentParser.add_argument`, only some keywords are not\n required if they can be easily guessed (e.g. you don't have to specify type\n or action when an `int` or `bool` default value is supplied).\n\n Typical use cases:\n\n - In combination with :func:`expects_obj` (which is not recommended);\n - in combination with ordinary function signatures to add details that\n cannot be expressed with that syntax (e.g. 
help message).\n\n Usage::\n\n from argh import arg\n\n @arg('path', help='path to the file to load')\n @arg('--format', choices=['yaml','json'])\n @arg('-v', '--verbosity', choices=range(0,3), default=2)\n def load(path, something=None, format='json', dry_run=False, verbosity=1):\n loaders = {'json': json.load, 'yaml': yaml.load}\n loader = loaders[args.format]\n data = loader(args.path)\n if not args.dry_run:\n if verbosity < 1:\n print('saving to the database')\n put_to_database(data)\n\n In this example:\n\n - `path` declaration is extended with `help`;\n - `format` declaration is extended with `choices`;\n - `dry_run` declaration is not duplicated;\n - `verbosity` is extended with `choices` and the default value is\n overridden. (If both function signature and `@arg` define a default\n value for an argument, `@arg` wins.)\n\n .. note::\n\n It is recommended to avoid using this decorator unless there's no way\n to tune the argument's behaviour or presentation using ordinary\n function signatures. Readability counts, don't repeat yourself."} {"_id": "q_7598", "text": "Make a guess about the config file location and try loading it."} {"_id": "q_7599", "text": "Validate a configuration key according to `section.item`."} {"_id": "q_7600", "text": "Searches the given text for mentions and expands them.\n\n For example:\n \"@source.nick\" will be expanded to \"@\"."} {"_id": "q_7601", "text": "Try loading given cache file."} {"_id": "q_7602", "text": "Checks if specified URL is cached."} {"_id": "q_7603", "text": "Retrieves tweets from the cache."} {"_id": "q_7604", "text": "Tries to remove cached tweets."} {"_id": "q_7605", "text": "Retrieve your personal timeline."} {"_id": "q_7606", "text": "Get or set config item."} {"_id": "q_7607", "text": "Return human-readable relative time string."} {"_id": "q_7608", "text": "Copy the Query object, optionally replacing the filters, order_by, or\n limit information on the copy. 
This is mostly an internal detail that\n you can ignore."} {"_id": "q_7609", "text": "Returns only the first result from the query, if any."} {"_id": "q_7610", "text": "This function handles all on_delete semantics defined on OneToMany columns.\n\n This function only exists because 'cascade' is *very* hard to get right."} {"_id": "q_7611", "text": "Performs the actual prefix, suffix, and pattern match operations."} {"_id": "q_7612", "text": "Estimates the total work necessary to calculate the prefix match over the\n given index with the provided prefix."} {"_id": "q_7613", "text": "Search for model ids that match the provided filters.\n\n Arguments:\n\n * *filters* - A list of filters that apply to the search of one of\n the following two forms:\n\n 1. ``'column:string'`` - a plain string will match a word in a\n text search on the column\n\n .. note:: Read the documentation about the ``Query`` object\n for what is actually passed during text search\n\n 2. ``('column', min, max)`` - a numeric column range search,\n between min and max (inclusive by default)\n\n .. note:: Read the documentation about the ``Query`` object\n for information about open-ended ranges\n\n 3. ``['column:string1', 'column:string2']`` - will match any\n of the provided words in a text search on the column\n\n 4. ``Prefix('column', 'prefix')`` - will match prefixes of\n words in a text search on the column\n\n 5. ``Suffix('column', 'suffix')`` - will match suffixes of\n words in a text search on the column\n\n 6. ``Pattern('column', 'pattern')`` - will match patterns over\n words in a text search on the column\n\n * *order_by* - A string that names the numeric column by which to\n sort the results by. Prefixing with '-' will return results in\n descending order\n\n .. note:: While you can technically pass a non-numeric index as an\n *order_by* clause, the results will basically be to order the\n results by string comparison of the ids (10 will come before 2).\n\n .. 
note:: If you omit the ``order_by`` argument, results will be\n ordered by the last filter. If the last filter was a text\n filter, see the previous note. If the last filter was numeric,\n then results will be ordered by that result.\n\n * *offset* - A numeric starting offset for results\n * *count* - The maximum number of results to return from the query"} {"_id": "q_7614", "text": "This utility function will iterate over all entities of a provided model,\n refreshing their indices. This is primarily useful after adding an index\n on a column.\n\n Arguments:\n\n * *model* - the model whose entities you want to reindex\n * *block_size* - the maximum number of entities you want to fetch from\n Redis at a time, defaulting to 100\n\n This function will yield its progression through re-indexing all of your\n entities.\n\n Example use::\n\n for progress, total in refresh_indices(MyModel, block_size=200):\n print \"%s of %s\"%(progress, total)\n\n .. note:: This uses the session object to handle index refresh via calls to\n ``.commit()``. If you have any outstanding entities known in the\n session, they will be committed."} {"_id": "q_7615", "text": "This utility function will clean out old index data that was accidentally\n left during item deletion in rom versions <= 0.27.0 . You should run this\n after you have upgraded all of your clients to version 0.28.0 or later.\n\n Arguments:\n\n * *model* - the model whose entities you want to reindex\n * *block_size* - the maximum number of items to check at a time\n defaulting to 100\n\n This function will yield its progression through re-checking all of the\n data that could be left over.\n\n Example use::\n\n for progress, total in clean_old_index(MyModel, block_size=200):\n print \"%s of %s\"%(progress, total)"} {"_id": "q_7616", "text": "Adds an entity to the session."} {"_id": "q_7617", "text": "... Actually write data to Redis. This is an internal detail. 
Please don't\n call me directly."} {"_id": "q_7618", "text": "Deletes the entity immediately. Also performs any on_delete operations\n specified as part of column definitions."} {"_id": "q_7619", "text": "Will fetch one or more entities of this type from the session or\n Redis.\n\n Used like::\n\n MyModel.get(5)\n MyModel.get([1, 6, 2, 4])\n\n Passing a list or a tuple will return multiple entities, in the same\n order that the ids were passed."} {"_id": "q_7620", "text": "Parse the options, set defaults and then fire up PhantomJS."} {"_id": "q_7621", "text": "Call PhantomJS with the specified flags and options."} {"_id": "q_7622", "text": "Release, incrementing the internal counter by one."} {"_id": "q_7623", "text": "Register an approximation of memory used by FTP server process\n and all of its children."} {"_id": "q_7624", "text": "Connect to FTP server, login and return an ftplib.FTP instance."} {"_id": "q_7625", "text": "Decorator. Bring coroutine result up, so it can be used as async context\n\n ::\n\n >>> async def foo():\n ...\n ... ...\n ... return AsyncContextInstance(...)\n ...\n ... ctx = await foo()\n ... async with ctx:\n ...\n ... # do\n\n ::\n\n >>> @async_enterable\n ... async def foo():\n ...\n ... ...\n ... return AsyncContextInstance(...)\n ...\n ... async with foo() as ctx:\n ...\n ... # do\n ...\n ... ctx = await foo()\n ... async with ctx:\n ...\n ... # do"} {"_id": "q_7626", "text": "Context manager with threading lock for set locale on enter, and set it\n back to original state on exit.\n\n ::\n\n >>> with setlocale(\"C\"):\n ... 
..."} {"_id": "q_7627", "text": "Count `data` for throttle\n\n :param data: bytes of data for count\n :type data: :py:class:`bytes`\n\n :param start: start of read/write time from\n :py:meth:`asyncio.BaseEventLoop.time`\n :type start: :py:class:`float`"} {"_id": "q_7628", "text": "Set throttle limit\n\n :param value: bytes per second\n :type value: :py:class:`int` or :py:class:`None`"} {"_id": "q_7629", "text": "Parsing directory server response.\n\n :param s: response line\n :type s: :py:class:`str`\n\n :rtype: :py:class:`pathlib.PurePosixPath`"} {"_id": "q_7630", "text": "Parsing Microsoft Windows `dir` output\n\n :param b: response line\n :type b: :py:class:`bytes` or :py:class:`str`\n\n :return: (path, info)\n :rtype: (:py:class:`pathlib.PurePosixPath`, :py:class:`dict`)"} {"_id": "q_7631", "text": "Create a stream for writing data to the `destination` file.\n\n :param destination: destination path of file on server side\n :type destination: :py:class:`str` or :py:class:`pathlib.PurePosixPath`\n\n :param offset: byte offset for stream start position\n :type offset: :py:class:`int`\n\n :rtype: :py:class:`aioftp.DataConnectionThrottleStreamIO`"} {"_id": "q_7632", "text": "Compute Jenks natural breaks on a sequence of `values`, given `nb_class`,\n the desired number of classes.\n\n Parameters\n ----------\n values : array-like\n The iterable sequence of numbers (integer/float) to be used.\n nb_class : int\n The desired number of classes (as some other functions request\n a `k` value, `nb_class` is like `k` + 1). 
Has to be less than\n the length of `values` and greater than 2.\n\n Returns\n -------\n breaks : tuple of floats\n The computed break values, including minimum and maximum, in order\n to have all the bounds for building `nb_class` classes,\n so the returned tuple has a length of `nb_class` + 1.\n\n\n Examples\n --------\n Using nb_class = 3, expecting 4 break values, including min and max:\n\n >>> jenks_breaks(\n [1.3, 7.1, 7.3, 2.3, 3.9, 4.1, 7.8, 1.2, 4.3, 7.3, 5.0, 4.3],\n nb_class = 3) # Should output (1.2, 2.3, 5.0, 7.8)"} {"_id": "q_7633", "text": "Copy the contents of the screen to PIL image memory.\n\n :param bbox: optional bounding box (x1,y1,x2,y2)\n :param childprocess: pyscreenshot can cause an error\n if it is used on multiple virtual displays\n and the back-end is not in a different process.\n Some back-ends are always different processes: scrot, imagemagick.\n The default is False if the program was started inside IDLE,\n otherwise it is True.\n :param backend: back-end can be forced if set (examples: scrot, wx, ..),\n otherwise the back-end is chosen automatically"} {"_id": "q_7634", "text": "Open a Mapchete process.\n\n Parameters\n ----------\n config : MapcheteConfig object, config dict or path to mapchete file\n Mapchete process configuration\n mode : string\n * ``memory``: Generate process output on demand without reading\n pre-existing data or writing new data.\n * ``readonly``: Just read data without processing new data.\n * ``continue``: (default) Don't overwrite existing output.\n * ``overwrite``: Overwrite existing output.\n zoom : list or integer\n process zoom level or a pair of minimum and maximum zoom level\n bounds : tuple\n left, bottom, right, top process boundaries in output pyramid\n single_input_file : string\n single input file if supported by process\n with_cache : bool\n process output data cached in memory\n\n Returns\n -------\n Mapchete\n a Mapchete process object"} {"_id": "q_7635", "text": "Determine zoom levels."} {"_id": "q_7636", 
"text": "Worker function running the process."} {"_id": "q_7637", "text": "Yield process tiles.\n\n Tiles intersecting with the input data bounding boxes as well as\n process bounds, if provided, are considered process tiles. This is to\n avoid iterating through empty tiles.\n\n Parameters\n ----------\n zoom : integer\n zoom level process tiles should be returned from; if none is given,\n return all process tiles\n\n yields\n ------\n BufferedTile objects"} {"_id": "q_7638", "text": "Process a large batch of tiles.\n\n Parameters\n ----------\n process : MapcheteProcess\n process to be run\n zoom : list or int\n either single zoom level or list of minimum and maximum zoom level;\n None processes all (default: None)\n tile : tuple\n zoom, row and column of tile to be processed (cannot be used with\n zoom)\n multi : int\n number of workers (default: number of CPU cores)\n max_chunksize : int\n maximum number of process tiles to be queued for each worker;\n (default: 1)"} {"_id": "q_7639", "text": "Process a large batch of tiles and yield report messages per tile.\n\n Parameters\n ----------\n zoom : list or int\n either single zoom level or list of minimum and maximum zoom level;\n None processes all (default: None)\n tile : tuple\n zoom, row and column of tile to be processed (cannot be used with\n zoom)\n multi : int\n number of workers (default: number of CPU cores)\n max_chunksize : int\n maximum number of process tiles to be queued for each worker;\n (default: 1)"} {"_id": "q_7640", "text": "Run the Mapchete process.\n\n Execute, write and return data.\n\n Parameters\n ----------\n process_tile : Tile or tile index tuple\n Member of the process tile pyramid (not necessarily the output\n pyramid, if output has a different metatiling setting)\n\n Returns\n -------\n data : NumPy array or features\n process output"} {"_id": "q_7641", "text": "Extract data from tile."} {"_id": "q_7642", "text": "Calculate hillshading from elevation data.\n\n Parameters\n 
----------\n elevation : array\n input elevation data\n azimuth : float\n horizontal angle of light source (315: North-West)\n altitude : float\n vertical angle of light source (90 would result in slope shading)\n z : float\n vertical exaggeration factor\n scale : float\n scale factor of pixel size units versus height units (insert 112000\n when having elevation values in meters in a geodetic projection)\n\n Returns\n -------\n hillshade : array"} {"_id": "q_7643", "text": "Extract contour lines from elevation data.\n\n Parameters\n ----------\n elevation : array\n input elevation data\n interval : integer\n elevation value interval when drawing contour lines\n field : string\n output field name containing elevation value\n base : integer\n elevation base value the intervals are computed from\n\n Returns\n -------\n contours : iterable\n contours as GeoJSON-like pairs of properties and geometry"} {"_id": "q_7644", "text": "Clip array by geometry.\n\n Parameters\n ----------\n array : array\n raster data to be clipped\n geometries : iterable\n geometries used to clip source array\n inverted : bool\n invert clipping (default: False)\n clip_buffer : int\n buffer (in pixels) geometries before applying clip\n\n Returns\n -------\n clipped array : array"} {"_id": "q_7645", "text": "Create tile pyramid out of input raster."} {"_id": "q_7646", "text": "Create a tile pyramid out of an input raster dataset."} {"_id": "q_7647", "text": "Determine minimum and maximum zoomlevel."} {"_id": "q_7648", "text": "Validate whether value is found in config and has the right type.\n\n Parameters\n ----------\n config : dict\n configuration dictionary\n values : list\n list of (str, type) tuples of values and value types expected in config\n\n Returns\n -------\n True if config is valid.\n\n Raises\n ------\n Exception if value is not found or has the wrong type."} {"_id": "q_7649", "text": "Return hash of x."} {"_id": "q_7650", "text": "Validate and return zoom levels."} {"_id": 
"q_7651", "text": "Snaps bounds to tiles boundaries of specific zoom level.\n\n Parameters\n ----------\n bounds : bounds to be snapped\n pyramid : TilePyramid\n zoom : int\n\n Returns\n -------\n Bounds(left, bottom, right, top)"} {"_id": "q_7652", "text": "Clips bounds by clip.\n\n Parameters\n ----------\n bounds : bounds to be clipped\n clip : clip bounds\n\n Returns\n -------\n Bounds(left, bottom, right, top)"} {"_id": "q_7653", "text": "Return parameter dictionary per zoom level."} {"_id": "q_7654", "text": "Return the element filtered by zoom level.\n\n - An input integer or float gets returned as is.\n - An input string is checked whether it starts with \"zoom\". Then, the\n provided zoom level gets parsed and compared with the actual zoom\n level. If zoom levels match, the element gets returned.\n TODOs/gotchas:\n - Elements are unordered, which can lead to unexpected results when\n defining the YAML config.\n - Provided zoom levels for one element in config file are not allowed\n to \"overlap\", i.e. there is not yet a decision mechanism implemented\n which handles this case."} {"_id": "q_7655", "text": "Return element only if zoom condition matches with config string."} {"_id": "q_7656", "text": "Flatten dict tree into dictionary where keys are paths of old dict."} {"_id": "q_7657", "text": "Reverse tree flattening."} {"_id": "q_7658", "text": "Process bounds this process is currently initialized with.\n\n This gets triggered by using the ``init_bounds`` kwarg. 
If not set, it will\n be equal to self.bounds."} {"_id": "q_7659", "text": "Effective process bounds required to initialize inputs.\n\n Process bounds sometimes have to be larger, because all intersecting process\n tiles have to be covered as well."} {"_id": "q_7660", "text": "Output object of driver."} {"_id": "q_7661", "text": "Input items used for process stored in a dictionary.\n\n Keys are the hashes of the input parameters, values the respective\n InputData classes."} {"_id": "q_7662", "text": "Optional baselevels configuration.\n\n baselevels:\n min: \n max: \n lower: \n higher: "} {"_id": "q_7663", "text": "Return configuration parameters snapshot for zoom as dictionary.\n\n Parameters\n ----------\n zoom : int\n zoom level\n\n Returns\n -------\n configuration snapshot : dictionary\n zoom level dependent process configuration"} {"_id": "q_7664", "text": "Return process bounding box for zoom level.\n\n Parameters\n ----------\n zoom : int or None\n if None, the union of all zoom level areas is returned\n\n Returns\n -------\n process area : shapely geometry"} {"_id": "q_7665", "text": "Generate indexes for given zoom level.\n\n Parameters\n ----------\n mp : Mapchete object\n process output to be indexed\n out_dir : path\n optionally override process output directory\n zoom : int\n zoom level to be processed\n geojson : bool\n generate GeoJSON index (default: False)\n gpkg : bool\n generate GeoPackage index (default: False)\n shapefile : bool\n generate Shapefile index (default: False)\n txt : bool\n generate tile path list textfile (default: False)\n vrt : bool\n GDAL-style VRT file (default: False)\n fieldname : str\n field name which contains paths of tiles (default: \"location\")\n basepath : str\n if set, use custom base path instead of output path\n for_gdal : bool\n use GDAL compatible remote paths, i.e. 
add \"/vsicurl/\" before path\n (default: True)"} {"_id": "q_7666", "text": "Return raster metadata."} {"_id": "q_7667", "text": "Example process for testing.\n\n Inputs:\n -------\n file1\n raster file\n\n Parameters:\n -----------\n\n Output:\n -------\n np.ndarray"} {"_id": "q_7668", "text": "Check if output format is valid with other process parameters.\n\n Parameters\n ----------\n config : dictionary\n output configuration parameters\n\n Returns\n -------\n is_valid : bool"} {"_id": "q_7669", "text": "Return all available output formats.\n\n Returns\n -------\n formats : list\n all available output formats"} {"_id": "q_7670", "text": "Return output class of driver.\n\n Returns\n -------\n output : ``OutputData``\n output writer object"} {"_id": "q_7671", "text": "Return input class of driver.\n\n Returns\n -------\n input_params : ``InputData``\n input parameters"} {"_id": "q_7672", "text": "Dump output JSON and verify parameters if output metadata exist."} {"_id": "q_7673", "text": "Determine target file path.\n\n Parameters\n ----------\n tile : ``BufferedTile``\n must be member of output ``TilePyramid``\n\n Returns\n -------\n path : string"} {"_id": "q_7674", "text": "Create directory and subdirectory if necessary.\n\n Parameters\n ----------\n tile : ``BufferedTile``\n must be member of output ``TilePyramid``"} {"_id": "q_7675", "text": "Check whether process output is allowed with output driver.\n\n Parameters\n ----------\n process_data : raw process output\n\n Returns\n -------\n True or False"} {"_id": "q_7676", "text": "Return verified and cleaned output.\n\n Parameters\n ----------\n process_data : raw process output\n\n Returns\n -------\n NumPy array or list of features."} {"_id": "q_7677", "text": "Extract subset from multiple tiles.\n\n input_data_tiles : list of (``Tile``, process data) tuples\n out_tile : ``Tile``\n\n Returns\n -------\n NumPy array or list of features."} {"_id": "q_7678", "text": "Calculate slope and aspect map.\n\n Return a 
pair of arrays 2 pixels smaller than the input elevation array.\n\n Slope is returned in radians, from 0 for sheer face to pi/2 for\n flat ground. Aspect is returned in radians, counterclockwise from -pi\n at north around to pi.\n\n Logic here is borrowed from hillshade.cpp:\n http://www.perrygeo.net/wordpress/?p=7\n\n Parameters\n ----------\n elevation : array\n input elevation data\n xres : float\n column width\n yres : float\n row height\n z : float\n vertical exaggeration factor\n scale : float\n scale factor of pixel size units versus height units (insert 112000\n when having elevation values in meters in a geodetic projection)\n\n Returns\n -------\n slope shade : array"} {"_id": "q_7679", "text": "Return hillshaded numpy array.\n\n Parameters\n ----------\n elevation : array\n input elevation data\n tile : Tile\n tile covering the array\n z : float\n vertical exaggeration factor\n scale : float\n scale factor of pixel size units versus height units (insert 112000\n when having elevation values in meters in a geodetic projection)"} {"_id": "q_7680", "text": "Return ``BufferedTile`` object of this ``BufferedTilePyramid``.\n\n Parameters\n ----------\n zoom : integer\n zoom level\n row : integer\n tile matrix row\n col : integer\n tile matrix column\n\n Returns\n -------\n buffered tile : ``BufferedTile``"} {"_id": "q_7681", "text": "Return all tiles intersecting with bounds.\n\n Bounds values will be cleaned if they cross the antimeridian or are\n outside of the Northern or Southern tile pyramid bounds.\n\n Parameters\n ----------\n bounds : tuple\n (left, bottom, right, top) bounding values in tile pyramid CRS\n zoom : integer\n zoom level\n\n Yields\n ------\n intersecting tiles : generator\n generates ``BufferedTiles``"} {"_id": "q_7682", "text": "All metatiles intersecting with given bounding box.\n\n Parameters\n ----------\n geometry : ``shapely.geometry``\n zoom : integer\n zoom level\n\n Yields\n ------\n intersecting tiles : generator\n generates 
``BufferedTiles``"} {"_id": "q_7683", "text": "Return all tiles intersecting with input geometry.\n\n Parameters\n ----------\n geometry : ``shapely.geometry``\n zoom : integer\n zoom level\n\n Yields\n ------\n intersecting tiles : ``BufferedTile``"} {"_id": "q_7684", "text": "Return all BufferedTiles intersecting with tile.\n\n Parameters\n ----------\n tile : ``BufferedTile``\n another tile"} {"_id": "q_7685", "text": "Return dictionary representation of pyramid parameters."} {"_id": "q_7686", "text": "Return tile neighbors.\n\n Tile neighbors are unique, i.e. in some edge cases the left and the\n right neighbor wrapped around the antimeridian are the same tile. Also,\n neighbors outside the northern and southern TilePyramid boundaries are\n excluded, because they are invalid.\n\n -------------\n | 8 | 1 | 5 |\n -------------\n | 4 | x | 2 |\n -------------\n | 7 | 3 | 6 |\n -------------\n\n Parameters\n ----------\n connectedness : int\n [4 or 8] return four direct neighbors or all eight.\n\n Returns\n -------\n list of BufferedTiles"} {"_id": "q_7687", "text": "Read, stretch and return raster data.\n\n Inputs:\n -------\n raster\n raster file\n\n Parameters:\n -----------\n resampling : str\n rasterio.Resampling method\n scale_method : str\n - dtype_scale: use dtype minimum and maximum values\n - minmax_scale: use dataset bands minimum and maximum values\n - crop: clip data to output dtype\n scales_minmax : tuple\n tuple of band specific scale values\n\n Output:\n -------\n np.ndarray"} {"_id": "q_7688", "text": "Open process output as input for another process.\n\n Parameters\n ----------\n tile : ``Tile``\n process : ``MapcheteProcess``\n kwargs : keyword arguments"} {"_id": "q_7689", "text": "Serve a Mapchete process.\n\n Creates the Mapchete host and serves both a web page with OpenLayers and the\n WMTS simple REST endpoint."} {"_id": "q_7690", "text": "Extract a numpy array from a raster file."} {"_id": "q_7691", "text": "Extract raster data window 
array.\n\n Parameters\n ----------\n in_raster : array or ReferencedRaster\n in_affine : ``Affine`` required if in_raster is an array\n out_tile : ``BufferedTile``\n\n Returns\n -------\n extracted array : array"} {"_id": "q_7692", "text": "Extract and resample from array to target tile.\n\n Parameters\n ----------\n in_raster : array\n in_affine : ``Affine``\n out_tile : ``BufferedTile``\n resampling : string\n one of rasterio's resampling methods (default: nearest)\n nodataval : integer or float\n raster nodata value (default: 0)\n\n Returns\n -------\n resampled array : array"} {"_id": "q_7693", "text": "Determine if distance over antimeridian is shorter than normal distance."} {"_id": "q_7694", "text": "Turn input data into a proper array for further usage.\n\n Output array is always 3-dimensional with the given data type. If the output\n is masked, the fill_value corresponds to the given nodata value and the\n nodata value will be burned into the data array.\n\n Parameters\n ----------\n data : array or iterable\n array (masked or normal) or iterable containing arrays\n nodata : integer or float\n nodata value (default: 0) used if input is not a masked array and\n for output array\n masked : bool\n return a NumPy Array or a NumPy MaskedArray (default: True)\n dtype : string\n data type of output array (default: \"int16\")\n\n Returns\n -------\n array : array"} {"_id": "q_7695", "text": "Reproject a geometry to target CRS.\n\n Also, clips geometry if it lies outside the destination CRS boundary.\n Supported destination CRSes for clipping: 4326 (WGS84), 3857 (Spherical\n Mercator) and 3035 (ETRS89 / ETRS-LAEA).\n\n Parameters\n ----------\n geometry : ``shapely.geometry``\n src_crs : ``rasterio.crs.CRS`` or EPSG code\n CRS of source data\n dst_crs : ``rasterio.crs.CRS`` or EPSG code\n target CRS\n error_on_clip : bool\n raises a ``RuntimeError`` if a geometry is outside of CRS bounds\n (default: False)\n validity_check : bool\n checks if reprojected geometry is 
valid and throws ``TopologicalError``\n if invalid (default: True)\n antimeridian_cutting : bool\n cut geometry at Antimeridian; can result in a multipart output geometry\n\n Returns\n -------\n geometry : ``shapely.geometry``"} {"_id": "q_7696", "text": "Segmentize Polygon outer ring by segmentize value.\n\n Only the Polygon geometry type is supported.\n\n Parameters\n ----------\n geometry : ``shapely.geometry``\n segmentize_value: float\n\n Returns\n -------\n geometry : ``shapely.geometry``"} {"_id": "q_7697", "text": "Write features to GeoJSON file.\n\n Parameters\n ----------\n in_data : features\n out_schema : dictionary\n output schema for fiona\n out_tile : ``BufferedTile``\n tile used for output extent\n out_path : string\n output path for GeoJSON file"} {"_id": "q_7698", "text": "Return geometry of a specific type if possible.\n\n Filters and splits up GeometryCollection into target types. This is\n necessary when, after clipping and/or reprojecting, the geometry types of\n source geometries change (i.e. 
a Polygon becomes a LineString or a\n LineString becomes a Point) in some edge cases.\n\n Parameters\n ----------\n geometry : ``shapely.geometry``\n target_type : string\n target geometry type\n allow_multipart : bool\n allow multipart geometries (default: True)\n\n Returns\n -------\n cleaned geometry : ``shapely.geometry``\n returns None if input geometry type differs from target type\n\n Raises\n ------\n GeometryTypeError : if geometry type does not match target_type"} {"_id": "q_7699", "text": "Yield single part geometries if geom is multipart, otherwise yield geom.\n\n Parameters:\n -----------\n geom : shapely geometry\n\n Returns:\n --------\n shapely single part geometries"} {"_id": "q_7700", "text": "Convert and optionally clip input raster data.\n\n Inputs:\n -------\n raster\n singleband or multiband data input\n clip (optional)\n vector data used to clip output\n\n Parameters\n ----------\n td_resampling : str (default: 'nearest')\n Resampling used when reading from TileDirectory.\n td_matching_method : str ('gdal' or 'min') (default: 'gdal')\n gdal: Uses GDAL's standard method. Here, the target resolution is\n calculated by averaging the extent's pixel sizes over both x and y\n axes. This approach returns a zoom level which may not have the\n best quality but will speed up reading significantly.\n min: Returns the zoom level which matches the minimum resolution of the\n extent's four corner pixels. This approach returns the zoom level\n with the best possible quality but with low performance. If the\n tile extent is outside of the destination pyramid, a\n TopologicalError will be raised.\n td_matching_max_zoom : int (optional, default: None)\n If set, it will prevent reading from zoom levels above the maximum.\n td_matching_precision : int (default: 8)\n Round resolutions to n digits before comparing.\n td_fallback_to_higher_zoom : bool (default: False)\n In case no data is found at zoom level, try to read data from higher\n zoom levels. 
Enabling this setting can lead to many IO requests in\n areas with no data.\n clip_pixelbuffer : int\n Use pixelbuffer when clipping output by geometry. (default: 0)\n\n Output\n ------\n np.ndarray"} {"_id": "q_7701", "text": "Determine the best base zoom level for a raster.\n\n \"Best\" means the maximum zoom level where no oversampling has to be done.\n\n Parameters\n ----------\n input_file : path to raster file\n tile_pyramid_type : ``TilePyramid`` projection (``geodetic`` or ``mercator``)\n\n Returns\n -------\n zoom : integer"} {"_id": "q_7702", "text": "Determine whether file path is remote or local.\n\n Parameters\n ----------\n path : path to file\n\n Returns\n -------\n is_remote : bool"} {"_id": "q_7703", "text": "Check if file exists either remote or local.\n\n Parameters:\n -----------\n path : path to file\n\n Returns:\n --------\n exists : bool"} {"_id": "q_7704", "text": "Return absolute path if path is local.\n\n Parameters:\n -----------\n path : path to file\n base_dir : base directory used for absolute path\n\n Returns:\n --------\n absolute path"} {"_id": "q_7705", "text": "Return relative path if path is local.\n\n Parameters:\n -----------\n path : path to file\n base_dir : directory where path should be relative to\n\n Returns:\n --------\n relative path"} {"_id": "q_7706", "text": "Write local or remote."} {"_id": "q_7707", "text": "Read local or remote."} {"_id": "q_7708", "text": "Attach a reducer function to a given type in the dispatch table."} {"_id": "q_7709", "text": "Return the number of CPUs the current process can use.\n\n The returned number of CPUs accounts for:\n * the number of CPUs in the system, as given by\n ``multiprocessing.cpu_count``;\n * the CPU affinity settings of the current process\n (available with Python 3.4+ on some Unix systems);\n * CFS scheduler CPU bandwidth limit (available on Linux only, typically\n set by docker and similar container orchestration systems);\n * the value of the LOKY_MAX_CPU_COUNT 
environment variable if defined.\n and is given as the minimum of these constraints.\n It is also always larger than or equal to 1."} {"_id": "q_7710", "text": "Evaluates calls from call_queue and places the results in result_queue.\n\n This worker is run in a separate process.\n\n Args:\n call_queue: A ctx.Queue of _CallItems that will be read and\n evaluated by the worker.\n result_queue: A ctx.Queue of _ResultItems that will be written\n to by the worker.\n initializer: A callable initializer, or None\n initargs: A tuple of args for the initializer\n process_management_lock: A ctx.Lock avoiding worker timeout while some\n workers are being spawned.\n timeout: maximum time to wait for a new item in the call_queue. If that\n time expires, the worker will shut down.\n worker_exit_lock: Lock to avoid flagging the executor as broken on\n workers timeout.\n current_depth: Nested parallelism level, to avoid infinite spawning."} {"_id": "q_7711", "text": "Fills call_queue with _WorkItems from pending_work_items.\n\n This function never blocks.\n\n Args:\n pending_work_items: A dict mapping work ids to _WorkItems e.g.\n {5: <_WorkItem...>, 6: <_WorkItem...>, ...}\n work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). Work ids\n are consumed and the corresponding _WorkItems from\n pending_work_items are transformed into _CallItems and put in\n call_queue.\n call_queue: A ctx.Queue that will be filled with _CallItems\n derived from _WorkItems."} {"_id": "q_7712", "text": "Wrapper for non-picklable object to use cloudpickle to serialize them.\n\n Note that this wrapper tends to slow down the serialization process as it\n is done with cloudpickle which is typically slower compared to pickle. 
The\n proper way to solve serialization issues is to avoid defining functions and\n objects in the main scripts and to implement __reduce__ functions for\n complex classes."} {"_id": "q_7713", "text": "Return a wrapper for an fd."} {"_id": "q_7714", "text": "Return the current ReusableExecutor instance.\n\n Start a new instance if it has not been started already or if the previous\n instance was left in a broken state.\n\n If the previous instance does not have the requested number of workers, the\n executor is dynamically resized to adjust the number of workers prior to\n returning.\n\n Reusing a singleton instance spares the overhead of starting new worker\n processes and importing common python packages each time.\n\n ``max_workers`` controls the maximum number of tasks that can be running in\n parallel in worker processes. By default this is set to the number of\n CPUs on the host.\n\n Setting ``timeout`` (in seconds) makes idle workers automatically shut down\n so as to release system resources. New workers are respawned upon submission\n of new tasks so that ``max_workers`` are available to accept the newly\n submitted tasks. 
Setting ``timeout`` to around 100 times the time required\n to spawn new processes and import packages in them (on the order of 100ms)\n ensures that the overhead of spawning workers is negligible.\n\n Setting ``kill_workers=True`` makes it possible to forcibly interrupt\n previously spawned jobs to get a new instance of the reusable executor\n with new constructor argument values.\n\n The ``job_reducers`` and ``result_reducers`` are used to customize the\n pickling of tasks and results sent to the executor.\n\n When provided, the ``initializer`` is run first in newly spawned\n processes with argument ``initargs``."} {"_id": "q_7715", "text": "Wait for the cache to be empty before resizing the pool."} {"_id": "q_7716", "text": "Return info about parent needed by child to unpickle process object"} {"_id": "q_7717", "text": "Try to get current process ready to unpickle process object"} {"_id": "q_7718", "text": "Close all the file descriptors except those in keep_fds."} {"_id": "q_7719", "text": "Return a formatted string with the exitcodes of terminated workers.\n\n If necessary, wait (up to .25s) for the system to correctly set the\n exitcode of one terminated worker."} {"_id": "q_7720", "text": "Format a list of exit codes with names of the signals if possible"} {"_id": "q_7721", "text": "Run semaphore tracker."} {"_id": "q_7722", "text": "Make sure that semaphore tracker process is running.\n\n This can be run from any process. Usually a child process will use\n the semaphore created by its parent."} {"_id": "q_7723", "text": "A simple event processor that prints out events."} {"_id": "q_7724", "text": "Program counter."} {"_id": "q_7725", "text": "Almost a copy of code.interact\n Closely emulate the interactive Python interpreter.\n\n This is a backwards compatible interface to the InteractiveConsole\n class. 
When readfunc is not specified, it attempts to import the\n readline module to enable GNU readline if it is available.\n\n Arguments (all optional, all default to None):\n\n banner -- passed to InteractiveConsole.interact()\n readfunc -- if not None, replaces InteractiveConsole.raw_input()\n local -- passed to InteractiveInterpreter.__init__()"} {"_id": "q_7726", "text": "Split a command line's arguments in a shell-like manner, returned\n as a list of lists. Use ';;' with white space to indicate separate\n commands.\n\n This is a modified version of the standard library's shlex.split()\n function, but with a default of posix=False for splitting, so that quotes\n in inputs are respected."} {"_id": "q_7727", "text": "Run each function in `hooks' with args"} {"_id": "q_7728", "text": "Eval arg and if it is an integer return the value. Otherwise\n return None"} {"_id": "q_7729", "text": "If no argument use the default. If arg is an integer between\n min_value and at_most, use that. Otherwise report an error.\n If there's a stack frame use that in evaluation."} {"_id": "q_7730", "text": "Find the next token in string str from start_pos, we return\n the token and the next blank position after the token or\n str.size if this is the last token. Tokens are delimited by\n white space."} {"_id": "q_7731", "text": "Script interface to read a command. `prompt' is a parameter for\n compatibility and is ignored."} {"_id": "q_7732", "text": "Closes both input and output"} {"_id": "q_7733", "text": "Disassemble byte string of code. If end_line is negative\n it counts the number of statement linestarts to use."} {"_id": "q_7734", "text": "Return a count of the number of frames"} {"_id": "q_7735", "text": "If f_back is looking at a call function, return\n the name for it. 
Otherwise return None"} {"_id": "q_7736", "text": "Print count entries of the stack trace"} {"_id": "q_7737", "text": "Find subcmd in self.subcmds"} {"_id": "q_7738", "text": "Show short help for a subcommand."} {"_id": "q_7739", "text": "Add subcmd to the available subcommands for this object.\n It will have the supplied docstring, and subcmd_cb will be called\n when we want to run the command. min_len is the minimum length\n allowed to abbreviate the command. in_list indicates whether the\n show command will be run when giving a list of all sub commands\n of this object. Some commands have long output like \"show commands\"\n so we might not want to show that."} {"_id": "q_7740", "text": "Run subcmd_name with args using obj for the environment"} {"_id": "q_7741", "text": "Enter the debugger.\n\nParameters\n----------\n\nlevel : how many stack frames go back. Usually it will be\nthe default 0. But sometimes there may be calls in setup to the debugger\nthat you may want to skip.\n\nstep_ignore : how many line events to ignore after the\ndebug() call. 0 means don't even wait for the debug() call to finish.\n\nparam dbg_opts : is an optional \"options\" dictionary that gets fed\ntrepan.Debugger(); `start_opts' are the optional \"options\"\ndictionary that gets fed to trepan.Debugger.core.start().\n\nUse like this:\n\n.. code-block:: python\n\n ... # Possibly some Python code\n import trepan.api # Needed only once\n ... # Possibly some more Python code\n trepan.api.debug() # You can wrap inside conditional logic too\n pass # Stop will be here.\n # Below is code you want to use the debugger to do things.\n .... # more Python code\n # If you get to a place in the program where you aren't going\n # to want to debug any more, but want to remove debugger trace overhead:\n trepan.api.stop()\n\nParameter \"level\" specifies how many stack frames go back. Usually it will be\nthe default 0. 
But sometimes there may be calls in setup to the debugger\nthat you may want to skip.\n\nParameter \"step_ignore\" specifies how many line events to ignore after the\ndebug() call. 0 means don't even wait for the debug() call to finish.\n\nIn situations where you want an immediate stop in the \"debug\" call\nrather than the statement following it (\"pass\" above), add parameter\nstep_ignore=0 to debug() like this::\n\n import trepan.api # Needed only once\n # ... as before\n trepan.api.debug(step_ignore=0)\n # ... as before\n\nModule variable _debugger_obj_ from module trepan.debugger is used as\nthe debugger instance variable; it can be subsequently used to change\nsettings or alter behavior. It should be of type Debugger (found in\nmodule trepan). If not, it will get changed to that type::\n\n $ python\n >>> from trepan.debugger import debugger_obj\n >>> type(debugger_obj)\n \n >>> import trepan.api\n >>> trepan.api.debug()\n ...\n (Trepan) c\n >>> from trepan.debugger import debugger_obj\n >>> debugger_obj\n \n >>>\n\nIf however you want your own separate debugger instance, you can\ncreate it from the debugger _class Debugger()_ from module\ntrepan.debugger::\n\n $ python\n >>> from trepan.debugger import Debugger\n >>> dbgr = Debugger() # Add options as desired\n >>> dbgr\n \n\n`dbg_opts' is an optional \"options\" dictionary that gets fed\ntrepan.Debugger(); `start_opts' are the optional \"options\"\ndictionary that gets fed to trepan.Debugger.core.start()."} {"_id": "q_7742", "text": "Find the first frame that is a debugged frame. We do this because\n generally we want traceback information without polluting it with\n debugger frames. We can tell these because those are frames on the\n top which don't have f_trace set. 
So we'll look back from the top\n to find the first frame where f_trace is set."} {"_id": "q_7743", "text": "If arg is an int, use that otherwise take default."} {"_id": "q_7744", "text": "Return True if arg is 'on' or 1 and False if arg is 'off' or 0.\n Any other value raises ValueError."} {"_id": "q_7745", "text": "set a Boolean-valued debugger setting. 'obj' is generally a\n subcommand that has 'name' and 'debugger.settings' attributes"} {"_id": "q_7746", "text": "set an Integer-valued debugger setting. 'obj' is generally a\n subcommand that has 'name' and 'debugger.settings' attributes"} {"_id": "q_7747", "text": "Generic subcommand showing a boolean-valued debugger setting.\n 'obj' is generally a subcommand that has 'name' and\n 'debugger.setting' attributes."} {"_id": "q_7748", "text": "Return True if we are looking at a def statement"} {"_id": "q_7749", "text": "Get background from\n default values based on the TERM environment variable"} {"_id": "q_7750", "text": "Pass as parameters R G B values in hex\n On return, variable is_dark_bg is set"} {"_id": "q_7751", "text": "return suitable frame signature to key display expressions off of."} {"_id": "q_7752", "text": "display any items that are active"} {"_id": "q_7753", "text": "Set breakpoint at current location, or a specified frame"} {"_id": "q_7754", "text": "Find the corresponding signal name for 'num'. Return None\n if 'num' is invalid."} {"_id": "q_7755", "text": "Find the corresponding signal number for 'name'. Return None\n if 'name' is invalid."} {"_id": "q_7756", "text": "Return a signal name for a signal name or signal\n number. Return None if name_num is an int but not a valid signal\n number and False if name_num is not a number. If name_num is a\n signal name or signal number, the canonic name is returned."} {"_id": "q_7757", "text": "Check to see if any of the signal handlers we are interested in have\n changed or are not initially set. 
Change any that are not right."} {"_id": "q_7758", "text": "Print information about a signal"} {"_id": "q_7759", "text": "Delegate the actions specified in 'arg' to another\n method."} {"_id": "q_7760", "text": "Return a full pathname for filename if we can find one. path\n is a list of directories to prepend to filename. If no file is\n found we'll return None"} {"_id": "q_7761", "text": "Do a shell-like path lookup for py_script and return the results.\n If we can't find anything return py_script"} {"_id": "q_7762", "text": "used to write to a debugger that is connected to this\n server; `str' written will have a newline added to it"} {"_id": "q_7763", "text": "Execution status of the program."} {"_id": "q_7764", "text": "List commands arranged in aligned columns"} {"_id": "q_7765", "text": "Enter debugger read loop after your program has crashed.\n\n exc is a triple like you get back from sys.exc_info. If no exc\n parameter is supplied, the values from sys.last_type,\n sys.last_value, sys.last_traceback are used. And if these don't\n exist either we'll assume that sys.exc_info() contains what we\n want and frameno is the index location of where we want to start.\n\n 'frameno' specifies how many frames to ignore in the traceback.\n The default is 1, that is, we don't need to show the immediate\n call into post_mortem. If you have wrapper functions that call\n this one, you may want to increase frameno."} {"_id": "q_7766", "text": "Closes both socket and server connection."} {"_id": "q_7767", "text": "This is the method the debugger uses to write. In contrast to\n writeline, no newline is added to the end of `str'. Also\n msg doesn't have to be a string."} {"_id": "q_7768", "text": "Complete an arbitrary expression."} {"_id": "q_7769", "text": "Add `frame_or_fn' to the list of functions that are not to\n be debugged"} {"_id": "q_7770", "text": "Turns `filename' into its canonic representation and returns this\n string. 
This allows a user to refer to a given file in one of several\n equivalent ways.\n\n Relative filenames need to be fully resolved, since the current working\n directory might change over the course of execution.\n\n If filename is enclosed in < ... >, then we assume it is\n one of the bogus internal Python names like which is seen\n for example when executing \"exec cmd\"."} {"_id": "q_7771", "text": "Return filename or the basename of that depending on the\n basename setting"} {"_id": "q_7772", "text": "Return True if debugging is in progress."} {"_id": "q_7773", "text": "Does the magic to determine if we stop here and run a\n command processor or not. If so, return True and set\n self.stop_reason; if not, return False.\n\n Determining factors can be whether a breakpoint was\n encountered, whether we are stepping, next'ing, finish'ing,\n and, if so, whether there is an ignore counter."} {"_id": "q_7774", "text": "Sets to stop on the next event that happens in frame 'frame'."} {"_id": "q_7775", "text": "A mini stack trace routine for threads."} {"_id": "q_7776", "text": "Get file information"} {"_id": "q_7777", "text": "Check whether we should break here because of `b.funcname`."} {"_id": "q_7778", "text": "remove breakpoint `bp"} {"_id": "q_7779", "text": "Remove a breakpoint given its breakpoint number."} {"_id": "q_7780", "text": "Enable or disable all breakpoints."} {"_id": "q_7781", "text": "Enable or disable a breakpoint given its breakpoint number."} {"_id": "q_7782", "text": "Read a line of input. Prompt and use_raw exist to be\n compatible with other input routines and are ignored.\n EOFError will be raised on EOF."} {"_id": "q_7783", "text": "Restore an original login session, checking the signed session"} {"_id": "q_7784", "text": "Yield each document in a Luminoso project in turn. 
Requires a client whose\n URL points to a project.\n\n If expanded=True, it will include additional fields that Luminoso added in\n its analysis, such as 'terms' and 'vector'.\n\n Otherwise, it will contain only the fields necessary to reconstruct the\n document: 'title', 'text', and 'metadata'.\n\n Shows a progress bar if progress=True."} {"_id": "q_7785", "text": "Handle arguments for the 'lumi-download' command."} {"_id": "q_7786", "text": "Read a JSON or CSV file and convert it into a JSON stream, which will\n be saved in an anonymous temp file."} {"_id": "q_7787", "text": "Deduce the format of a file, within reason.\n\n - If the filename ends with .csv or .txt, it's csv.\n - If the filename ends with .jsons, it's a JSON stream (conveniently the\n format we want to output).\n - If the filename ends with .json, it could be a legitimate JSON file, or\n it could be a JSON stream, following a nonstandard convention that many\n people including us are guilty of. In that case:\n - If the first line is a complete JSON document, and there is more in the\n file besides the first line, then it is a JSON stream.\n - Otherwise, it is probably really JSON.\n - If the filename does not end with .json, .jsons, or .csv, we have to guess\n whether it's still CSV or tab-separated values or something like that.\n If it's JSON, the first character would almost certainly have to be a\n bracket or a brace. If it isn't, assume it's CSV or similar."} {"_id": "q_7788", "text": "This function is meant to normalize data for upload to the Luminoso\n Analytics system. Currently it only normalizes dates.\n\n If date_format is not specified, or if there's no date in a particular doc,\n the doc is yielded unchanged."} {"_id": "q_7789", "text": "Convert a date in a given format to epoch time. 
Mostly a wrapper for\n datetime's strptime."} {"_id": "q_7790", "text": "Open a CSV file using Python 2's CSV module, working around the deficiency\n where it can't handle the null bytes of UTF-16."} {"_id": "q_7791", "text": "Handle command line arguments to convert a file to a JSON stream as a\n script."} {"_id": "q_7792", "text": "Returns an object that makes requests to the API, authenticated\n with a saved or specified long-lived token, at URLs beginning with\n `url`.\n\n If no URL is specified, or if the specified URL is a path such as\n '/projects' without a scheme and domain, the client will default to\n https://analytics.luminoso.com/api/v5/.\n\n If neither token nor token_file are specified, the client will look\n for a token in $HOME/.luminoso/tokens.json. The file should contain\n a single json dictionary of the format\n `{'root_url': 'token', 'root_url2': 'token2', ...}`."} {"_id": "q_7793", "text": "Take a long-lived API token and store it to a local file. Long-lived\n tokens can be retrieved through the UI. Optional arguments are the\n domain for which the token is valid and the file in which to store the\n token."} {"_id": "q_7794", "text": "Make a DELETE request to the given path, and return the JSON-decoded\n result.\n\n Keyword parameters will be converted to URL parameters.\n\n DELETE requests ask to delete the object represented by this URL."} {"_id": "q_7795", "text": "A convenience method designed to inform you when a project build has\n completed. It polls the API every `interval` seconds until there is\n not a build running. At that point, it returns the \"last_build_info\"\n field of the project record if the build succeeded, and raises a\n LuminosoError with the field as its message if the build failed.\n\n If a `path` is not specified, this method will assume that its URL is\n the URL for the project. 
Otherwise, it will use the specified path\n (which should be \"/projects//\")."} {"_id": "q_7796", "text": "Get the \"root URL\" for a URL, as described in the LuminosoClient\n documentation."} {"_id": "q_7797", "text": "Obtain the user's long-lived API token and save it in a local file.\n If the user has no long-lived API token, one will be created.\n Returns the token that was saved."} {"_id": "q_7798", "text": "Make a request of the specified type and expect a JSON object in\n response.\n\n If the result has an 'error' value, raise a LuminosoAPIError with\n its contents. Otherwise, return the contents of the 'result' value."} {"_id": "q_7799", "text": "Get the ID of an account you can use to access projects."} {"_id": "q_7800", "text": "Get the documentation that the server sends for the API."} {"_id": "q_7801", "text": "Wait for an asynchronous task to finish.\n\n Unlike the thin methods elsewhere on this object, this one is actually\n specific to how the Luminoso API works. This will poll an API\n endpoint to find out the status of the job numbered `job_id`,\n repeating every 5 seconds (by default) until the job is done. When\n the job is done, it will return an object representing the result of\n that job.\n\n In the Luminoso API, requests that may take a long time return a\n job ID instead of a result, so that your code can continue running\n in the meantime. When it needs the job to be done to proceed, it can\n use this method to wait.\n\n The base URL where it looks for that job is by default `jobs/id/`\n under the current URL, assuming that this LuminosoClient's URL\n represents a project. 
You can specify a different URL by changing\n `base_path`.\n\n If the job failed, will raise a LuminosoError with the job status\n as its message."} {"_id": "q_7802", "text": "Get the raw text of a response.\n\n This is only generally useful for specific URLs, such as documentation."} {"_id": "q_7803", "text": "Print a JSON list of JSON objects in CSV format."} {"_id": "q_7804", "text": "Read parameters from input file, -j, and -p arguments, in that order."} {"_id": "q_7805", "text": "Limit a document to just the three fields we should upload."} {"_id": "q_7806", "text": "Given an iterator of documents, upload them as a Luminoso project."} {"_id": "q_7807", "text": "Handle arguments for the 'lumi-upload' command."} {"_id": "q_7808", "text": "Upload a file to Luminoso with the given account and project name.\n\n Given a file containing JSON, JSON stream, or CSV data, this verifies\n that we can successfully convert it to a JSON stream, then uploads that\n JSON stream."} {"_id": "q_7809", "text": "Handle command line arguments, to upload a file to a Luminoso project\n as a script."} {"_id": "q_7810", "text": "Obtain a short-lived token using a username and password, and use that\n token to create an auth object."} {"_id": "q_7811", "text": "Set http session."} {"_id": "q_7812", "text": "Login to enedis."} {"_id": "q_7813", "text": "Get data."} {"_id": "q_7814", "text": "Get the latest data from Enedis."} {"_id": "q_7815", "text": "Load the view on first load"} {"_id": "q_7816", "text": "Load the view on first load could also load based on session, group, etc.."} {"_id": "q_7817", "text": "Execute the correct handler depending on what is connecting."} {"_id": "q_7818", "text": "When enaml.js sends a message"} {"_id": "q_7819", "text": "When pages change, update the menus"} {"_id": "q_7820", "text": "Generate the handlers for this site"} {"_id": "q_7821", "text": "Create the toolkit widget for the proxy object.\n\n This method is called during the top-down pass, just 
before the\n 'init_widget()' method is called. This method should create the\n toolkit widget and assign it to the 'widget' attribute."} {"_id": "q_7822", "text": "Initialize the state of the toolkit widget.\n\n This method is called during the top-down pass, just after the\n 'create_widget()' method is called. This method should init the\n state of the widget. The child widgets will not yet be created."} {"_id": "q_7823", "text": "A reimplemented destructor.\n\n This destructor will clear the reference to the toolkit widget\n and set its parent to None."} {"_id": "q_7824", "text": "Handle the child added event from the declaration.\n\n This handler will insert the child toolkit widget in the correct.\n position. Subclasses which need more control should reimplement this\n method."} {"_id": "q_7825", "text": "Handle the child removed event from the declaration.\n\n This handler will unparent the child toolkit widget. Subclasses\n which need more control should reimplement this method."} {"_id": "q_7826", "text": "Default handler for those not explicitly defined"} {"_id": "q_7827", "text": "Update the proxy widget when the Widget data\n changes."} {"_id": "q_7828", "text": "Find nodes matching the given xpath query"} {"_id": "q_7829", "text": "Initialize the widget with the source."} {"_id": "q_7830", "text": "A change handler for the 'objects' list of the Include.\n\n If the object is initialized objects which are removed will be\n unparented and objects which are added will be reparented. Old\n objects will be destroyed if the 'destroy_old' flag is True."} {"_id": "q_7831", "text": "When the children of the block change. Update the referenced\n block."} {"_id": "q_7832", "text": "Registers a function as a hook. 
Multiple hooks can be registered for a given type, but the\n order in which they are invoked is unspecified.\n\n :param event_type: The event type this hook will be invoked for."} {"_id": "q_7833", "text": "Callback from Flask"} {"_id": "q_7834", "text": "Remove common indentation from string.\n\n Unlike doctrim there is no special treatment of the first line."} {"_id": "q_7835", "text": "Find all section names and return a list with their names."} {"_id": "q_7836", "text": "Generate table of contents for array of section names."} {"_id": "q_7837", "text": "Print `msg` error and exit with status `exit_code`"} {"_id": "q_7838", "text": "Gets an Item from the Menu by name. Note that the name is not\n case-sensitive but must be spelt correctly.\n\n :param string name: The name of the item.\n :raises StopIteration: Raises exception if no item is found.\n :return: An item object matching the search.\n :rtype: Item"} {"_id": "q_7839", "text": "Clear out the current session on the remote and set up a new one.\n\n :return: A response from having expired the current session.\n :rtype: requests.Response"} {"_id": "q_7840", "text": "Search for dominos pizza stores using a search term.\n\n :param string search: Search term.\n :return: A list of nearby stores matching the search term.\n :rtype: list"} {"_id": "q_7841", "text": "Add an item to the current basket.\n\n :param Item item: Item from menu.\n :param int variant: Item SKU id. Ignored if the item is a side.\n :param int quantity: The quantity of item to be added.\n :return: A response having added an item to the current basket.\n :rtype: requests.Response"} {"_id": "q_7842", "text": "Add a pizza to the current basket.\n\n :param Item item: Item from menu.\n :param int variant: Item SKU id. 
Some defaults are defined in the VARIANT enum.\n :param int quantity: The quantity of pizza to be added.\n :return: A response having added a pizza to the current basket.\n :rtype: requests.Response"} {"_id": "q_7843", "text": "Add a side to the current basket.\n\n :param Item item: Item from menu.\n :param int quantity: The quantity of side to be added.\n :return: A response having added a side to the current basket.\n :rtype: requests.Response"} {"_id": "q_7844", "text": "Remove an item from the current basket.\n\n :param int idx: Basket item id.\n :return: A response having removed an item from the current basket.\n :rtype: requests.Response"} {"_id": "q_7845", "text": "Select the payment method going to be used to make a purchase.\n\n :param int method: Payment method id.\n :return: A response having set the payment option.\n :rtype: requests.Response"} {"_id": "q_7846", "text": "Proceed with payment using the payment method selected earlier.\n\n :return: A response having processed the payment.\n :rtype: requests.Response"} {"_id": "q_7847", "text": "Make an HTTP GET request to the Dominos UK API with the given parameters\n for the current session.\n\n :param string path: The API endpoint path.\n :params list kargs: A list of arguments.\n :return: A response from the Dominos UK API.\n :rtype: response.Response"} {"_id": "q_7848", "text": "Make an HTTP POST request to the Dominos UK API with the given\n parameters for the current session.\n\n :param string path: The API endpoint path.\n :params list kargs: A list of arguments.\n :return: A response from the Dominos UK API.\n :rtype: response.Response"} {"_id": "q_7849", "text": "Make an HTTP request to the Dominos UK API with the given parameters for\n the current session.\n\n :param verb func: HTTP method on the session.\n :param string path: The API endpoint path.\n :params list kargs: A list of arguments.\n :return: A response from the Dominos UK API.\n :rtype: response.Response"} {"_id": "q_7850", "text": "Add 
an item to the end of the menu before the exit item\n\n :param MenuItem item: The item to be added"} {"_id": "q_7851", "text": "Add the exit item if necessary. Used to make sure there aren't multiple exit items\n\n :return: True if item needed to be added, False otherwise\n :rtype: bool"} {"_id": "q_7852", "text": "Redraws the menu and refreshes the screen. Should be called whenever something changes that needs to be redrawn."} {"_id": "q_7853", "text": "Gets the next single character and decides what to do with it"} {"_id": "q_7854", "text": "Select the current item and run it"} {"_id": "q_7855", "text": "Take an old-style menuData dictionary and return a CursesMenu\n\n :param dict menu_data:\n :return: A new CursesMenu\n :rtype: CursesMenu"} {"_id": "q_7856", "text": "Compute the maximum temporal distance.\n\n Returns\n -------\n max_temporal_distance : float"} {"_id": "q_7857", "text": "Temporal distance cumulative density function.\n\n Returns\n -------\n x_values: numpy.array\n values for the x-axis\n cdf: numpy.array\n cdf values"} {"_id": "q_7858", "text": "Remove dangling entries from the shapes directory.\n\n Parameters\n ----------\n db_conn: sqlite3.Connection\n connection to the GTFS object"} {"_id": "q_7859", "text": "Given a set of transit events and the static walk network,\n \"transform\" the static walking network into a set of \"pseudo-connections\".\n\n As a first approximation, we add pseudo-connections to depart after each arrival of a transit connection\n to its arrival stop.\n\n Parameters\n ----------\n transit_connections: list[Connection]\n start_time_dep : int\n start time in unixtime seconds\n end_time_dep: int\n end time in unixtime seconds (no new connections will be scanned after this time)\n transfer_margin: int\n extra margin required for transfers in seconds\n walk_speed: float\n walking speed between stops in meters / second\n walk_network: networkx.Graph\n each edge should have the walking distance as a data attribute 
(\"d_walk\") expressed in meters\n\n Returns\n -------\n pseudo_connections: set[Connection]"} {"_id": "q_7860", "text": "Get the earliest visit time of the stop."} {"_id": "q_7861", "text": "Whether the spreading stop can infect using this event."} {"_id": "q_7862", "text": "Create day_trips and day_stop_times views.\n\n day_trips: day_trips2 x trips = days x trips\n day_stop_times: day_trips2 x trips x stop_times = days x trips x stop_times"} {"_id": "q_7863", "text": "Create a colourbar with limits of lwr and upr"} {"_id": "q_7864", "text": "Write temporal networks by route type to disk.\n\n Parameters\n ----------\n gtfs: gtfspy.GTFS\n extract_output_dir: str"} {"_id": "q_7865", "text": "Write out the database according to the GTFS format.\n\n Parameters\n ----------\n gtfs: gtfspy.GTFS\n output: str\n Path where to put the GTFS files\n if output ends with \".zip\" a ZIP-file is created instead.\n\n Returns\n -------\n None"} {"_id": "q_7866", "text": "Remove columns ending with I from a pandas.DataFrame\n\n Parameters\n ----------\n df: dataFrame\n\n Returns\n -------\n None"} {"_id": "q_7867", "text": "Context manager for making files with possibility of failure.\n\n If you are creating a file, it is possible that the code will fail\n and leave a corrupt intermediate file. This is especially damaging\n if this is used as automatic input to another process. This context\n manager helps by creating a temporary filename, your code runs and\n creates that temporary file, and then if no exceptions are raised,\n the context manager will move the temporary file to the original\n filename you intended to open.\n\n Parameters\n ----------\n fname : str\n Target filename, this file will be created if all goes well\n fname_tmp : str\n If given, this is used as the temporary filename.\n tmpdir : str or bool\n If given, put temporary files in this directory. 
If `True`,\n then find a good tmpdir that is not on local filesystem.\n save_tmpfile : bool\n If true, the temporary file is not deleted if an exception\n is raised.\n keepext : bool, default False\n If true, have tmpfile have same extension as final file.\n\n Returns (as context manager value)\n ----------------------------------\n fname_tmp: str\n Temporary filename to be used. Same as `fname_tmp`\n if given as an argument.\n\n Raises\n ------\n Re-raises any exception occurring during the context block."} {"_id": "q_7868", "text": "Utility function to print sqlite queries before executing.\n\n Use instead of cur.execute(). First argument is cursor.\n\n cur.execute(stmt)\n becomes\n util.execute(cur, stmt)"} {"_id": "q_7869", "text": "Create directories if they do not exist, otherwise do nothing.\n\n Return path for convenience"} {"_id": "q_7870", "text": "Checks for rows that are not referenced in the tables that should be linked\n\n stops <> stop_times using stop_I\n stop_times <> trips <> days, using trip_I\n trips <> routes, using route_I\n :return:"} {"_id": "q_7871", "text": "Print coordinates within a sequence.\n\n This is only used for debugging. Printed in a form that can be\n pasted into Python for visualization."} {"_id": "q_7872", "text": "Find corresponding shape points for a list of stops and create shape break points.\n\n Parameters\n ----------\n stops: stop-sequence (list)\n List of stop points\n shape: list of shape points\n shape-sequence of shape points\n\n Returns\n -------\n break_points: list[int]\n stops[i] corresponds to shape[break_points[i]]. This list can\n be used to partition the shape points into segments between\n one stop and the next.\n badness: float\n Lower indicates better fit to the shape. This is the sum of\n distances (in meters) between each stop and its closest\n shape point. 
This is not needed in normal use, but in the\n cases where you must determine the best-fitting shape for a\n stop-sequence, use this."} {"_id": "q_7873", "text": "Get all scheduled stops on a particular route_id.\n\n Given a route_id, return the trip-stop-list with\n latitude/longitudes. This is a bit more tricky than it seems,\n because we have to go from table route->trips->stop_times. This\n function finds an arbitrary trip (in trip table) with this route ID,\n and then returns all stop points for that trip.\n\n Parameters\n ----------\n cur : sqlite3.Cursor\n cursor to sqlite3 DB containing GTFS\n route_id : string or any\n route_id to get stop points of\n offset : int\n LIMIT offset if you don't want the first trip returned.\n tripid_glob : string\n If given, allows you to limit tripids which can be selected.\n Mainly useful in debugging.\n\n Returns\n -------\n stop-list\n List of stops in stop-seq format."} {"_id": "q_7874", "text": "Interpolate passage times for shape points.\n\n Parameters\n ----------\n shape_distances: list\n list of cumulative distances along the shape\n shape_breaks: list\n list of shape_breaks\n stop_times: list\n list of stop_times\n\n Returns\n -------\n shape_times: list of ints (seconds) / numpy array\n interpolated shape passage times\n\n The values of stop times before the first shape-break are given the first\n stopping time, and any shape points after the last break point are\n given the value of the last shape point."} {"_id": "q_7875", "text": "Get the earliest arrival time at the target, given a departure time.\n\n Parameters\n ----------\n dep_time : float, int\n time in unix seconds\n transfer_margin: float, int\n transfer margin in seconds\n\n Returns\n -------\n arrival_time : float\n Arrival time in the given time unit (seconds after unix epoch)."} {"_id": "q_7876", "text": "Get a stop-to-stop network describing a single mode of travel.\n\n Parameters\n ----------\n gtfs : gtfspy.GTFS\n route_type : int\n See 
gtfspy.route_types.TRANSIT_ROUTE_TYPES for the list of possible types.\n link_attributes: list[str], optional\n defaulting to use the following link attributes:\n \"n_vehicles\" : Number of vehicles passed\n \"duration_min\" : minimum travel time between stops\n \"duration_max\" : maximum travel time between stops\n \"duration_median\" : median travel time between stops\n \"duration_avg\" : average travel time between stops\n \"d\" : distance along straight line (wgs84_distance)\n \"distance_shape\" : minimum distance along shape\n \"capacity_estimate\" : approximate capacity passed through the stop\n \"route_I_counts\" : dict from route_I to counts\n start_time_ut: int\n start time of the time span (in unix time)\n end_time_ut: int\n end time of the time span (in unix time)\n\n Returns\n -------\n net: networkx.DiGraph\n A directed graph"} {"_id": "q_7877", "text": "Compute stop-to-stop networks for all travel modes and combine them into a single network.\n The modes of transport are encoded to a single network.\n The network consists of multiple links corresponding to each travel mode.\n Walk mode is not included.\n\n Parameters\n ----------\n gtfs: gtfspy.GTFS\n\n Returns\n -------\n net: networkx.MultiDiGraph\n keys should be one of route_types.TRANSIT_ROUTE_TYPES (i.e. 
GTFS route_types)"} {"_id": "q_7878", "text": "Compute the temporal network of the data, and return it as a pandas.DataFrame\n\n Parameters\n ----------\n gtfs : gtfspy.GTFS\n start_time_ut: int | None\n start time of the time span (in unix time)\n end_time_ut: int | None\n end time of the time span (in unix time)\n route_type: int | None\n Specifies which modes of public transport are included, or whether all modes should be included.\n The int should be one of the standard GTFS route_types:\n (see also gtfspy.route_types.TRANSIT_ROUTE_TYPES )\n If route_type is not specified, all modes are included.\n\n Returns\n -------\n events_df: pandas.DataFrame\n Columns: departure_stop, arrival_stop, departure_time_ut, arrival_time_ut, route_type, route_I, trip_I"} {"_id": "q_7879", "text": "Get stop pairs through which transfers take place\n\n Returns\n -------\n transfer_stop_pairs: list"} {"_id": "q_7880", "text": "Get name of the GTFS timezone\n\n Returns\n -------\n timezone_name : str\n name of the time zone, e.g. \"Europe/Helsinki\""} {"_id": "q_7881", "text": "Get the shapes of all routes.\n\n Parameters\n ----------\n use_shapes : bool, optional\n by default True (i.e. 
use shapes as the name of the function indicates)\n if False (falls back to lats and longitudes)\n\n Returns\n -------\n routeShapes: list of dicts that should have the following keys\n name, type, agency, lats, lons\n with types\n list, list, str, list, list"} {"_id": "q_7882", "text": "Get closest stop to a given location.\n\n Parameters\n ----------\n lat: float\n latitude coordinate of the location\n lon: float\n longitude coordinate of the location\n\n Returns\n -------\n stop_I: int\n the index of the stop in the database"} {"_id": "q_7883", "text": "Check that a trip takes place during a day\n\n Parameters\n ----------\n trip_I : int\n index of the trip in the gtfs database\n day_start_ut : int\n the starting time of the day in unix time (seconds)\n\n Returns\n -------\n takes_place: bool\n boolean value describing whether the trip takes place during\n the given day or not"} {"_id": "q_7884", "text": "Get all possible day start times between start_ut and end_ut\n Currently this function is used only by get_tripIs_within_range_by_dsut\n\n Parameters\n ----------\n start_ut : list\n start time in unix time\n end_ut : list\n end time in unix time\n max_time_overnight : list\n the maximum length of time that a trip can take place\n during the next day (i.e. 
after midnight run times like 25:35)\n\n Returns\n -------\n day_start_times_ut : list\n list of ints (unix times in seconds) for returning all possible day\n start times\n start_times_ds : list\n list of ints (unix times in seconds) stating the valid start time in\n day seconds\n end_times_ds : list\n list of ints (unix times in seconds) stating the valid end times in\n day_seconds"} {"_id": "q_7885", "text": "Get all stop data as a pandas DataFrame for all stops, or an individual stop\n\n Parameters\n ----------\n stop_I : int\n stop index\n\n Returns\n -------\n stop: pandas.DataFrame"} {"_id": "q_7886", "text": "Obtain a list of events that take place during a time interval.\n Each event need only partially overlap the given time interval.\n Does not include walking events.\n\n Parameters\n ----------\n start_time_ut : int\n start of the time interval in unix time (seconds)\n end_time_ut: int\n end of the time interval in unix time (seconds)\n route_type: int\n consider only events for this route_type\n\n Returns\n -------\n events: pandas.DataFrame\n with the following columns and types\n dep_time_ut: int\n arr_time_ut: int\n from_stop_I: int\n to_stop_I: int\n trip_I : int\n shape_id : int\n route_type : int\n\n See also\n --------\n get_transit_events_in_time_span : an older version of the same thing"} {"_id": "q_7887", "text": "Return the first and last day_start_ut\n\n Returns\n -------\n first_day_start_ut: int\n last_day_start_ut: int"} {"_id": "q_7888", "text": "Recover pre-computed travel_impedance between od-pairs from the database.\n\n Returns\n -------\n values: number | Pandas DataFrame"} {"_id": "q_7889", "text": "Update the profile with the new labels.\n Each new label should have the same departure_time.\n\n Parameters\n ----------\n new_labels: list[LabelTime]\n\n Returns\n -------\n added: bool\n whether new_pareto_tuple was added to the set of pareto-optimal tuples"} {"_id": "q_7890", "text": "Get the pareto_optimal set of Labels, 
given a departure time.\n\n Parameters\n ----------\n dep_time : float, int\n time in unix seconds\n first_leg_can_be_walk : bool, optional\n whether to allow walking to target to be included into the profile\n (I.e. whether this function is called when scanning a pseudo-connection:\n \"double\" walks are not allowed.)\n connection_arrival_time: float, int, optional\n used for computing the walking label if dep_time, i.e., connection.arrival_stop_next_departure_time, is infinity\n connection: connection object\n\n Returns\n -------\n pareto_optimal_labels : set\n Set of Labels"} {"_id": "q_7891", "text": "Do the actual import. Copy data and store in connection object.\n\n This function:\n - Creates the tables\n - Imports data (using self.gen_rows)\n - Runs any post_import hooks.\n - Creates any indexes\n - Does *not* run self.make_views - those must be done\n after all tables are loaded."} {"_id": "q_7892", "text": "Get mean latitude AND longitude of stops\n\n Parameters\n ----------\n gtfs: GTFS\n\n Returns\n -------\n mean_lat : float\n mean_lon : float"} {"_id": "q_7893", "text": "Writes data from get_stats to csv file\n\n Parameters\n ----------\n gtfs: GTFS\n path_to_csv: str\n filepath to the csv file to be generated\n re_write:\n instead of appending, create a new one."} {"_id": "q_7894", "text": "Return the frequency of all types of routes per day.\n\n Parameters\n -----------\n gtfs: GTFS\n\n Returns\n -------\n pandas.DataFrame with columns\n route_I, type, frequency"} {"_id": "q_7895", "text": "A Python decorator for printing out the execution time for a function.\n\n Adapted from:\n www.andreas-jung.com/contents/a-python-decorator-for-measuring-the-execution-time-of-methods"} {"_id": "q_7896", "text": "When receiving the filled out form, check for valid access."} {"_id": "q_7897", "text": "Return a form class for a given string pointing to a lockdown form."} {"_id": "q_7898", "text": "Check if each request is allowed to access the current resource."} 
{"_id": "q_7899", "text": "Handle redirects properly."} {"_id": "q_7900", "text": "Get the top or flop N results based on a column value for each specified group column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `value` (*str*): column name on which you will rank the results\n - `limit` (*int*): Number to specify the N results you want to retrieve.\n Use a positive number x to retrieve the first x results.\n Use a negative number -x to retrieve the last x results.\n\n *optional :*\n - `order` (*str*): `\"asc\"` or `\"desc\"` to sort by ascending or descending order. By default : `\"asc\"`.\n - `group` (*str*, *list of str*): name(s) of columns on which you want to perform the group operation.\n\n ---\n\n ### Example\n\n **Input**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | lili | 1 | 50 |\n | lili | 1 | 20 |\n | toto | 1 | 100 |\n | toto | 1 | 200 |\n | toto | 1 | 300 |\n | lala | 1 | 100 |\n | lala | 1 | 150 |\n | lala | 1 | 250 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |\n\n\n ```cson\n top:\n value: 'value'\n limit: 4\n order: 'asc'\n ```\n\n **Output**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | lala | 1 | 250 |\n | toto | 1 | 300 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |"} {"_id": "q_7901", "text": "Get the top or flop N results based on a function and a column value that aggregates the input.\n The result is composed of all the original lines, including only lines corresponding\n to the top groups\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `value` (*str*): Name of the column on which you will rank the results.\n - `limit` (*int*): Number to specify the N results you want to retrieve from the sorted values.\n - Use a positive number x to retrieve the first x results.\n - Use a negative number -x to retrieve the last x results.\n - `aggregate_by` (*list of str*): name(s) of columns you want to aggregate\n\n *optional :*\n - `order` (*str*): `\"asc\"` or `\"desc\"` to sort by 
ascending or descending order. By default : `\"asc\"`.\n - `group` (*str*, *list of str*): name(s) of columns on which you want to perform the group operation.\n - `function` : Function to use to group over the group column\n\n ---\n\n ### Example\n\n **Input**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | lili | 1 | 50 |\n | lili | 1 | 20 |\n | toto | 1 | 100 |\n | toto | 1 | 200 |\n | toto | 1 | 300 |\n | lala | 1 | 100 |\n | lala | 1 | 150 |\n | lala | 1 | 250 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |\n\n ```cson\n top_group:\n group: [\"Category\"]\n value: 'value'\n aggregate_by: [\"variable\"]\n limit: 2\n order: \"desc\"\n ```\n\n **Output**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | toto | 1 | 100 |\n | toto | 1 | 200 |\n | toto | 1 | 300 |\n | lala | 1 | 100 |\n | lala | 1 | 150 |\n | lala | 1 | 250 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |"} {"_id": "q_7902", "text": "Convert string column into datetime column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column to format\n - `format` (*str*): current format of the values (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))"} {"_id": "q_7903", "text": "Convert datetime column into string column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - column (*str*): name of the column to format\n - format (*str*): format of the result values (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n\n *optional :*\n - new_column (*str*): name of the output column. 
By default `column` is overwritten."} {"_id": "q_7904", "text": "Convert the format of a date\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column to change the format\n - `output_format` (*str*): format of the output values (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n\n *optional :*\n - `input_format` (*str*): format of the input values (by default let the parser detect it)\n - `new_column` (*str*): name of the output column (by default overwrite `column`)\n - `new_time_zone` (*str*): name of new time zone (by default no time zone conversion is done)\n\n ---\n\n ### Example\n\n **Input**\n\n label | date\n :------:|:----:\n France | 2017-03-22\n Europe | 2016-03-22\n\n ```cson\n change_date_format:\n column: 'date'\n input_format: '%Y-%m-%d'\n output_format: '%Y-%m'\n ```\n\n **Output**\n\n label | date\n :------:|:----:\n France | 2017-03\n Europe | 2016-03"} {"_id": "q_7905", "text": "Convert column's type into type\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column to convert\n - `type` (*str*): output type. 
It can be :\n - `\"int\"` : integer type\n - `\"float\"` : general number type\n - `\"str\"` : text type\n\n *optional :*\n - `new_column` (*str*): name of the output column.\n By default the `column` argument is modified.\n\n ---\n\n ### Example\n\n **Input**\n\n | Column 1 | Column 2 | Column 3 |\n |:-------:|:--------:|:--------:|\n | 'one' | '2014' | 30.0 |\n | 'two' | 2015.0 | '1' |\n | 3.1 | 2016 | 450 |\n\n ```cson\n postprocess: [\n cast:\n column: 'Column 1'\n type: 'str'\n cast:\n column: 'Column 2'\n type: 'int'\n cast:\n column: 'Column 3'\n type: 'float'\n ]\n ```\n\n **Output**\n\n | Column 1 | Column 2 | Column 3 |\n |:-------:|:------:|:--------:|\n | 'one' | 2014 | 30.0 |\n | 'two' | 2015 | 1.0 |\n | '3.1' | 2016 | 450.0 |"} {"_id": "q_7906", "text": "Return a line for each bar of a waterfall chart, totals, groups, subgroups.\n Compute the variation and variation rate for each line.\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `date` (*str*): name of the column that identifies the period of each line\n - `value` (*str*): name of the column that contains the value for each line\n - `start` (*dict*):\n - `label`: text displayed under the first master column\n - `id`: value in the date col that identifies lines for the first period\n - `end` (*dict*):\n - `label`: text displayed under the last master column\n - `id`: value in the date col that identifies lines for the second period\n\n *optional :*\n - `upperGroup` (*dict*):\n - `id`: name of the column that contains upperGroups unique IDs\n - `label`: not required, text displayed under each upperGroups bars,\n using ID when it's absent\n - `groupsOrder`: not required, order of upperGroups\n - `insideGroup` (*dict*):\n - `id`: name of the column that contains insideGroups unique IDs\n - `label`: not required, text displayed under each insideGroups bars,\n using ID when it's absent\n - `groupsOrder`: not required, order of insideGroups\n - `filters` (*list*): columns to filter on\n\n ---\n\n ### Example\n\n 
**Input**\n\n | product_id | played | date | ord | category_id | category_name |\n |:------------:|:--------:|:------:|:-----:|:-------------:|:---------------:|\n | super clap | 12 | t1 | 1 | clap | Clap |\n | clap clap | 1 | t1 | 10 | clap | Clap |\n | tac | 1 | t1 | 1 | snare | Snare |\n | super clap | 10 | t2 | 1 | clap | Clap |\n | tac | 100 | t2 | 1 | snare | Snare |\n | bom | 1 | t2 | 1 | tom | Tom |\n\n\n ```cson\n waterfall:\n upperGroup:\n id: 'category_id'\n label: 'category_name'\n insideGroup:\n id: 'product_id'\n groupsOrder: 'ord'\n date: 'date'\n value: 'played'\n start:\n label: 'Trimestre 1'\n id: 't1'\n end:\n label: 'Trimester 2'\n id: 't2'\n ```\n\n **Output**\n\n | value | label | variation | groups | type | order |\n |:-------:|:-----------:|:-----------:|:--------:|:------:|:-------:|\n | 14 | Trimestre 1 | NaN | NaN | NaN | NaN |\n | -3 | Clap | -0.230769 | clap | parent | NaN |\n | -2 | super clap | -0.166667 | clap | child | 1 |\n | -1 | clap clap | -1 | clap | child | 10 |\n | 99 | Snare | 99 | snare | parent | NaN |\n | 99 | tac | 99 | snare | child | 1 |\n | 1 | Tom | inf | tom | parent | NaN |\n | 1 | bom | inf | tom | child | 1 |\n | 111 | Trimester 2 | NaN | NaN | NaN | NaN |"} {"_id": "q_7907", "text": "Get the absolute numeric value of each element of a column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column\n\n *optional :*\n - `new_column` (*str*): name of the column containing the result.\n By default, no new column will be created and `column` will be replaced.\n\n ---\n\n ### Example\n\n **Input**\n\n | ENTITY | VALUE_1 | VALUE_2 |\n |:------:|:-------:|:-------:|\n | A | -1.512 | -1.504 |\n | A | 0.432 | 0.14 |\n\n ```cson\n absolute_values:\n column: 'VALUE_1'\n new_column: 'Pika'\n ```\n\n **Output**\n\n | ENTITY | VALUE_1 | VALUE_2 | Pika |\n |:------:|:-------:|:-------:|:-----:|\n | A | -1.512 | -1.504 | 1.512 |\n | A | 0.432 | 0.14 | 0.432 |"} {"_id": "q_7908", "text": "Pivot the 
data. Reverse operation of melting\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `index` (*list*): names of index columns.\n - `column` (*str*): column name to pivot on\n - `value` (*str*): column name containing the value to fill the pivoted df\n\n *optional :*\n - `agg_function` (*str*): aggregation function to use among 'mean' (default), 'count', 'max', 'min'\n\n ---\n\n ### Example\n\n **Input**\n\n | variable | wave | year | value |\n |:--------:|:-------:|:--------:|:-----:|\n | toto | wave 1 | 2014 | 300 |\n | toto | wave 1 | 2015 | 250 |\n | toto | wave 1 | 2016 | 450 |\n\n ```cson\n pivot:\n index: ['variable','wave']\n column: 'year'\n value: 'value'\n ```\n\n **Output**\n\n | variable | wave | 2014 | 2015 | 2016 |\n |:--------:|:-------:|:------:|:----:|:----:|\n | toto | wave 1 | 300 | 250 | 450 |"} {"_id": "q_7909", "text": "Pivot a dataframe by group of variables\n\n ---\n\n ### Parameters\n\n *mandatory :*\n * `variable` (*str*): name of the column used to create the groups.\n * `value` (*str*): name of the column containing the value to fill the pivoted df.\n * `new_columns` (*list of str*): names of the new columns.\n * `groups` (*dict*): names of the groups with their corresponding variables.\n **Warning**: the list of variables must have the same order as `new_columns`\n\n *optional :*\n * `id_cols` (*list of str*) : names of other columns to keep, default `None`.\n\n ---\n\n ### Example\n\n **Input**\n\n | type | variable | montant |\n |:----:|:----------:|:-------:|\n | A | var1 | 5 |\n | A | var1_evol | 0.3 |\n | A | var2 | 6 |\n | A | var2_evol | 0.2 |\n\n ```cson\n pivot_by_group :\n id_cols: ['type']\n variable: 'variable'\n value: 'montant'\n new_columns: ['value', 'variation']\n groups:\n 'Group 1' : ['var1', 'var1_evol']\n 'Group 2' : ['var2', 'var2_evol']\n ```\n\n **Output**\n\n | type | variable | value | variation |\n |:----:|:----------:|:-------:|:---------:|\n | A | Group 1 | 5 | 0.3 |\n | A | Group 2 | 6 | 0.2 |"} 
{"_id": "q_7910", "text": "Aggregate values by groups.\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `group_cols` (*list*): list of columns used to group data\n - `aggregations` (*dict*): dictionary of values columns to group as keys and aggregation\n function to use as values (See the [list of aggregation functions](\n https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#aggregation))\n\n ---\n\n ### Example\n\n **Input**\n\n | ENTITY | YEAR | VALUE_1 | VALUE_2 |\n |:------:|:----:|:-------:|:-------:|\n | A | 2017 | 10 | 3 |\n | A | 2017 | 20 | 1 |\n | A | 2018 | 10 | 5 |\n | A | 2018 | 30 | 4 |\n | B | 2017 | 60 | 4 |\n | B | 2017 | 40 | 3 |\n | B | 2018 | 50 | 7 |\n | B | 2018 | 60 | 6 |\n\n ```cson\n groupby:\n group_cols: ['ENTITY', 'YEAR']\n aggregations:\n 'VALUE_1': 'sum',\n 'VALUE_2': 'mean'\n ```\n\n **Output**\n\n | ENTITY | YEAR | VALUE_1 | VALUE_2 |\n |:------:|:----:|:-------:|:-------:|\n | A | 2017 | 30 | 2.0 |\n | A | 2018 | 40 | 4.5 |\n | B | 2017 | 100 | 3.5 |\n | B | 2018 | 110 | 6.5 |"} {"_id": "q_7911", "text": "DEPRECATED - please use `compute_cumsum` instead"} {"_id": "q_7912", "text": "Decorator to catch an exception and don't raise it.\n Logs information if a decorator failed.\n\n Note:\n We don't want possible exceptions during logging to be raised.\n This is used to decorate any function that gets executed\n before or after the execution of the decorated function."} {"_id": "q_7913", "text": "Replaces data values and column names according to the locale\n\n ---\n\n ### Parameters\n\n - `values` (optional: dict):\n - key: term to be replaced\n - value:\n - key: the locale e.g. 'en' or 'fr'\n - value: term's translation\n - `columns` (optional: dict):\n - key: column name to be replaced\n - value:\n - key: the locale e.g. 
'en' or 'fr'\n - value: column name's translation\n - `locale` (optional: str): the locale you want to use.\n By default the client locale is used.\n\n ---\n\n ### Example\n\n **Input**\n\n | label | value |\n |:----------------:|:-----:|\n | France | 100 |\n | Europe wo France | 500 |\n\n ```cson\n rename:\n values:\n 'Europe wo France':\n 'en': 'Europe excl. France'\n 'fr': 'Europe excl. France'\n columns:\n 'value':\n 'en': 'revenue'\n 'fr': 'revenue'\n ```\n\n **Output**\n\n | label | revenue |\n |:-------------------:|:-------:|\n | France | 100 |\n | Europe excl. France | 500 |"} {"_id": "q_7914", "text": "Aggregates data to reproduce \"All\" category for requester\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `id_cols` (*list*): the columns id to group\n - `cols_for_combination` (*dict*): columns corresponding to\n the filters as key and their default value as value\n\n *optional :*\n - `agg_func` (*str*, *list* or *dict*): the function(s) to use for aggregating the data.\n Accepted combinations are:\n - string function name\n - list of functions and/or function names, e.g. [np.sum, 'mean']\n - dict of axis labels -> functions, function names or list of such."} {"_id": "q_7915", "text": "Get the value of a function's parameter based on its signature\n and the call's args and kwargs.\n\n Example:\n >>> def foo(a, b, c=3, d=4):\n ... 
pass\n ...\n >>> # what would be the value of \"c\" when calling foo(1, b=2, c=33) ?\n >>> get_param_value_from_func_call('c', foo, [1], {'b': 2, 'c': 33})\n 33"} {"_id": "q_7916", "text": "Creates aggregates following a given hierarchy\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `levels` (*list of str*): name of the columns composing the hierarchy (from the top to the bottom level).\n - `groupby_vars` (*list of str*): name of the columns with value to aggregate.\n - `extra_groupby_cols` (*list of str*) optional: other columns used to group in each level.\n\n *optional :*\n - `var_name` (*str*) : name of the result variable column. By default, `\u201ctype\u201d`.\n - `value_name` (*str*): name of the result value column. By default, `\u201cvalue\u201d`.\n - `agg_func` (*str*): name of the aggregation operation. By default, `\u201csum\u201d`.\n - `drop_levels` (*list of str*): the names of the levels that you may want to discard from the output.\n\n ---\n\n ### Example\n\n **Input**\n\n | Region | City | Population |\n |:---------:|:--------:|:-----------:|\n | Idf | Panam| 200 |\n | Idf | Antony | 50 |\n | Nord | Lille | 20 |\n\n ```cson\n roll_up:\n levels: [\"Region\", \"City\"]\n groupby_vars: \"Population\"\n ```\n\n **Output**\n\n | Region | City | Population | value | type |\n |:---------:|:--------:|:-----------:|:--------:|:------:|\n | Idf | Panam| 200 | Panam | City |\n | Idf | Antony | 50 | Antony | City |\n | Nord | Lille | 20 | Lille | City |\n | Idf | Nan | 250 | Idf | Region |\n | Nord | Nan | 20 | Nord | Region |"} {"_id": "q_7917", "text": "Keep the row of the data corresponding to the minimal value in a column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (str): name of the column containing the value you want to keep the minimum\n\n *optional :*\n - `groups` (*str or list(str)*): name of the column(s) used for 'groupby' logic\n (the function will return the argmin by group)\n ---\n\n ### Example\n\n **Input**\n\n | variable | 
wave | year | value |\n |:--------:|:-------:|:--------:|:-----:|\n | toto | wave 1 | 2014 | 300 |\n | toto | wave 1 | 2015 | 250 |\n | toto | wave 1 | 2016 | 450 |\n\n ```cson\n argmin:\n column: 'value'\n ```\n\n **Output**\n\n | variable | wave | year | value |\n |:--------:|:-------:|:--------:|:-----:|\n | toto | wave 1 | 2015 | 250 |"} {"_id": "q_7918", "text": "Can fill NaN values from a column with a given value or a column\n\n ---\n\n ### Parameters\n\n - `column` (*str*): name of the column you want to fill\n - `value`: NaN will be replaced by this value\n - `column_value`: NaN will be replaced by the value from this column\n\n *NOTE*: You must set either the 'value' parameter or the 'column_value' parameter\n\n ---\n\n ### Example\n\n **Input**\n\n | variable | wave | year | my_value |\n |:--------:|:-------:|:--------:|:--------:|\n | toto | wave 1 | 2014 | 300 |\n | toto | wave 1 | 2015 | |\n | toto | wave 1 | 2016 | 450 |\n\n ```cson\n fillna:\n column: 'my_value'\n value: 0\n ```\n\n **Output**\n\n | variable | wave | year | my_value |\n |:--------:|:-------:|:--------:|:--------:|\n | toto | wave 1 | 2014 | 300 |\n | toto | wave 1 | 2015 | 0 |\n | toto | wave 1 | 2016 | 450 |"} {"_id": "q_7919", "text": "Add a human-readable offset to `dateobj` and return the corresponding date.\n\n Rely on `pandas.Timedelta` and add the following extra shortcuts:\n - \"w\", \"week\" and \"weeks\" for a week (i.e. 7 days)\n - \"month\", \"months\" for a month (i.e. no day computation, just increment the month)\n - \"y\", \"year\", \"years\" for a year (i.e. no day computation, just increment the year)"} {"_id": "q_7920", "text": "return `dateobj` + `nb_months`\n\n If landing date doesn't exist (e.g. 
February 30th), return the last\n day of the landing month.\n\n >>> add_months(date(2018, 1, 1), 1)\n datetime.date(2018, 2, 1)\n >>> add_months(date(2018, 1, 1), -1)\n datetime.date(2017, 12, 1)\n >>> add_months(date(2018, 1, 1), 25)\n datetime.date(2020, 2, 1)\n >>> add_months(date(2018, 1, 1), -25)\n datetime.date(2015, 12, 1)\n >>> add_months(date(2018, 1, 31), 1)\n datetime.date(2018, 2, 28)"} {"_id": "q_7921", "text": "return `dateobj` + `nb_years`\n\n If landing date doesn't exist (e.g. February 30th), return the last\n day of the landing month.\n\n >>> add_years(date(2018, 1, 1), 1)\n datetime.date(2019, 1, 1)\n >>> add_years(date(2018, 1, 1), -1)\n datetime.date(2017, 1, 1)\n >>> add_years(date(2020, 2, 29), 1)\n datetime.date(2021, 2, 28)\n >>> add_years(date(2020, 2, 29), -1)\n datetime.date(2019, 2, 28)"} {"_id": "q_7922", "text": "Parse `datestr` and return the corresponding date object.\n\n `datestr` should be a string matching `date_fmt` and parseable by `strptime`\n but some offset can also be added using `(datestr) + OFFSET` or `(datestr) -\n OFFSET` syntax. When using this syntax, `OFFSET` should be understandable by\n `pandas.Timedelta` (cf.\n http://pandas.pydata.org/pandas-docs/stable/timedeltas.html) and `w`, `week`,\n `month` and `year` offset keywords are also accepted. `datestr` MUST be wrapped\n in parentheses.\n\n Additionally, the following symbolic names are supported: `TODAY`,\n `YESTERDAY`, `TOMORROW`.\n\n Example usage:\n\n >>> parse_date('2018-01-01', '%Y-%m-%d')\n datetime.date(2018, 1, 1)\n >>> parse_date('(2018-01-01) + 1day', '%Y-%m-%d')\n datetime.date(2018, 1, 2)\n >>> parse_date('(2018-01-01) + 2weeks', '%Y-%m-%d')\n datetime.date(2018, 1, 15)\n\n Parameters: `datestr`: the date to parse, formatted as `date_fmt`\n `date_fmt`: expected date format\n\n Returns: The `date` object. 
If the date could not be parsed, a ValueError will\n be raised."} {"_id": "q_7923", "text": "Filter your dataframe's rows by date.\n\n This function will interpret `start`, `stop` and `atdate` and build\n the corresponding date range. The caller must specify either:\n\n - `atdate`: keep all rows matching this date exactly,\n - `start`: keep all rows matching this date onwards,\n - `stop`: keep all rows matching dates before this one,\n - `start` and `stop`: keep all rows between `start` and `stop`.\n\n Any other combination will raise an error. The lower bound of the date range\n will be included, the upper bound will be excluded.\n\n When specified, `start`, `stop` and `atdate` values are expected to match the\n `date_format` format or a known symbolic value (i.e. 'TODAY', 'YESTERDAY' or 'TOMORROW').\n\n Additionally, the offset syntax \"(date) + offset\" is also supported (mind\n the parentheses around the date string). In that case, the offset must use\n one of the syntaxes supported by `pandas.Timedelta` (see [pandas doc](\n http://pandas.pydata.org/pandas-docs/stable/timedeltas.html))\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `date_col` (*str*): the name of the dataframe's column to filter on\n\n *optional :*\n - `date_format` (*str*): expected date format in column `date_col` (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n - `start` (*str*): if specified, lower bound (included) of the date range\n - `stop` (*str*): if specified, upper bound (excluded) of the date range\n - `atdate` (*str*): if specified, the exact date we're filtering on"} {"_id": "q_7924", "text": "Add a column to the dataframe according to the groupby logic on group_cols\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column you want percentages computed on\n\n *optional :*\n - `group_cols` (*list*): names of columns for the groupby logic\n - `new_column` (*str*): name of the output column. 
By default `column` will be overwritten.\n\n ---\n\n **Input**\n\n | gender | sport | number |\n |:------:|:----------:|:------:|\n | male | bicycle | 17 |\n | female | basketball | 17 |\n | male | basketball | 3 |\n | female | football | 7 |\n | female | running | 30 |\n | male | running | 20 |\n | male | football | 21 |\n | female | bicycle | 17 |\n\n ```cson\n percentage:\n new_column: 'number_percentage'\n column: 'number'\n group_cols: ['sport']\n ```\n\n **Output**\n\n | gender | sport | number | number_percentage |\n |:------:|:----------:|:------:|:-----------------:|\n | male | bicycle | 17 | 50.0 |\n | female | basketball | 17 | 85.0 |\n | male | basketball | 3 | 15.0 |\n | female | football | 7 | 25.0 |\n | female | running | 30 | 60.0 |\n | male | running | 20 | 40.0 |\n | male | football | 21 | 75.0 |\n | female | bicycle | 17 | 50.0 |"} {"_id": "q_7925", "text": "Get descriptor base path if string or return None."} {"_id": "q_7926", "text": "Validate this Data Package."} {"_id": "q_7927", "text": "Push Data Package to storage.\n\n All parameters should be used as keyword arguments.\n\n Args:\n descriptor (str): path to descriptor\n backend (str): backend name like `sql` or `bigquery`\n backend_options (dict): backend options mentioned in backend docs"} {"_id": "q_7928", "text": "Pull Data Package from storage.\n\n All parameters should be used as keyword arguments.\n\n Args:\n descriptor (str): path where to store descriptor\n name (str): name of the pulled datapackage\n backend (str): backend name like `sql` or `bigquery`\n backend_options (dict): backend options mentioned in backend docs"} {"_id": "q_7929", "text": "Convert resource's path and name to storage's table name.\n\n Args:\n path (str): resource path\n name (str): resource name\n\n Returns:\n str: table name"} {"_id": "q_7930", "text": "Restore schemas from being compatible with storage schemas.\n\n Foreign keys related operations.\n\n Args:\n list: resources from storage\n\n Returns:\n 
list: restored resources"} {"_id": "q_7931", "text": "It is possible for some of gdb's output to be read before it completely finished its response.\n In that case, a partial mi response was read, which cannot be parsed into structured data.\n We want to ALWAYS parse complete mi records. To do this, we store a buffer of gdb's\n output if the output did not end in a newline.\n\n Args:\n raw_output: Contents of the gdb mi output\n buf (str): Buffered gdb response from the past. This is incomplete and needs to be prepended to\n gdb's next output.\n\n Returns:\n (raw_output, buf)"} {"_id": "q_7932", "text": "Write to gdb process. Block while parsing responses from gdb for a maximum of timeout_sec.\n\n Args:\n mi_cmd_to_write (str or list): String to write to gdb. If list, it is joined by newlines.\n timeout_sec (float): Maximum number of seconds to wait for response before exiting. Must be >= 0.\n raise_error_on_timeout (bool): If read_response is True, raise error if no response is received\n read_response (bool): Block and read response. If there is a separate thread running,\n this can be False and the reading thread can read the output.\n Returns:\n List of parsed gdb responses if read_response is True, otherwise []\n Raises:\n NoGdbProcessError if there is no gdb subprocess running\n TypeError if mi_cmd_to_write is not valid"} {"_id": "q_7933", "text": "Get response from GDB, and block while doing so. If GDB does not have any response ready to be read\n by timeout_sec, an exception is raised.\n\n Args:\n timeout_sec (float): Maximum time to wait for response. Must be >= 0. 
Will return once\n the timeout elapses.\n raise_error_on_timeout (bool): Whether an exception should be raised if no response was found\n after timeout_sec\n\n Returns:\n List of parsed GDB responses, returned from gdbmiparser.parse_response, with the\n additional key 'stream' which is either 'stdout' or 'stderr'\n\n Raises:\n GdbTimeoutError if response is not received within timeout_sec\n ValueError if select returned unexpected file number\n NoGdbProcessError if there is no gdb subprocess running"} {"_id": "q_7934", "text": "Get responses on Windows. Assume no support for select and use a while loop."} {"_id": "q_7935", "text": "Get responses on a Unix-like system. Use select to wait for output."} {"_id": "q_7936", "text": "Read count characters starting at self.index,\n and return those characters as a string"} {"_id": "q_7937", "text": "Parse gdb mi text and turn it into a dictionary.\n\n See https://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Stream-Records.html#GDB_002fMI-Stream-Records\n for details on types of gdb mi output.\n\n Args:\n gdb_mi_text (str): String output from gdb\n\n Returns:\n dict with the following keys:\n type (either 'notify', 'result', 'console', 'log', 'target', 'done'),\n message (str or None),\n payload (str, list, dict, or None)"} {"_id": "q_7938", "text": "Get notify message and payload dict"} {"_id": "q_7939", "text": "Optimize by SGD, AdaGrad, or AdaDelta."} {"_id": "q_7940", "text": "Return updates in the training."} {"_id": "q_7941", "text": "Get parameters to be optimized."} {"_id": "q_7942", "text": "Compute first glimpse position using down-sampled image."} {"_id": "q_7943", "text": "All code that creates parameters should be put into the 'setup' function."} {"_id": "q_7944", "text": "Build the computation graph here."} {"_id": "q_7945", "text": "Process all data with a given function.\n The scheme of the function should be x, y -> x, y."} {"_id": "q_7946", "text": "Make targets one-hot vectors."} {"_id": "q_7947", "text": "Print dataset statistics."} 
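The "Make targets one-hot vectors" record above can be illustrated with a minimal sketch; `to_one_hot` is a hypothetical helper written for this example, not the library's actual function.

```python
# Minimal sketch of converting integer class targets into one-hot vectors,
# as described by the "Make targets one-hot vectors." docstring above.
# `to_one_hot` is a hypothetical helper, not the library's implementation.

def to_one_hot(targets, num_classes):
    """Return a list of one-hot vectors, one per integer target."""
    vectors = []
    for t in targets:
        vec = [0] * num_classes  # start with an all-zero vector
        vec[t] = 1               # set the position of the target class
        vectors.append(vec)
    return vectors

print(to_one_hot([0, 2, 1], 3))  # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```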
{"_id": "q_7948", "text": "We train over mini-batches and evaluate periodically."} {"_id": "q_7949", "text": "Sample outputs from LM."} {"_id": "q_7950", "text": "Compute the alignment weights based on the previous state."} {"_id": "q_7951", "text": "Pad sequences to given length in the left or right side."} {"_id": "q_7952", "text": "RMSPROP optimization core."} {"_id": "q_7953", "text": "Run the model with validation data and return costs."} {"_id": "q_7954", "text": "This function will be called after each iteration."} {"_id": "q_7955", "text": "Create inner loop variables."} {"_id": "q_7956", "text": "Internal scan with dummy input variables."} {"_id": "q_7957", "text": "Momentum SGD optimization core."} {"_id": "q_7958", "text": "Skip N batches in the training."} {"_id": "q_7959", "text": "Load parameters for the training.\n This method can load free parameters and resume the training progress."} {"_id": "q_7960", "text": "Train the model and return costs."} {"_id": "q_7961", "text": "Run one training iteration."} {"_id": "q_7962", "text": "Run one valid iteration, return true if to continue training."} {"_id": "q_7963", "text": "Get specified split of data."} {"_id": "q_7964", "text": "Report usage of training parameters."} {"_id": "q_7965", "text": "An alias of deepy.tensor.var."} {"_id": "q_7966", "text": "Create vars given a dataset and set test values.\n Useful when dataset is already defined."} {"_id": "q_7967", "text": "Create a shared theano scalar value."} {"_id": "q_7968", "text": "Stack encoding layers, this must be done before stacking decoding layers."} {"_id": "q_7969", "text": "Stack decoding layers."} {"_id": "q_7970", "text": "Encode given input."} {"_id": "q_7971", "text": "Register the layer so that it's param will be trained.\n But the output of the layer will not be stacked."} {"_id": "q_7972", "text": "Monitoring the outputs of each layer.\n Useful for troubleshooting convergence problems."} {"_id": "q_7973", "text": "Return all 
parameters."} {"_id": "q_7974", "text": "Set up variables."} {"_id": "q_7975", "text": "Save parameters to file."} {"_id": "q_7976", "text": "Load parameters from file."} {"_id": "q_7977", "text": "Print network statistics."} {"_id": "q_7978", "text": "Register parameters."} {"_id": "q_7979", "text": "Register updates that will only be executed in training phase."} {"_id": "q_7980", "text": "Register monitors they should be tuple of name and Theano variable."} {"_id": "q_7981", "text": "Get the L2 norm of multiple tensors.\n This function is taken from blocks."} {"_id": "q_7982", "text": "dumps one element to file_obj, a file opened in write mode"} {"_id": "q_7983", "text": "Load parameters to the block."} {"_id": "q_7984", "text": "Creates |oauth2| request elements."} {"_id": "q_7985", "text": "We need to override this method to fix Facebooks naming deviation."} {"_id": "q_7986", "text": "Google doesn't accept client ID and secret to be at the same time in\n request parameters and in the basic authorization header in the access\n token request."} {"_id": "q_7987", "text": "Login handler, must accept both GET and POST to be able to use OpenID."} {"_id": "q_7988", "text": "Replaces all values that are single-item iterables with the value of its\n index 0.\n\n :param dict dict_:\n Dictionary to normalize.\n\n :returns:\n Normalized dictionary."} {"_id": "q_7989", "text": "Converts list of tuples to dictionary with duplicate keys converted to\n lists.\n\n :param list items:\n List of tuples.\n\n :returns:\n :class:`dict`"} {"_id": "q_7990", "text": "Parses response body from JSON, XML or query string.\n\n :param body:\n string\n\n :returns:\n :class:`dict`, :class:`list` if input is JSON or query string,\n :class:`xml.etree.ElementTree.Element` if XML."} {"_id": "q_7991", "text": "Returns a provider class.\n\n :param class_name: :class:`string` or\n :class:`authomatic.providers.BaseProvider` subclass."} {"_id": "q_7992", "text": "Creates the value for ``Set-Cookie`` 
HTTP header.\n\n :param bool delete:\n If ``True`` the cookie value will be ``deleted`` and the\n Expires value will be ``Thu, 01-Jan-1970 00:00:01 GMT``."} {"_id": "q_7993", "text": "Creates signature for the session."} {"_id": "q_7994", "text": "Converts the value to a signed string with timestamp.\n\n :param value:\n Object to be serialized.\n\n :returns:\n Serialized value."} {"_id": "q_7995", "text": "``True`` if credentials are valid, ``False`` if expired."} {"_id": "q_7996", "text": "Returns ``True`` if credentials expire sooner than specified.\n\n :param int seconds:\n Number of seconds.\n\n :returns:\n ``True`` if credentials expire sooner than specified,\n else ``False``."} {"_id": "q_7997", "text": "Converts the credentials to a percent-encoded string to be stored for\n later use.\n\n :returns:\n :class:`string`"} {"_id": "q_7998", "text": "Return True if the string is binary data."} {"_id": "q_7999", "text": "The whole response content."} {"_id": "q_8000", "text": "Creates |oauth1| request elements."} {"_id": "q_8001", "text": "Decorator for Flask view functions."} {"_id": "q_8002", "text": "Launches the OpenID authentication procedure."} {"_id": "q_8003", "text": "Logs a message with a pre-formatted prefix.\n\n :param int level:\n Logging level as specified in the\n `logging module `_ of the\n Python standard library.\n\n :param str msg:\n The actual message."} {"_id": "q_8004", "text": "Splits a given URL into the URL base and its params, converted to a list of tuples."} {"_id": "q_8005", "text": "Deletes this worker's subscription."} {"_id": "q_8006", "text": "Workers all share the same subscription so that tasks are\n distributed across all workers."} {"_id": "q_8007", "text": "Enqueues a function for the task queue to execute."} {"_id": "q_8008", "text": "Standalone PSQ worker.\n\n The queue argument must be the full importable path to a psq.Queue\n instance.\n\n Example usage:\n\n psqworker config.q\n\n psqworker --path /opt/app queues.fast"} {"_id": "q_8009", "text": "Gets 
the result of the task.\n\n Arguments:\n timeout: Maximum seconds to wait for a result before raising a\n TimeoutError. If set to None, this will wait forever. If the\n queue doesn't store results and timeout is None, this call will\n never return."} {"_id": "q_8010", "text": "This function is the decorator used to wrap a Sanic route.\n In the simplest case, simply use the default parameters to allow all\n origins in what is the most permissive configuration. If this method\n modifies state or performs authentication which may be brute-forced, you\n should add some degree of protection, such as Cross-Site Request\n Forgery protection.\n\n :param origins:\n The origin, or list of origins to allow requests from.\n The origin(s) may be regular expressions, case-sensitive strings,\n or else an asterisk.\n\n Default : '*'\n :type origins: list, string or regex\n\n :param methods:\n The method or list of methods which the allowed origins are allowed to\n access for non-simple requests.\n\n Default : [GET, HEAD, POST, OPTIONS, PUT, PATCH, DELETE]\n :type methods: list or string\n\n :param expose_headers:\n The header or list of headers which are safe to expose to the API of a CORS API\n specification.\n\n Default : None\n :type expose_headers: list or string\n\n :param allow_headers:\n The header or list of header field names which can be used when this\n resource is accessed by allowed origins. The header(s) may be regular\n expressions, case-sensitive strings, or else an asterisk.\n\n Default : '*', allow all headers\n :type allow_headers: list, string or regex\n\n :param supports_credentials:\n Allows users to make authenticated requests. If true, injects the\n `Access-Control-Allow-Credentials` header in responses. 
This allows\n cookies and credentials to be submitted across domains.\n\n :note: This option cannot be used in conjunction with a '*' origin\n\n Default : False\n :type supports_credentials: bool\n\n :param max_age:\n The maximum time for which this CORS request may be cached. This value\n is set as the `Access-Control-Max-Age` header.\n\n Default : None\n :type max_age: timedelta, integer, string or None\n\n :param send_wildcard: If True, and the origins parameter is `*`, a wildcard\n `Access-Control-Allow-Origin` header is sent, rather than the\n request's `Origin` header.\n\n Default : False\n :type send_wildcard: bool\n\n :param vary_header:\n If True, the header Vary: Origin will be returned as per the W3\n implementation guidelines.\n\n Setting this header when the `Access-Control-Allow-Origin` is\n dynamically generated (e.g. when there is more than one allowed\n origin, and an Origin other than '*' is returned) informs CDNs and other\n caches that the CORS headers are dynamic, and cannot be cached.\n\n If False, the Vary header will never be injected or altered.\n\n Default : True\n :type vary_header: bool\n\n :param automatic_options:\n Only applies to the `cross_origin` decorator. 
If True, Sanic-CORS will\n override Sanic's default OPTIONS handling to return CORS headers for\n OPTIONS requests.\n\n Default : True\n :type automatic_options: bool"} {"_id": "q_8011", "text": "Performs the actual evaluation of Sanic-CORS options and actually\n modifies the response object.\n\n This function is used both in the decorator and the after_request\n callback\n :param sanic.request.Request req:"} {"_id": "q_8012", "text": "Wraps scalars or string types as a list, or returns the iterable instance."} {"_id": "q_8013", "text": "Python 3.4 does not have math.isclose, so we need to steal it and add it here."} {"_id": "q_8014", "text": "Deprecator decorator."} {"_id": "q_8015", "text": "Attempts to deserialize a bytestring into an audiosegment.\n\n :param bstr: The bytestring serialized via an audiosegment's serialize() method.\n :returns: An AudioSegment object deserialized from `bstr`."} {"_id": "q_8016", "text": "Returns an AudioSegment created from the given numpy array.\n\n The numpy array must have shape = (num_samples, num_channels).\n\n :param nparr: The numpy array to create an AudioSegment from.\n :returns: An AudioSegment created from the given array."} {"_id": "q_8017", "text": "Executes a Sox command in a platform-independent manner.\n\n `cmd` must be a format string that includes {inputfile} and {outputfile}."} {"_id": "q_8018", "text": "Returns a copy of this AudioSegment, but whose silence has been removed.\n\n .. note:: This method requires that you have the program 'sox' installed.\n\n .. warning:: This method uses the program 'sox' to perform the task. 
While this is very fast for a single\n function call, the IO may add up for large numbers of AudioSegment objects.\n\n :param duration_s: The number of seconds of \"silence\" that must be present in a row to\n be stripped.\n :param threshold_percentage: Silence is defined as any samples whose absolute value is below\n `threshold_percentage * max(abs(samples in this segment))`.\n :param console_output: If True, will pipe all sox output to the console.\n :returns: A copy of this AudioSegment, but whose silence has been removed."} {"_id": "q_8019", "text": "Transforms the indicated slice of the AudioSegment into the frequency domain and returns the bins\n and the values.\n\n If neither `start_s` or `start_sample` is specified, the first sample of the slice will be the first sample\n of the AudioSegment.\n\n If neither `duration_s` or `num_samples` is specified, the slice will be from the specified start\n to the end of the segment.\n\n .. code-block:: python\n\n # Example for plotting the FFT using this function\n import matplotlib.pyplot as plt\n import numpy as np\n\n seg = audiosegment.from_file(\"furelise.wav\")\n # Just take the first 3 seconds\n hist_bins, hist_vals = seg[1:3000].fft()\n hist_vals_real_normed = np.abs(hist_vals) / len(hist_vals)\n plt.plot(hist_bins / 1000, hist_vals_real_normed)\n plt.xlabel(\"kHz\")\n plt.ylabel(\"dB\")\n plt.show()\n\n .. image:: images/fft.png\n\n :param start_s: The start time in seconds. If this is specified, you cannot specify `start_sample`.\n :param duration_s: The duration of the slice in seconds. If this is specified, you cannot specify `num_samples`.\n :param start_sample: The zero-based index of the first sample to include in the slice.\n If this is specified, you cannot specify `start_s`.\n :param num_samples: The number of samples to include in the slice. 
If this is specified, you cannot\n specify `duration_s`.\n :param zero_pad: If True and the combination of start and duration results in running off the end of\n the AudioSegment, the end is zero padded to prevent this.\n :returns: np.ndarray of frequencies in Hz, np.ndarray of the amount of each frequency\n :raises: ValueError if `start_s` and `start_sample` are both specified and/or if both `duration_s` and\n `num_samples` are specified."} {"_id": "q_8020", "text": "Yields self's data in chunks of frame_duration_ms.\n\n This function is adapted from pywebrtc's example [https://github.com/wiseman/py-webrtcvad/blob/master/example.py].\n\n :param frame_duration_ms: The length of each frame in ms.\n :param zero_pad: Whether or not to zero pad the end of the AudioSegment object to get all\n the audio data out as frames. If not, there may be a part at the end\n of the Segment that is cut off (the part will be <= `frame_duration_ms` in length).\n :returns: A Frame object with properties 'bytes' (the data), 'timestamp' (start time), and 'duration'."} {"_id": "q_8021", "text": "Normalize the values in the AudioSegment so that its `spl` property\n gives `db`.\n\n .. note:: This method is currently broken - it returns an AudioSegment whose\n values are much smaller than reasonable, yet which yield an SPL value\n that equals the given `db`. Such an AudioSegment will not be serializable\n as a WAV file, which will also break any method that relies on SOX.\n I may remove this method in the future, since the SPL of an AudioSegment is\n pretty questionable to begin with.\n\n :param db: The decibels to normalize average to.\n :returns: A new AudioSegment object whose values are changed so that their\n average is `db`.\n :raises: ValueError if there are no samples in this AudioSegment."} {"_id": "q_8022", "text": "Reduces others into this one by concatenating all the others onto this one and\n returning the result. 
Does not modify self, instead, makes a copy and returns that.\n\n :param others: The other AudioSegment objects to append to this one.\n :returns: The concatenated result."} {"_id": "q_8023", "text": "Get the ID corresponding to the offset which occurs first after the given onset_front_id.\n By `first` I mean the front which contains the offset which is closest to the latest point\n in the onset front. By `after`, I mean that the offset must contain only offsets which\n occur after the latest onset in the onset front.\n\n If there is no appropriate offset front, the id returned is -1."} {"_id": "q_8024", "text": "Gets an onset_front and an offset_front such that they both occupy at least some of the same\n frequency channels, then returns the portion of each that overlaps with the other."} {"_id": "q_8025", "text": "Returns an updated segmentation mask such that the input `segmentation_mask` has been updated by segmenting between\n `onset_front_id` and `offset_front_id`, as found in `onset_fronts` and `offset_fronts`, respectively.\n\n This function also returns the onset_fronts and offset_fronts matrices, updated so that any fronts that are of\n less than 3 channels wide are removed.\n\n This function also returns a boolean value indicating whether the onset channel went to completion.\n\n Specifically, segments by doing the following:\n\n - Going across frequencies in the onset_front,\n - add the segment mask ID (the onset front ID) to all samples between the onset_front and the offset_front,\n if the offset_front is in that frequency.\n\n Possible scenarios:\n\n Fronts line up completely:\n\n ::\n\n | | S S S\n | | => S S S\n | | S S S\n | | S S S\n\n Onset front starts before offset front:\n\n ::\n\n | |\n | | S S S\n | | => S S S\n | | S S S\n\n Onset front ends after offset front:\n\n ::\n\n | | S S S\n | | => S S S\n | | S S S\n | |\n\n Onset front starts before and ends after offset front:\n\n ::\n\n | |\n | | => S S S\n | | S S S\n | |\n\n The above three 
options in reverse:\n\n ::\n\n | |S S| |\n |S S| |S S| |S S|\n |S S| |S S| |S S|\n |S S| | |\n\n There is one last scenario:\n\n ::\n\n | |\n \\ /\n \\ /\n / \\\n | |\n\n Where the offset and onset fronts cross one another. If this happens, we simply\n reverse the indices and accept:\n\n ::\n\n |sss|\n \\sss/\n \\s/\n /s\\\n |sss|\n\n The other option would be to destroy the offset front from the crossover point on, and\n then search for a new offset front for the rest of the onset front."} {"_id": "q_8026", "text": "Removes all points in the fronts that overlap with the segmentation mask."} {"_id": "q_8027", "text": "Removes all fronts from `fronts` which are strictly smaller than\n `size` consecutive frequencies in length."} {"_id": "q_8028", "text": "For each onset front, for each frequency in that front, break the onset front if the signals\n between this frequency's onset and the next frequency's onset are not similar enough.\n\n Specifically:\n If we have the following two frequency channels, and the two O's are part of the same onset front,\n\n ::\n\n [ . O . . . . . . . . . . ]\n [ . . . . O . . . . . . . ]\n\n We compare the signals x and y:\n\n ::\n\n [ . x x x x . . . . . . . ]\n [ . y y y y . . . . . . . 
]\n\n And if they are not sufficiently similar (via a DSP correlation algorithm), we break the onset\n front between these two channels.\n\n Once this is done, remove any onset fronts that are less than 3 channels wide."} {"_id": "q_8029", "text": "Returns a list of segmentation masks each of the same dimension as the input one,\n but where they each have exactly one segment in them and all other samples in them\n are zeroed.\n\n Only bothers to return segments that are larger in total area than `threshold * mask.size`."} {"_id": "q_8030", "text": "Worker for the ASA algorithm's multiprocessing step."} {"_id": "q_8031", "text": "Applies a lowpass filter to the given data.\n\n :param data: The data (numpy array) to be filtered.\n :param cutoff: The high cutoff in Hz.\n :param fs: The sample rate in Hz of the data.\n :param order: The order of the filter. The higher the order, the tighter the roll-off.\n :returns: Filtered data (numpy array)."} {"_id": "q_8032", "text": "Launch a Process, return its pid"} {"_id": "q_8033", "text": "Update the list of the running processes and return the list"} {"_id": "q_8034", "text": "Given an IP and, optionally, a date, get the ASN.\n This is the fastest command.\n\n :param ip: IP address to search for\n :param announce_date: Date of the announcement\n\n :rtype: String, ASN."} {"_id": "q_8035", "text": "Get the full history of an IP. It takes time.\n\n :param ip: IP address to search for\n :param days_limit: Max number of days to query. (None means no limit)\n\n :rtype: list. For each day in the database: day, asn, block"} {"_id": "q_8036", "text": "Get the full history of an IP, aggregating the result instead of\n returning one line per day.\n\n :param ip: IP address to search for\n :param days_limit: Max number of days to query. (None means no limit)\n\n :rtype: list. 
For each change: FirstDay, LastDay, ASN, Block"} {"_id": "q_8037", "text": "Unconditionally download the URL into a temporary directory.\n When finished, the file is moved into the real directory.\n This way, another process will not attempt to extract an incomplete file."} {"_id": "q_8038", "text": "Verify that the file has not already been downloaded."} {"_id": "q_8039", "text": "Separates the outcome feature from the data and creates the one-hot vector for each row."} {"_id": "q_8040", "text": "Used to check whether the two edge lists have the same edges\n when elements are neither hashable nor sortable."} {"_id": "q_8041", "text": "Given a list of audit files, rank them using the `measurer` and\n return the features that never deviate more than `similarity_bound`\n across repairs."} {"_id": "q_8042", "text": "Loads a confusion matrix in a two-level dictionary format.\n\n For example, the confusion matrix of a 75%-accurate model\n that predicted 20 values (and mis-classified 5) may look like:\n {\"A\": {\"A\":10, \"B\": 5}, \"B\": {\"B\":5}}\n\n Note that raw boolean values are translated into strings, such that\n a value that was the boolean True will be returned as the string \"True\"."} {"_id": "q_8043", "text": "Separates the outcome feature from the data."} {"_id": "q_8044", "text": "Renders a Page object as a Twitter Bootstrap styled pagination bar.\n Compatible with Bootstrap 3.x and 4.x only.\n\n Example::\n\n {% bootstrap_paginate page_obj range=10 %}\n\n\n Named Parameters::\n\n range - The size of the pagination bar (i.e., if set to 10 then, at most,\n 10 page numbers will display at any given time). Defaults to\n None, which shows all pages.\n\n\n size - Accepts \"small\" and \"large\". Defaults to\n None which is the standard size.\n\n show_prev_next - Accepts \"true\" or \"false\". Determines whether or not\n to show the previous and next page links. Defaults to\n \"true\"\n\n\n show_first_last - Accepts \"true\" or \"false\". 
Determines whether or not\n to show the first and last page links. Defaults to\n \"false\"\n\n previous_label - The text to display for the previous page link.\n Defaults to \"←\"\n\n next_label - The text to display for the next page link. Defaults to\n \"→\"\n\n first_label - The text to display for the first page link. Defaults to\n \"«\"\n\n last_label - The text to display for the last page link. Defaults to\n \"»\"\n\n url_view_name - The named URL to use. Defaults to None. If None, then the\n default template simply appends the url parameter as a\n relative URL link, eg: 1\n\n url_param_name - The name of the parameter to use in the URL. If\n url_view_name is set to None, this string is used as the\n parameter name in the relative URL path. If a URL\n name is specified, this string is used as the\n parameter name passed into the reverse() method for\n the URL.\n\n url_extra_args - This is used only in conjunction with url_view_name.\n When referencing a URL, additional arguments may be\n passed in as a list.\n\n url_extra_kwargs - This is used only in conjunction with url_view_name.\n When referencing a URL, additional named arguments\n may be passed in as a dictionary.\n\n url_get_params - The other get parameters to pass, only the page\n number will be overwritten. Use this to preserve\n filters.\n\n url_anchor - The anchor to use in URLs. Defaults to None.\n\n extra_pagination_classes - A space separated list of CSS class names\n that will be added to the top level
    \n HTML element. In particular, this can be\n utilized in Bootstrap 4 installations to\n add the appropriate alignment classes from\n Flexbox utilities, eg: justify-content-center"} {"_id": "q_8045", "text": "Checks for alternative index-url in pip.conf"} {"_id": "q_8046", "text": "For each package and target check if it is a regression.\n\n This is the case if the main repo contains a package version which is\n higher than in any of the other repos or if any of the other repos does not\n contain that package at all.\n\n :return: a dict indexed by package names containing\n dicts indexed by targets containing a boolean flag"} {"_id": "q_8047", "text": "Remove trailing junk from the version number.\n\n >>> strip_version_suffix('')\n ''\n >>> strip_version_suffix('None')\n 'None'\n >>> strip_version_suffix('1.2.3-4trusty-20140131-1359-+0000')\n '1.2.3-4'\n >>> strip_version_suffix('1.2.3-foo')\n '1.2.3'"} {"_id": "q_8048", "text": "For each package check if the version in one repo is equal for all targets.\n\n The version could be different in different repos though.\n\n :return: a dict indexed by package names containing a boolean flag"} {"_id": "q_8049", "text": "Get the number of packages per target and repository.\n\n :return: a dict indexed by targets containing\n a list of integer values (one for each repo)"} {"_id": "q_8050", "text": "Get the Jenkins job urls for each target.\n\n The placeholder {pkg} needs to be replaced with the ROS package name.\n\n :return: a dict indexed by targets containing a string"} {"_id": "q_8051", "text": "Configure all Jenkins CI jobs."} {"_id": "q_8052", "text": "Resolve all streams on the network.\n\n This function returns all currently available streams from any outlet on \n the network. The network is usually the subnet specified at the local \n router, but may also include a group of machines visible to each other via \n multicast packets (given that the network supports it), or list of \n hostnames. 
These details may optionally be customized by the experimenter \n in a configuration file (see Network Connectivity in the LSL wiki). \n \n Keyword arguments:\n wait_time -- The waiting time for the operation, in seconds, to search for \n streams. Warning: If this is too short (<0.5s) only a subset \n (or none) of the outlets that are present on the network may \n be returned. (default 1.0)\n \n Returns a list of StreamInfo objects (with empty desc field), any of which \n can subsequently be used to open an inlet. The full description can be\n retrieved from the inlet."} {"_id": "q_8053", "text": "Resolve all streams with a specific value for a given property.\n\n If the goal is to resolve a specific stream, this method is preferred over \n resolving all streams and then selecting the desired one.\n \n Keyword arguments:\n prop -- The StreamInfo property that should have a specific value (e.g., \n \"name\", \"type\", \"source_id\", or \"desc/manufacturer\").\n value -- The string value that the property should have (e.g., \"EEG\" as \n the type property).\n minimum -- Return at least this many streams. (default 1)\n timeout -- Optionally a timeout of the operation, in seconds. If the \n timeout expires, less than the desired number of streams \n (possibly none) will be returned. (default FOREVER)\n \n Returns a list of matching StreamInfo objects (with empty desc field), any \n of which can subsequently be used to open an inlet.\n \n Example: results = resolve_Stream_byprop(\"type\",\"EEG\")"} {"_id": "q_8054", "text": "Resolve all streams that match a given predicate.\n\n Advanced query that allows one to impose more conditions on the retrieved \n streams; the given string is an XPath 1.0 predicate for the \n node (omitting the surrounding []'s), see also\n http://en.wikipedia.org/w/index.php?title=XPath_1.0&oldid=474981951.\n \n Keyword arguments:\n predicate -- The predicate string, e.g. 
"name='BioSemi'\" or \n \"type='EEG' and starts-with(name,'BioSemi') and \n count(description/desc/channels/channel)=32\"\n minimum -- Return at least this many streams. (default 1)\n timeout -- Optionally a timeout of the operation, in seconds. If the \n timeout expires, less than the desired number of streams \n (possibly none) will be returned. (default FOREVER)\n \n Returns a list of matching StreamInfo objects (with empty desc field), any \n of which can subsequently be used to open an inlet."} {"_id": "q_8055", "text": "Error handler function. Translates an error code into an exception."} {"_id": "q_8056", "text": "Get a child with a specified name."} {"_id": "q_8057", "text": "Get the previous sibling in the children list of the parent node.\n\n If a name is provided, the previous sibling with the given name is\n returned."} {"_id": "q_8058", "text": "Set the element's name. Returns False if the node is empty."} {"_id": "q_8059", "text": "Set the element's value. Returns False if the node is empty."} {"_id": "q_8060", "text": "Append a copy of the specified element as a child."} {"_id": "q_8061", "text": "Remove a given child element, specified by name or as element."} {"_id": "q_8062", "text": "Obtain the set of currently present streams on the network.\n\n Returns a list of matching StreamInfo objects (with empty desc\n field), any of which can subsequently be used to open an inlet."} {"_id": "q_8063", "text": "See all tokens associated with a given token.\n PAIR lilas"} {"_id": "q_8064", "text": "Shows autocomplete results for a given token."} {"_id": "q_8065", "text": "Compute fuzzy extensions of word.\n FUZZY lilas"} {"_id": "q_8066", "text": "Compute fuzzy extensions of word that exist in index.\n FUZZYINDEX lilas"} {"_id": "q_8067", "text": "Try to extract the biggest group of interlinked tokens.\n\n Should generally be used last in the collectors chain."} {"_id": "q_8068", "text": "Display this help message."} {"_id": "q_8069", "text": "Print some 
useful info from Redis DB."} {"_id": "q_8070", "text": "Print raw content of a DB key.\n DBKEY g|u09tyzfe"} {"_id": "q_8071", "text": "Compute a geohash from latitude and longitude.\n GEOHASH 48.1234 2.9876"} {"_id": "q_8072", "text": "Get index details for a document by its id.\n INDEX 772210180J"} {"_id": "q_8073", "text": "Return document linked to word with the highest score.\n BESTSCORE lilas"} {"_id": "q_8074", "text": "Print the distance score between two strings. Use |\u00a0as separator.\n STRDISTANCE rue des lilas|porte des lilas"} {"_id": "q_8075", "text": "Just sends the request using its send method and returns its response."} {"_id": "q_8076", "text": "Concurrently converts a list of Requests to Responses.\n\n :param requests: a collection of Request objects.\n :param stream: If False, the content will not be downloaded immediately.\n :param size: Specifies the number of workers to run at a time. If 1, no parallel processing.\n :param exception_handler: Callback function, called when exception occurred. Params: Request, Exception"} {"_id": "q_8077", "text": "Removes PEM-encoding from a public key, private key or certificate. If the\n private key is encrypted, the password will be used to decrypt it.\n\n :param data:\n A byte string of the PEM-encoded data\n\n :param password:\n A byte string of the encryption password, or None\n\n :return:\n A 3-element tuple in the format: (key_type, algorithm, der_bytes). The\n key_type will be a unicode string of \"public key\", \"private key\" or\n \"certificate\". 
The algorithm will be a unicode string of \"rsa\", \"dsa\"\n or \"ec\"."} {"_id": "q_8078", "text": "Decrypts encrypted ASN.1 data\n\n :param encryption_algorithm_info:\n An instance of asn1crypto.pkcs5.Pkcs5EncryptionAlgorithm\n\n :param encrypted_content:\n A byte string of the encrypted content\n\n :param password:\n A byte string of the encrypted content's password\n\n :return:\n A byte string of the decrypted plaintext"} {"_id": "q_8079", "text": "Creates an EVP_CIPHER pointer object and determines the buffer size\n necessary for the parameter specified.\n\n :param evp_cipher_ctx:\n An EVP_CIPHER_CTX pointer\n\n :param cipher:\n A unicode string of \"aes128\", \"aes192\", \"aes256\", \"des\",\n \"tripledes_2key\", \"tripledes_3key\", \"rc2\", \"rc4\"\n\n :param key:\n The key byte string\n\n :param data:\n The plaintext or ciphertext as a byte string\n\n :param padding:\n If padding is to be used\n\n :return:\n A 2-element tuple with the first element being an EVP_CIPHER pointer\n and the second being an integer that is the required buffer size"} {"_id": "q_8080", "text": "Takes a CryptoAPI RSA private key blob and converts it into the ASN.1\n structures for the public and private keys\n\n :param bit_size:\n The integer bit size of the key\n\n :param blob_struct:\n An instance of the advapi32.RSAPUBKEY struct\n\n :param blob:\n A byte string of the binary data after the header\n\n :return:\n A 2-element tuple of (asn1crypto.keys.PublicKeyInfo,\n asn1crypto.keys.PrivateKeyInfo)"} {"_id": "q_8081", "text": "Takes a CryptoAPI DSS private key blob and converts it into the ASN.1\n structures for the public and private keys\n\n :param bit_size:\n The integer bit size of the key\n\n :param public_blob:\n A byte string of the binary data after the public key header\n\n :param private_blob:\n A byte string of the binary data after the private key header\n\n :return:\n A 2-element tuple of (asn1crypto.keys.PublicKeyInfo,\n asn1crypto.keys.PrivateKeyInfo)"} {"_id": 
"q_8082", "text": "Generates a DSA signature\n\n :param private_key:\n The PrivateKey to generate the signature with\n\n :param data:\n A byte string of the data the signature is for\n\n :param hash_algorithm:\n A unicode string of \"md5\", \"sha1\", \"sha256\", \"sha384\" or \"sha512\"\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the signature"} {"_id": "q_8083", "text": "Generates an ECDSA signature\n\n :param private_key:\n The PrivateKey to generate the signature with\n\n :param data:\n A byte string of the data the signature is for\n\n :param hash_algorithm:\n A unicode string of \"md5\", \"sha1\", \"sha256\", \"sha384\" or \"sha512\"\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the signature"} {"_id": "q_8084", "text": "Generates an RSA, DSA or ECDSA signature via CryptoAPI\n\n :param private_key:\n The PrivateKey to generate the signature with\n\n :param data:\n A byte string of the data the signature is for\n\n :param hash_algorithm:\n A unicode string of \"md5\", \"sha1\", \"sha256\", \"sha384\", \"sha512\" or \"raw\"\n\n :param rsa_pss_padding:\n If PSS padding should be used for RSA keys\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the signature"} {"_id": "q_8085", "text": "Generates an RSA, DSA or ECDSA signature via CNG\n\n :param private_key:\n The PrivateKey to generate the signature with\n\n :param data:\n A byte string of the data the signature is for\n\n :param hash_algorithm:\n 
A unicode string of \"md5\", \"sha1\", \"sha256\", \"sha384\", \"sha512\" or \"raw\"\n\n :param rsa_pss_padding:\n If PSS padding should be used for RSA keys\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the signature"} {"_id": "q_8086", "text": "Encrypts a value using an RSA public key via CNG\n\n :param certificate_or_public_key:\n A Certificate or PublicKey instance to encrypt with\n\n :param data:\n A byte string of the data to encrypt\n\n :param rsa_oaep_padding:\n If OAEP padding should be used instead of PKCS#1 v1.5\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the ciphertext"} {"_id": "q_8087", "text": "Decrypts a value using an RSA private key via CryptoAPI\n\n :param private_key:\n A PrivateKey instance to decrypt with\n\n :param ciphertext:\n A byte string of the data to decrypt\n\n :param rsa_oaep_padding:\n If OAEP padding should be used instead of PKCS#1 v1.5\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the plaintext"} {"_id": "q_8088", "text": "Blocks until the socket is ready to be read from, or the timeout is hit\n\n :param timeout:\n A float - the period of time to wait for data to be read. None for\n no time limit.\n\n :return:\n A boolean - if data is ready to be read. 
Will only be False if\n timeout is not None."} {"_id": "q_8089", "text": "Reads exactly the specified number of bytes from the socket\n\n :param num_bytes:\n An integer - the exact number of bytes to read\n\n :return:\n A byte string of the data that was read"} {"_id": "q_8090", "text": "Reads data from the socket and writes it to the memory bio\n used by libssl to decrypt the data. Returns the unencrypted\n data for the purpose of debugging handshakes.\n\n :return:\n A byte string of ciphertext from the socket. Used for\n debugging the handshake only."} {"_id": "q_8091", "text": "Takes ciphertext from the memory bio and writes it to the\n socket.\n\n :return:\n A byte string of ciphertext going to the socket. Used\n for debugging the handshake only."} {"_id": "q_8092", "text": "Encrypts plaintext via CNG\n\n :param cipher:\n A unicode string of \"aes\", \"des\", \"tripledes_2key\", \"tripledes_3key\",\n \"rc2\", \"rc4\"\n\n :param key:\n The encryption key - a byte string 5-16 bytes long\n\n :param data:\n The plaintext - a byte string\n\n :param iv:\n The initialization vector - a byte string - unused for RC4\n\n :param padding:\n Boolean, if padding should be used - unused for RC4\n\n :raises:\n ValueError - when any of the parameters contain an invalid value\n TypeError - when any of the parameters are of the wrong type\n OSError - when an error is returned by the OS crypto library\n\n :return:\n A byte string of the ciphertext"} {"_id": "q_8093", "text": "Checks if an error occurred, and if so throws an OSError containing the\n last OpenSSL error message\n\n :param result:\n An integer result code - 1 or greater indicates success\n\n :param exception_class:\n The exception class to use for the exception if an error occurred\n\n :raises:\n OSError - when an OpenSSL error occurs"} {"_id": "q_8094", "text": "Return the certificate and a hash of it\n\n :param cert_pointer:\n A SecCertificateRef\n\n :return:\n A 2-element tuple:\n - [0]: A byte string of the SHA1 
hash of the cert\n - [1]: A byte string of the DER-encoded contents of the cert"} {"_id": "q_8095", "text": "Extracts the last OS error message into a python unicode string\n\n :return:\n A unicode string error message"} {"_id": "q_8096", "text": "Converts a CFDictionary object into a python dictionary\n\n :param dictionary:\n The CFDictionary to convert\n\n :return:\n A python dict"} {"_id": "q_8097", "text": "Extracts the function signature and description of a Python function\n\n :param docstring:\n A unicode string of the docstring for the function\n\n :param def_lineno:\n An integer line number that function was defined on\n\n :param code_lines:\n A list of unicode string lines from the source file the function was\n defined in\n\n :param prefix:\n A prefix to prepend to all output lines\n\n :return:\n A 2-element tuple:\n\n - [0] A unicode string of the function signature with a docstring of\n parameter info\n - [1] A markdown snippet of the function description"} {"_id": "q_8098", "text": "Walks through a CommonMark AST to find section headers that delineate\n content that should be updated by this script\n\n :param md_ast:\n The AST of the markdown document\n\n :param sections:\n A dict to store the start and end lines of a section. The key will be\n a two-element tuple of the section type (\"class\", \"function\",\n \"method\" or \"attribute\") and identifier. 
The values are a two-element\n tuple of the start and end line number in the markdown document of the\n section.\n\n :param last:\n A dict containing information about the last section header seen.\n Includes the keys \"type_name\", \"identifier\", \"start_line\".\n\n :param last_class:\n A unicode string of the name of the last class found - used when\n processing methods and attributes.\n\n :param total_lines:\n An integer of the total number of lines in the markdown document -\n used to work around a bug in the API of the Python port of CommonMark"} {"_id": "q_8099", "text": "A callback used to walk the Python AST looking for classes, functions,\n methods and attributes. Generates chunks of markdown markup to replace\n the existing content.\n\n :param node:\n An _ast module node object\n\n :param code_lines:\n A list of unicode strings - the source lines of the Python file\n\n :param sections:\n A dict of markdown document sections that need to be updated. The key\n will be a two-element tuple of the section type (\"class\", \"function\",\n \"method\" or \"attribute\") and identifier. The values are a two-element\n tuple of the start and end line number in the markdown document of the\n section.\n\n :param md_chunks:\n A dict with keys from the sections param and the values being a unicode\n string containing a chunk of markdown markup."} {"_id": "q_8100", "text": "Tries to find a CA certs bundle in common locations\n\n :raises:\n OSError - when no valid CA certs bundle was found on the filesystem\n\n :return:\n The full filesystem path to a CA certs bundle file"} {"_id": "q_8101", "text": "Extracts trusted CA certs from the system CA cert bundle\n\n :param cert_callback:\n A callback that is called once for each certificate in the trust store.\n It should accept two parameters: an asn1crypto.x509.Certificate object,\n and a reason. 
The reason will be None if the certificate is being\n exported, otherwise it will be a unicode string of the reason it won't.\n\n :param callback_only_on_failure:\n A boolean - if the callback should only be called when a certificate is\n not exported.\n\n :return:\n A list of 3-element tuples:\n - 0: a byte string of a DER-encoded certificate\n - 1: a set of unicode strings that are OIDs of purposes to trust the\n certificate for\n - 2: a set of unicode strings that are OIDs of purposes to reject the\n certificate for"} {"_id": "q_8102", "text": "Parse the TLS handshake from the client to the server to extract information\n including the cipher suite selected, if compression is enabled, the\n session id and if a new or reused session ticket exists.\n\n :param server_handshake_bytes:\n A byte string of the handshake data received from the server\n\n :param client_handshake_bytes:\n A byte string of the handshake data sent to the server\n\n :return:\n A dict with the following keys:\n - \"protocol\": unicode string\n - \"cipher_suite\": unicode string\n - \"compression\": boolean\n - \"session_id\": \"new\", \"reused\" or None\n - \"session_ticket\": \"new\", \"reused\" or None"} {"_id": "q_8103", "text": "Creates a generator returning tuples of information about each record\n in a byte string of data from a TLS client or server. 
Stops as soon as it\n finds a ChangeCipherSpec message since all data from then on is encrypted.\n\n :param data:\n A byte string of TLS records\n\n :return:\n A generator that yields 3-element tuples:\n [0] Byte string of record type\n [1] Byte string of protocol version\n [2] Byte string of record data"} {"_id": "q_8104", "text": "Creates a generator returning tuples of information about each message in\n a byte string of data from a TLS handshake record\n\n :param data:\n A byte string of TLS handshake record data\n\n :return:\n A generator that yields 2-element tuples:\n [0] Byte string of message type\n [1] Byte string of message data"} {"_id": "q_8105", "text": "Creates a generator returning tuples of information about each extension\n from a byte string of extension data contained in a ServerHello or\n ClientHello message\n\n :param data:\n A byte string of extension data from a TLS ServerHello or ClientHello\n message\n\n :return:\n A generator that yields 2-element tuples:\n [0] Byte string of extension type\n [1] Byte string of extension data"} {"_id": "q_8106", "text": "Raises a TLSVerificationError due to a hostname mismatch\n\n :param certificate:\n An asn1crypto.x509.Certificate object\n\n :raises:\n TLSVerificationError"} {"_id": "q_8107", "text": "Raises a TLSVerificationError due to certificate being expired, or not yet\n being valid\n\n :param certificate:\n An asn1crypto.x509.Certificate object\n\n :raises:\n TLSVerificationError"} {"_id": "q_8108", "text": "Looks at the server handshake bytes to try and detect a different protocol\n\n :param server_handshake_bytes:\n A byte string of the handshake data received from the server\n\n :return:\n None, or a unicode string of \"ftp\", \"http\", \"imap\", \"pop3\", \"smtp\""} {"_id": "q_8109", "text": "Reads everything available from the socket - used for debugging when there\n is a protocol error\n\n :param socket:\n The socket to read from\n\n :return:\n A byte string of the remaining data"} 
{"_id": "q_8110", "text": "Takes a set of unicode string OIDs and converts vendor-specific OIDs into\n generic OIDs from RFCs.\n\n - 1.2.840.113635.100.1.3 (apple_ssl) -> 1.3.6.1.5.5.7.3.1 (server_auth)\n - 1.2.840.113635.100.1.3 (apple_ssl) -> 1.3.6.1.5.5.7.3.2 (client_auth)\n - 1.2.840.113635.100.1.8 (apple_smime) -> 1.3.6.1.5.5.7.3.4 (email_protection)\n - 1.2.840.113635.100.1.9 (apple_eap) -> 1.3.6.1.5.5.7.3.13 (eap_over_ppp)\n - 1.2.840.113635.100.1.9 (apple_eap) -> 1.3.6.1.5.5.7.3.14 (eap_over_lan)\n - 1.2.840.113635.100.1.11 (apple_ipsec) -> 1.3.6.1.5.5.7.3.5 (ipsec_end_system)\n - 1.2.840.113635.100.1.11 (apple_ipsec) -> 1.3.6.1.5.5.7.3.6 (ipsec_tunnel)\n - 1.2.840.113635.100.1.11 (apple_ipsec) -> 1.3.6.1.5.5.7.3.7 (ipsec_user)\n - 1.2.840.113635.100.1.11 (apple_ipsec) -> 1.3.6.1.5.5.7.3.17 (ipsec_ike)\n - 1.2.840.113635.100.1.16 (apple_code_signing) -> 1.3.6.1.5.5.7.3.3 (code_signing)\n - 1.2.840.113635.100.1.20 (apple_time_stamping) -> 1.3.6.1.5.5.7.3.8 (time_stamping)\n - 1.3.6.1.4.1.311.10.3.2 (microsoft_time_stamp_signing) -> 1.3.6.1.5.5.7.3.8 (time_stamping)\n\n :param oids:\n A set of unicode strings\n\n :return:\n The original set of OIDs with any mapped OIDs added"} {"_id": "q_8111", "text": "Checks to see if a cache file needs to be refreshed\n\n :param ca_path:\n A unicode string of the path to the cache file\n\n :param cache_length:\n An integer representing the number of hours the cache is valid for\n\n :return:\n A boolean - True if the cache needs to be updated, False if the file\n is up-to-date"} {"_id": "q_8112", "text": "Gets value of bits between selected range from memory\n\n :param start: bit address of the start of the bits\n :param end: bit address of first bit behind bits\n :return: instance of BitsVal (derived from SimBits type) which contains\n copy of selected bits"} {"_id": "q_8113", "text": "Cast HArray signal or value to signal or value of type Bits"} {"_id": "q_8114", "text": "Hdl convertible in operator, check if any of 
items\n in \"iterable\" equals \"sigOrVal\""} {"_id": "q_8115", "text": "Logical shift left"} {"_id": "q_8116", "text": "Returns number of bits required to store x-1\n for example x=8 returns 3"} {"_id": "q_8117", "text": "c-like case of switch statement"} {"_id": "q_8118", "text": "c-like default of switch statement"} {"_id": "q_8119", "text": "Register signals from interfaces for Interface or Unit instances"} {"_id": "q_8120", "text": "This method is called for every value change of any signal."} {"_id": "q_8121", "text": "Serialize HWProcess instance\n\n :param scope: name scope to prevent name collisions"} {"_id": "q_8122", "text": "Walk all interfaces on unit and instantiate agent for every interface.\n\n :return: all monitor/driver functions which should be added to simulation\n as processes"} {"_id": "q_8123", "text": "If interface has associated clk return it otherwise\n try to find clk on parent recursively"} {"_id": "q_8124", "text": "same as itertools.groupby\n\n :note: This function does not need initial sorting like itertools.groupby\n\n :attention: Order of pairs is not deterministic."} {"_id": "q_8125", "text": "Flatten nested lists, tuples, generators and maps\n\n :param level: maximum depth of flattening"} {"_id": "q_8126", "text": "If signal is not driving anything remove it"} {"_id": "q_8127", "text": "Try to merge procB into procA\n\n :raise IncompatibleStructure: if merge is not possible\n :attention: procA is now the result if the merge has succeeded\n :return: procA, which is now the result of the merge"} {"_id": "q_8128", "text": "on writeReqRecieved in monitor mode"} {"_id": "q_8129", "text": "Convert unit to RTL using specified serializer\n\n :param unitOrCls: unit instance or class, which should be converted\n :param name: name override of top unit (if None, name is derived\n from class name)\n :param serializer: serializer which should be used for the conversion to RTL\n :param targetPlatform: meta-information about target platform, distributed\n on every unit 
under _targetPlatform attribute\n before Unit._impl() is called\n :param saveTo: directory where files should be stored\n If None RTL is returned as string.\n :return: if saveTo is None returns RTL string, else returns list of file names\n which were created"} {"_id": "q_8130", "text": "Create new signal in this context\n\n :param clk: clk signal, if specified signal is synthesized\n as SyncSignal\n :param syncRst: synchronous reset signal"} {"_id": "q_8131", "text": "Get maximum _instId from all assignments in statement"} {"_id": "q_8132", "text": "get max statement id,\n used for sorting of processes in architecture"} {"_id": "q_8133", "text": "write data to interface"} {"_id": "q_8134", "text": "Note that this interface will be master\n\n :return: self"} {"_id": "q_8135", "text": "load declarations from _declr method\n This function is called first for parent and then for children"} {"_id": "q_8136", "text": "generate _sig for each interface which has no subinterface\n if already has _sig return it instead\n\n :param context: instance of RtlNetlist where signals should be created\n :param prefix: name prefix for created signals\n :param typeTransform: optional function (type) returns modified type\n for signal"} {"_id": "q_8137", "text": "Get name in HDL"} {"_id": "q_8138", "text": "Load all operands and process them by self._evalFn"} {"_id": "q_8139", "text": "Cast signed-unsigned, to int or bool"} {"_id": "q_8140", "text": "Reinterpret signal of type Bits to signal of type HStruct"} {"_id": "q_8141", "text": "Group transaction parts split on words into words\n\n :param transaction: TransTmpl instance whose parts\n should be grouped into words\n :return: generator of tuples (wordIndex, list of transaction parts\n in this word)"} {"_id": "q_8142", "text": "Pretty print interface"} {"_id": "q_8143", "text": "Convert transaction template into FrameTmpls\n\n :param transaction: transaction template which FrameTmpls\n are created from\n :param wordWidth: width of data 
signal in target interface\n where frames will be used\n :param maxFrameLen: maximum length of frame in bits,\n if exceeded another frame will be created\n :param maxPaddingWords: maximum of continual padding words in frame,\n if exceeded frame is split and words are cut off\n :attention: if maxPaddingWords tuple(valueHasChangedFlag, nextVal)"} {"_id": "q_8164", "text": "Create value updater for simulation for value of array type\n\n :param nextVal: instance of Value which will be assigned to signal\n :param indexes: tuple of indexes where value should be updated\n in target array\n\n :return: function(value) -> tuple(valueHasChangedFlag, nextVal)"} {"_id": "q_8165", "text": "set value of this param"} {"_id": "q_8166", "text": "Resolve ports of discovered memories"} {"_id": "q_8167", "text": "Construct value of this type.\n Delegated on value class for this type"} {"_id": "q_8168", "text": "Cast value or signal of this type to another type of same size.\n\n :param sigOrVal: instance of signal or value to cast\n :param toType: instance of HdlType to cast into"} {"_id": "q_8169", "text": "Concatenate all signals to one big signal, recursively\n\n :param masterDirEqTo: only signals with this direction are packed\n :param exclude: sequence of signals/interfaces to exclude"} {"_id": "q_8170", "text": "Return sig and val reduced by & operator or None\n if it is not possible to statically reduce expression"} {"_id": "q_8171", "text": "Return sig and val reduced by ^ operator or None\n if it is not possible to statically reduce expression"} {"_id": "q_8172", "text": "Get root of name space"} {"_id": "q_8173", "text": "Decide if this unit should be serialized or not, possibly fixing its name\n to fit same already serialized unit\n\n :param obj: object to serialize\n :param serializedClasses: dict {unitCls : unitobj}\n :param serializedConfiguredUnits: (unitCls, paramsValues) : unitObj\n where paramsValues are named tuple name:value"} {"_id": "q_8174", "text": "Serialize HdlType 
instance"} {"_id": "q_8175", "text": "Serialize IfContainer instance"} {"_id": "q_8176", "text": "Get constant name for value\n name of constant is reused if same value was used before"} {"_id": "q_8177", "text": "Cut off statements which are drivers of specified signal"} {"_id": "q_8178", "text": "Parse HArray type to this transaction template instance\n\n :return: address of its end"} {"_id": "q_8179", "text": "Only for transactions derived from HArray\n\n :return: width of item in original array"} {"_id": "q_8180", "text": "Walk fields in instance of TransTmpl\n\n :param offset: optional offset for all children in this TransTmpl\n :param shouldEnterFn: function (transTmpl) which returns True\n when field should be split on its children\n :param shouldEnterFn: function(transTmpl) which should return\n (shouldEnter, shouldUse) where shouldEnter is flag that means\n iterator should look inside of this actual object\n and shouldUse flag means that this field should be used\n (=generator should yield it)\n :return: generator of tuples ((startBitAddress, endBitAddress),\n TransTmpl instance)"} {"_id": "q_8181", "text": "Merge other statement to this statement"} {"_id": "q_8182", "text": "Cached indent getter function"} {"_id": "q_8183", "text": "Check if not redefining property on obj"} {"_id": "q_8184", "text": "Register interface object on interface level object"} {"_id": "q_8185", "text": "Register array of items on interface level object"} {"_id": "q_8186", "text": "Returns a first driver if signal has only one driver."} {"_id": "q_8187", "text": "Recursively statically evaluate result of this operator"} {"_id": "q_8188", "text": "Create operator with result signal\n\n :ivar resT: data type of result signal\n :ivar outputs: iterable of signals which are outputs\n from this operator"} {"_id": "q_8189", "text": "Try to connect src to interface of specified name on unit.\n Ignore if interface is not present or if it already has driver."} {"_id": "q_8190", "text": 
"Propagate \"clk\" clock signal to all subcomponents"} {"_id": "q_8191", "text": "Propagate \"clk\" clock and negative reset \"rst_n\" signal\n to all subcomponents"} {"_id": "q_8192", "text": "Propagate reset \"rst\" signal\n to all subcomponents"} {"_id": "q_8193", "text": "Iterate over bits in vector\n\n :param sigOrVal: signal or value to iterate over\n :param bitsInOne: number of bits in one part\n :param skipPadding: if true padding is skipped in dense types"} {"_id": "q_8194", "text": "Always decide not to serialize obj\n\n :param priv: private data for this function\n (first unit of this class)\n :return: tuple (do serialize this object, next priv)"} {"_id": "q_8195", "text": "Decide to serialize only first obj of its class\n\n :param priv: private data for this function\n (first object with class == obj.__class__)\n\n :return: tuple (do serialize this object, next priv)\n where priv is private data for this function\n (first object with class == obj.__class__)"} {"_id": "q_8196", "text": "Decide to serialize only objs with unique parameters and class\n\n :param priv: private data for this function\n ({frozen_params: obj})\n\n :return: tuple (do serialize this object, next priv)"} {"_id": "q_8197", "text": "Delegate _make_association on items\n\n :note: doc in :func:`~hwt.synthesizer.interfaceLevel.propDeclCollector._make_association`"} {"_id": "q_8198", "text": "Create a simulation model for unit\n\n :param unit: interface level unit which you want to prepare for simulation\n :param targetPlatform: target platform for this synthesis\n :param dumpModelIn: folder where to put sim model files\n (otherwise sim model will be constructed only in memory)"} {"_id": "q_8199", "text": "Reconnect model signals to unit to run simulation with simulation model\n but use original unit interfaces for communication\n\n :param synthesisedUnitOrIntf: interface whose signals should be\n replaced with signals from modelCls\n :param modelCls: simulation model from which signals\n for 
synthesisedUnitOrIntf should be taken"} {"_id": "q_8200", "text": "Syntax sugar\n If outputFile is string try to open it as file\n\n :return: hdl simulator object"} {"_id": "q_8201", "text": "Process for injecting this callback loop into simulator"} {"_id": "q_8202", "text": "Connect internal signal to port item,\n this connection is used by simulator and only output port items\n will be connected"} {"_id": "q_8203", "text": "Connect signal from internal side of this component to this port"} {"_id": "q_8204", "text": "Return signal inside unit which has this port"} {"_id": "q_8205", "text": "Schedule process at current time with specified priority"} {"_id": "q_8206", "text": "Add hdl process to execution queue\n\n :param trigger: instance of SimSignal\n :param proc: python generator function representing HDL process"} {"_id": "q_8207", "text": "Schedule combUpdateDoneEv event to let agents know that current\n delta step is ending and values from combinational logic are stable"} {"_id": "q_8208", "text": "Apply stashed values to signals"} {"_id": "q_8209", "text": "This function resolves write conflicts for signal\n\n :param actionSet: set of actions made by process"} {"_id": "q_8210", "text": "Delta step for combinational processes"} {"_id": "q_8211", "text": "Read value from signal or interface"} {"_id": "q_8212", "text": "Write value to signal or interface."} {"_id": "q_8213", "text": "Convert all ternary operators to IfContainers"} {"_id": "q_8214", "text": "Create a new version under this service."} {"_id": "q_8215", "text": "Create a new VCL under this version."} {"_id": "q_8216", "text": "Converts the column to a dictionary representation accepted\n by the Citrination server.\n\n :return: Dictionary with basic options, plus any column type specific\n options held under the \"options\" key\n :rtype: dict"} {"_id": "q_8217", "text": "Add a descriptor column.\n\n :param descriptor: A Descriptor instance (e.g., RealDescriptor, InorganicDescriptor, etc.)\n 
:param role: Specify a role (input, output, latentVariable, or ignore)\n :param group_by_key: Whether or not to group by this key during cross validation"} {"_id": "q_8218", "text": "Checks to see that the query will not exceed the max query depth\n\n :param returning_query: The PIF system or Dataset query to execute.\n :type returning_query: :class:`PifSystemReturningQuery` or :class:`DatasetReturningQuery`"} {"_id": "q_8219", "text": "Run each in a list of PIF queries against Citrination.\n\n :param multi_query: :class:`MultiQuery` object to execute.\n :return: :class:`PifMultiSearchResult` object with the results of the query."} {"_id": "q_8220", "text": "Updates an existing data view from the search template and ml template given\n\n :param id: Identifier for the data view. This is returned from the create method.\n :param configuration: Information to construct the data view from (e.g. descriptors, datasets, etc.)\n :param name: Name of the data view\n :param description: Description for the data view"} {"_id": "q_8221", "text": "Gets basic information about a view\n\n :param data_view_id: Identifier of the data view\n :return: Metadata about the view as JSON"} {"_id": "q_8222", "text": "Creates an ml configuration from dataset_ids and extract_as_keys\n\n :param dataset_ids: Array of dataset identifiers to make search template from\n :return: An identifier used to request the status of the builder job (get_ml_configuration_status)"} {"_id": "q_8223", "text": "Utility function to turn the result object from the configuration builder endpoint into something that\n can be used directly as a configuration.\n\n :param result_blob: Nested dicts representing the possible descriptors\n :param dataset_ids: Array of dataset identifiers to make search template from\n :return: An object suitable to be used as a parameter to data view create"} {"_id": "q_8224", "text": "After invoking the create_ml_configuration async method, you can use this method to\n check on the status of 
the builder job.\n\n :param job_id: The identifier returned from create_ml_configuration\n :return: Job status"} {"_id": "q_8225", "text": "Get the t-SNE projection, including responses and tags.\n\n :param data_view_id: The ID of the data view to retrieve TSNE from\n :type data_view_id: int\n :return: The TSNE analysis\n :rtype: :class:`Tsne`"} {"_id": "q_8226", "text": "Submits an async prediction request.\n\n :param data_view_id: The id returned from create\n :param candidates: Array of candidates\n :param prediction_source: 'scalar' or 'scalar_from_distribution'\n :param use_prior: True to use prior prediction, otherwise False\n :return: Predict request Id (used to check status)"} {"_id": "q_8227", "text": "Returns a string indicating the status of the prediction job\n\n :param view_id: The data view id returned from data view create\n :param predict_request_id: The id returned from predict\n :return: Status data, also includes results if state is finished"} {"_id": "q_8228", "text": "Submits a new experimental design run.\n\n :param data_view_id: The ID number of the data view to which the\n run belongs, as a string\n :type data_view_id: str\n :param num_candidates: The number of candidates to return\n :type num_candidates: int\n :param target: An :class:`Target` instance representing\n the design run optimization target\n :type target: :class:`Target`\n :param constraints: An array of design constraints (instances of\n objects which extend :class:`BaseConstraint`)\n :type constraints: list of :class:`BaseConstraint`\n :param sampler: The name of the sampler to use during the design run:\n either \"Default\" or \"This view\"\n :type sampler: str\n :return: A :class:`DesignRun` instance containing the UID of the\n new run"} {"_id": "q_8229", "text": "Retrieves a summary of information for a given data view\n - view id\n - name\n - description\n - columns\n\n :param data_view_id: The ID number of the data view to which the\n run belongs, as a string\n 
:type data_view_id: str"} {"_id": "q_8230", "text": "Given a filepath, loads the file as a dictionary from YAML\n\n :param path: The path to a YAML file"} {"_id": "q_8231", "text": "Extracts credentials from the yaml formatted credential filepath\n passed in. Uses the default profile if the CITRINATION_PROFILE env var\n is not set, otherwise looks for a profile with that name in the credentials file.\n\n :param filepath: The path of the credentials file"} {"_id": "q_8232", "text": "Given an API key, a site url and a credentials file path, runs through a prioritized list of credential sources to find credentials.\n\n Specifically, this method ranks credential priority as follows:\n 1. Those passed in as the first two parameters to this method\n 2. Those found in the environment as variables\n 3. Those found in the credentials file at the profile specified\n by the profile environment variable\n 4. Those found in the default stanza in the credentials file\n\n :param api_key: A Citrination API Key or None\n :param site: A Citrination site URL or None\n :param cred_file: The path to a credentials file"} {"_id": "q_8233", "text": "Returns the number of files matching a pattern in a dataset.\n\n :param dataset_id: The ID of the dataset to search for files.\n :type dataset_id: int\n :param glob: A pattern which will be matched against files in the dataset.\n :type glob: str\n :param is_dir: A boolean indicating whether or not the pattern should match against the beginning of paths in the dataset.\n :type is_dir: bool\n :return: The number of matching files\n :rtype: int"} {"_id": "q_8234", "text": "Retrieves a PIF from a given dataset.\n\n :param dataset_id: The id of the dataset to retrieve PIF from\n :type dataset_id: int\n :param uid: The uid of the PIF to retrieve\n :type uid: str\n :param dataset_version: The dataset version to look for the PIF in. 
If nothing is supplied, the latest dataset version will be searched\n :type dataset_version: int\n :return: A :class:`Pif` object\n :rtype: :class:`Pif`"} {"_id": "q_8235", "text": "Retrieves the set of columns from the combination of dataset ids given\n\n :param dataset_ids: The ids of the datasets to retrieve columns from\n :type dataset_ids: list of int\n :return: A list of column names from the dataset ids given.\n :rtype: list of str"} {"_id": "q_8236", "text": "Generates a default search template from the available columns in the dataset ids given.\n\n :param dataset_ids: The ids of the datasets to retrieve files from\n :type dataset_ids: list of int\n :return: A search template based on the columns in the datasets given"} {"_id": "q_8237", "text": "Returns a new search template, but the new template has only the extract_as_keys given.\n\n :param extract_as_keys: List of extract as keys to keep\n :param search_template: The search template to prune\n :return: New search template with pruned columns"} {"_id": "q_8238", "text": "Make a copy of a dictionary with all keys converted to camel case. 
This just calls to_camel_case on each of the keys in the dictionary and returns a new dictionary.\n\n :param obj: Dictionary to convert keys to camel case.\n :return: Dictionary with the input values and all keys in camel case"} {"_id": "q_8239", "text": "Runs the template against the validation endpoint, returns a message indicating status of the template\n\n :param ml_template: Template to validate\n :return: OK or error message if validation failed"} {"_id": "q_8240", "text": "Compute the Hamming distance between two hashes"} {"_id": "q_8241", "text": "Compute the average hash of the given image."} {"_id": "q_8242", "text": "Set up the Vizio media player platform."} {"_id": "q_8243", "text": "Retrieve latest state of the device."} {"_id": "q_8244", "text": "Mute the volume."} {"_id": "q_8245", "text": "Increase the volume of the device."} {"_id": "q_8246", "text": "Decrease the volume of the device."} {"_id": "q_8247", "text": "Restores the starting position."} {"_id": "q_8248", "text": "Gets the piece at the given square."} {"_id": "q_8249", "text": "Removes a piece from the given square if present."} {"_id": "q_8250", "text": "Sets a piece at the given square. An existing piece is replaced."} {"_id": "q_8251", "text": "Checks if the given move would leave the king in check or\n put it into check."} {"_id": "q_8252", "text": "Checks if the king of the other side is attacked. 
Such a position is not\n valid and could only be reached by an illegal move."} {"_id": "q_8253", "text": "Checks if the current position is a checkmate."} {"_id": "q_8254", "text": "A game is ended if a position occurs for the fourth time\n on consecutive alternating moves."} {"_id": "q_8255", "text": "Restores the previous position and returns the last move from the stack."} {"_id": "q_8256", "text": "Gets an SFEN representation of the current position."} {"_id": "q_8257", "text": "Parses a move in standard coordinate notation, makes the move and puts\n it on the move stack.\n Raises `ValueError` if neither legal nor a null move.\n Returns the move."} {"_id": "q_8258", "text": "Returns a Zobrist hash of the current position."} {"_id": "q_8259", "text": "Gets the symbol `p`, `l`, `n`, etc."} {"_id": "q_8260", "text": "Creates a piece instance from a piece symbol.\n Raises `ValueError` if the symbol is invalid."} {"_id": "q_8261", "text": "Parses a USI string.\n Raises `ValueError` if the USI string is invalid."} {"_id": "q_8262", "text": "Accept a string and parse it into many commits.\n Parse and yield each commit-dictionary.\n This function is a generator."} {"_id": "q_8263", "text": "Accept a parsed single commit. 
Some of the named groups\n require further processing, so parse those groups.\n Return a dictionary representing the completely parsed\n commit."} {"_id": "q_8264", "text": "Adds an organization-course link to the system"} {"_id": "q_8265", "text": "Course key object validation"} {"_id": "q_8266", "text": "Inactivates an activated organization as well as any active relationships"} {"_id": "q_8267", "text": "Activates an inactive organization-course relationship"} {"_id": "q_8268", "text": "Inactivates an active organization-course relationship"} {"_id": "q_8269", "text": "Retrieves the set of courses currently linked to the specified organization"} {"_id": "q_8270", "text": "Retrieves the organizations linked to the specified course"} {"_id": "q_8271", "text": "Organization dict-to-object serialization"} {"_id": "q_8272", "text": "Loads config, then runs Django's execute_from_command_line"} {"_id": "q_8273", "text": "Adds argument for config to existing argparser"} {"_id": "q_8274", "text": "Find config file and set values"} {"_id": "q_8275", "text": "Dumps initial config in YAML"} {"_id": "q_8276", "text": "Documents values in markdown"} {"_id": "q_8277", "text": "Converts string to type requested by `cast_as`"} {"_id": "q_8278", "text": "\\\n loop through all the images and find the ones\n that have the best bytes to even make them a candidate"} {"_id": "q_8279", "text": "\\\n checks to see if we were able to\n find open link_src on this page"} {"_id": "q_8280", "text": "\\\n returns the bytes of the image file on disk"} {"_id": "q_8281", "text": "Create a video object from a video embed"} {"_id": "q_8282", "text": "adds any siblings that may have a decent score to this node"} {"_id": "q_8283", "text": "\\\n returns a list of nodes we want to search\n on like paragraphs and tables"} {"_id": "q_8284", "text": "\\\n remove any divs that look like non-content,\n clusters of links, or paras with no gusto"} {"_id": "q_8285", "text": "\\\n Fetch the article title and 
analyze it"} {"_id": "q_8286", "text": "Check if the article has a meta canonical link set in the url"} {"_id": "q_8287", "text": "Close the network connection and perform any other required cleanup\n\n Note:\n Auto closed when using goose as a context manager or when garbage collected"} {"_id": "q_8288", "text": "Extract the most likely article content from the html page\n\n Args:\n url (str): URL to pull and parse\n raw_html (str): String representation of the HTML page\n Returns:\n Article: Representation of the article contents \\\n including other parsed and extracted metadata"} {"_id": "q_8289", "text": "Returns a unicode object representing 's'. Treats bytestrings using the\n 'encoding' codec.\n\n If strings_only is True, don't convert (some) non-string-like objects."} {"_id": "q_8290", "text": "Returns a bytestring version of 's', encoded as specified in 'encoding'.\n\n If strings_only is True, don't convert (some) non-string-like objects."} {"_id": "q_8291", "text": "Add URLs needed to handle image uploads."} {"_id": "q_8292", "text": "Handle file uploads from WYSIWYG."} {"_id": "q_8293", "text": "Render the Quill WYSIWYG."} {"_id": "q_8294", "text": "Get the form for field."} {"_id": "q_8295", "text": "Resize an image for metadata tags, and return an absolute URL to it."} {"_id": "q_8296", "text": "Check if ``mdrun`` finished successfully.\n\n Analyses the output from ``mdrun`` in *logfile*. Right now we are\n simply looking for the line \"Finished mdrun on node\" in the last 1kb of\n the file. 
(The file must be seekable.)\n\n :Arguments:\n *logfile* : filename\n Logfile produced by ``mdrun``.\n\n :Returns: ``True`` if all ok, ``False`` if not finished, and\n ``None`` if the *logfile* cannot be opened"} {"_id": "q_8297", "text": "Launch local smpd."} {"_id": "q_8298", "text": "Find files from a continuation run"} {"_id": "q_8299", "text": "Run ``gromacs.grompp`` and return the total charge of the system.\n\n :Arguments:\n The arguments are the ones one would pass to :func:`gromacs.grompp`.\n :Returns:\n The total charge as reported\n\n Some things to keep in mind:\n\n * The stdout output of grompp is only shown when an error occurs. For\n debugging, look at the log file or screen output and try running the\n normal :func:`gromacs.grompp` command and analyze the output if the\n debugging messages are not sufficient.\n\n * Check that ``qtot`` is correct. Because the function is based on pattern\n matching of the informative output of :program:`grompp` it can break when\n the output format changes. This version recognizes lines like ::\n\n ' System has non-zero total charge: -4.000001e+00'\n\n using the regular expression\n :regexp:`System has non-zero total charge: *(?P<qtot>[-+]?\\d*\\.\\d+([eE][-+]\\d+)?)`."} {"_id": "q_8300", "text": "Create a processed topology.\n\n The processed (or portable) topology file does not contain any\n ``#include`` statements and hence can be easily copied around. 
It\n also makes it possible to re-grompp without having any special itp\n files available.\n\n :Arguments:\n *topol*\n topology file\n *struct*\n coordinate (structure) file\n\n :Keywords:\n *processed*\n name of the new topology file; if not set then it is named like\n *topol* but with ``pp_`` prepended\n *includes*\n path or list of paths of directories in which itp files are\n searched for\n *grompp_kwargs*\n other options for :program:`grompp` such as ``maxwarn=2`` can\n also be supplied\n\n :Returns: full path to the processed topology"} {"_id": "q_8301", "text": "Primitive text file stream editor.\n\n This function can be used to edit free-form text files such as the\n topology file. By default it does an **in-place edit** of\n *filename*. If *newname* is supplied then the edited\n file is written to *newname*.\n\n :Arguments:\n *filename*\n input text file\n *substitutions*\n substitution commands (see below for format)\n *newname*\n output filename; if ``None`` then *filename* is changed in\n place [``None``]\n\n *substitutions* is a list of triplets; the first two elements are regular\n expression strings, the last is the substitution value. It mimics\n ``sed`` search and replace. The rules for *substitutions*:\n\n .. productionlist::\n substitutions: \"[\" search_replace_tuple, ... \"]\"\n search_replace_tuple: \"(\" line_match_RE \",\" search_RE \",\" replacement \")\"\n line_match_RE: regular expression that selects the line (uses match)\n search_RE: regular expression that is searched in the line\n replacement: replacement string for search_RE\n\n Running :func:`edit_txt` does pretty much what a simple ::\n\n sed /line_match_RE/s/search_RE/replacement/\n\n with repeated substitution commands does.\n\n Special replacement values:\n - ``None``: the rule is ignored\n - ``False``: the line is deleted (even if other rules match)\n\n .. 
note::\n\n * No sanity checks are performed and the substitutions must be supplied\n exactly as shown.\n * All substitutions are applied to a line; thus the order of the substitution\n commands may matter when one substitution generates a match for a subsequent rule.\n * If replacement is set to ``None`` then the whole expression is ignored and\n whatever is in the template is used. To unset values you must provide an\n empty string or similar.\n * Delete a matching line if replacement=``False``."} {"_id": "q_8302", "text": "Delete all frames."} {"_id": "q_8303", "text": "Returns resid in the Gromacs index by transforming with offset."} {"_id": "q_8304", "text": "Combine individual groups into a single one and write output.\n\n :Keywords:\n name_all : string\n Name of the combined group, ``None`` generates a name. [``None``]\n out_ndx : filename\n Name of the output file that will contain the individual groups\n and the combined group. If ``None`` then default from the class\n constructor is used. [``None``]\n operation : character\n Logical operation that is used to generate the combined group from\n the individual groups: \"|\" (OR) or \"&\" (AND); if set to ``False``\n then no combined group is created and only the individual groups\n are written. [\"|\"]\n defaultgroups : bool\n ``True``: append everything to the default groups produced by\n :program:`make_ndx` (or rather, the groups provided in the ndx file on\n initialization --- if this was ``None`` then these are truly default groups);\n ``False``: only use the generated groups\n\n :Returns:\n ``(combinedgroup_name, output_ndx)``, a tuple showing the\n actual group name and the name of the file; useful when all names are autogenerated.\n\n .. Warning:: The order of the atom numbers in the combined group is\n *not* guaranteed to be the same as the selections on input because\n ``make_ndx`` sorts them ascending. 
Thus you should be careful when\n using these index files for calculations of angles and dihedrals.\n Use :class:`gromacs.formats.NDX` in these cases.\n\n .. SeeAlso:: :meth:`IndexBuilder.write`."} {"_id": "q_8305", "text": "Concatenate input index files.\n\n Generate a new index file that contains the default Gromacs index\n groups (if a structure file was defined) and all index groups from the\n input index files.\n\n :Arguments:\n out_ndx : filename\n Name of the output index file; if ``None`` then use the default\n provided to the constructor. [``None``]."} {"_id": "q_8306", "text": "Process ``make_ndx`` command and return name and temp index file."} {"_id": "q_8307", "text": "Process a range selection.\n\n (\"S234\", \"A300\", \"CA\") --> select all CA in this range\n (\"S234\", \"A300\") --> select all atoms in this range\n\n .. Note:: Ignores residue type, only cares about the resid (but still required)"} {"_id": "q_8308", "text": "Translate selection for a single res to make_ndx syntax."} {"_id": "q_8309", "text": "Simple tests to flag problems with a ``make_ndx`` run."} {"_id": "q_8310", "text": "Write compact xtc that is fitted to the tpr reference structure.\n\n See :func:`gromacs.cbook.trj_fitandcenter` for details and\n description of *kwargs* (including *input*, *input1*, *n* and\n *n1* for how to supply custom index groups). The most important ones are listed\n here but in most cases the defaults should work.\n\n :Keywords:\n *s*\n Input structure (typically the default tpr file but can be set to\n some other file with a different conformation for fitting)\n *n*\n Alternative index file.\n *o*\n Name of the output trajectory.\n *xy* : Boolean\n If ``True`` then only fit in xy-plane (useful for a membrane normal\n to z). 
The default is ``False``.\n *force*\n - ``True``: overwrite existing trajectories\n - ``False``: throw an IOError exception\n - ``None``: skip existing and log a warning [default]\n\n :Returns:\n dictionary with keys *tpr*, *xtc*, which are the names of\n the new files"} {"_id": "q_8311", "text": "Write xtc that is fitted to the tpr reference structure.\n\n Runs :class:`gromacs.tools.trjconv` with appropriate arguments\n for fitting. The most important *kwargs* are listed\n here but in most cases the defaults should work.\n\n Note that the default settings do *not* include centering or\n periodic boundary treatment as this often does not work well\n with fitting. It is better to do this as a separate step (see\n :meth:`center_fit` or :func:`gromacs.cbook.trj_fitandcenter`)\n\n :Keywords:\n *s*\n Input structure (typically the default tpr file but can be set to\n some other file with a different conformation for fitting)\n *n*\n Alternative index file.\n *o*\n Name of the output trajectory. A default name is created.\n If e.g. *dt* = 100 is one of the *kwargs* then the default name includes\n \"_dt100ps\".\n *xy* : boolean\n If ``True`` then only do a rot+trans fit in the xy plane\n (good for membrane simulations); default is ``False``.\n *force*\n ``True``: overwrite existing trajectories\n ``False``: throw an IOError exception\n ``None``: skip existing and log a warning [default]\n *fitgroup*\n index group to fit on [\"backbone\"]\n\n .. 
Note:: If keyword *input* is supplied then it will override\n *fitgroup*; *input* = ``[fitgroup, outgroup]``\n *kwargs*\n kwargs are passed to :func:`~gromacs.cbook.trj_xyfitted`\n\n :Returns:\n dictionary with keys *tpr*, *xtc*, which are the names of\n the new files"} {"_id": "q_8312", "text": "Create a top level logger.\n\n - The file logger logs everything (including DEBUG).\n - The console logger only logs INFO and above.\n\n Logging to a file and the console.\n \n See http://docs.python.org/library/logging.html?#logging-to-multiple-destinations\n \n The top level logger of the library is named 'gromacs'. Note that\n we are configuring this logger with console output. If the root\n logger also does this then we will get two output lines to the\n console. We'll live with this because this is a simple\n convenience library..."} {"_id": "q_8313", "text": "Get tool names from all configured groups.\n\n :return: list of tool names"} {"_id": "q_8314", "text": "Dict of variables that we make available as globals in the module.\n\n Can be used as ::\n\n globals().update(GMXConfigParser.configuration) # update configdir, templatesdir .."} {"_id": "q_8315", "text": "Return the textual representation of logging level 'option' or the number.\n\n Note that option is always interpreted as an UPPERCASE string\n and hence integer log levels will not be recognized.\n\n .. SeeAlso: :mod:`logging` and :func:`logging.getLevelName`"} {"_id": "q_8316", "text": "Use .collection as extension unless provided"} {"_id": "q_8317", "text": "Scale dihedral angles"} {"_id": "q_8318", "text": "Scale improper dihedrals"} {"_id": "q_8319", "text": "Convert string x to the most useful type, i.e. int, float or unicode string.\n\n If x is a quoted string (single or double quotes) then the quotes\n are stripped and the enclosed string returned.\n\n .. Note::\n\n Strings will be returned as Unicode strings (using :func:`to_unicode`).\n\n .. 
versionchanged:: 0.7.0\n removed the `encoding` keyword argument"} {"_id": "q_8320", "text": "Return view of the recarray with all int32 cast to int64."} {"_id": "q_8321", "text": "Parse colour specification"} {"_id": "q_8322", "text": "Transform arguments and return them as a list suitable for Popen."} {"_id": "q_8323", "text": "Print help; same as using ``?`` in ``ipython``. long=True also gives call signature."} {"_id": "q_8324", "text": "Add switches as 'options' with value True to the options dict."} {"_id": "q_8325", "text": "Extract standard gromacs doc\n\n Extract by running the program and chopping the header to keep from\n 'DESCRIPTION' onwards."} {"_id": "q_8326", "text": "Convert input to a numerical type if possible.\n\n 1. A non-string object is returned as it is\n 2. Try conversion to int, float, str."} {"_id": "q_8327", "text": "Remove legend for axes or gca.\n\n See http://osdir.com/ml/python.matplotlib.general/2005-07/msg00285.html"} {"_id": "q_8328", "text": "If a file exists then continue with the action specified in ``resolve``.\n\n ``resolve`` must be one of\n\n \"ignore\"\n always return ``False``\n \"indicate\"\n return ``True`` if it exists\n \"warn\"\n indicate and issue a :exc:`UserWarning`\n \"exception\"\n raise :exc:`IOError` if it exists\n\n Alternatively, set *force* for the following behaviour (which\n ignores *resolve*):\n\n ``True``\n same as *resolve* = \"ignore\" (will allow overwriting of files)\n ``False``\n same as *resolve* = \"exception\" (will prevent overwriting of files)\n ``None``\n ignored, do whatever *resolve* says"} {"_id": "q_8329", "text": "Load Gromacs 4.x tools automatically using some heuristic.\n\n Tries to load tools (1) in configured tool groups (2) and falls back to\n automatic detection from ``GMXBIN`` (3) then to a prefilled list.\n\n Also load any extra tool configured in ``~/.gromacswrapper.cfg``\n\n :return: dict mapping tool names to GromacsCommand classes"} {"_id": "q_8330", "text": "Create an array which 
masks jumps >= threshold.\n\n Extra points are inserted between two subsequent values whose\n absolute difference differs by more than threshold (default is\n pi).\n\n *other* can be a secondary array which is also masked according to\n *a*.\n\n Returns (*a_masked*, *other_masked*) (where *other_masked* can be\n ``None``)"} {"_id": "q_8331", "text": "Correlation \"time\" of data.\n\n The 0-th column of the data is interpreted as a time and the\n decay of the data is computed from the autocorrelation\n function (using FFT).\n\n .. SeeAlso:: :func:`numkit.timeseries.tcorrel`"} {"_id": "q_8332", "text": "Set and change the parameters for calculations with correlation functions.\n\n The parameters persist until explicitly changed.\n\n :Keywords:\n *nstep*\n only process every *nstep* data point to speed up the FFT; if\n left empty a default is chosen that produces roughly 25,000 data\n points (or whatever is set in *ncorrel*)\n *ncorrel*\n If no *nstep* is supplied, aim at using *ncorrel* data points for\n the FFT; sets :attr:`XVG.ncorrel` [25000]\n *force*\n force recalculating correlation data even if cached values are\n available\n *kwargs*\n see :func:`numkit.timeseries.tcorrel` for other options\n\n .. SeeAlso: :attr:`XVG.error` for details and references."} {"_id": "q_8333", "text": "Read and cache the file as a numpy array.\n\n Store every *stride* line of data; if ``None`` then the class default is used.\n\n The array is returned with column-first indexing, i.e. for a data file with\n columns X Y1 Y2 Y3 ... the array a will be a[0] = X, a[1] = Y1, ... ."} {"_id": "q_8334", "text": "Plot xvg file data.\n\n The first column of the data is always taken as the abscissa\n X. Additional columns are plotted as ordinates Y1, Y2, ...\n\n In the special case that there is only a single column then this column\n is plotted against the index, i.e. 
(N, Y).\n\n :Keywords:\n *columns* : list\n Select the columns of the data to be plotted; the list\n is used as a numpy.array extended slice. The default is\n to use all columns. Columns are selected *after* a transform.\n *transform* : function\n function ``transform(array) -> array`` which transforms\n the original array; must return a 2D numpy array of\n shape [X, Y1, Y2, ...] where X, Y1, ... are column\n vectors. By default the transformation is the\n identity [``lambda x: x``].\n *maxpoints* : int\n limit the total number of data points; matplotlib has issues processing\n png files with >100,000 points and pdfs take forever to display. Set to\n ``None`` if really all data should be displayed. At the moment we simply\n decimate the data at regular intervals. [10000]\n *method*\n method to decimate the data to *maxpoints*, see :meth:`XVG.decimate`\n for details\n *color*\n single color (used for all plots); sequence of colors\n (will be repeated as necessary); or a matplotlib\n colormap (e.g. \"jet\", see :mod:`matplotlib.cm`). The\n default is to use the :attr:`XVG.default_color_cycle`.\n *ax*\n plot into given axes or create new one if ``None`` [``None``]\n *kwargs*\n All other keyword arguments are passed on to :func:`matplotlib.pyplot.plot`.\n\n :Returns:\n *ax*\n axes instance"} {"_id": "q_8335", "text": "Find vdwradii.dat and add special entries for lipids.\n\n See :data:`gromacs.setup.vdw_lipid_resnames` for lipid\n resnames. Add more if necessary."} {"_id": "q_8336", "text": "Put protein into box, add water, add counter-ions.\n\n Currently this really only supports solutes in water. If you need\n to embed a protein in a membrane then you will require more\n sophisticated approaches.\n\n However, you *can* supply a protein already inserted in a\n bilayer. In this case you will probably want to set *distance* =\n ``None`` and also enable *with_membrane* = ``True`` (using extra\n big vdw radii for typical lipids).\n\n .. 
Note:: The defaults are suitable for solvating a globular\n protein in a fairly tight (increase *distance*!) dodecahedral\n box.\n\n :Arguments:\n *struct* : filename\n pdb or gro input structure\n *top* : filename\n Gromacs topology\n *distance* : float\n When solvating with water, make the box big enough so that\n at least *distance* nm water are between the solute *struct*\n and the box boundary.\n Set *boxtype* to ``None`` in order to use a box size in the input\n file (gro or pdb).\n *boxtype* or *bt*: string\n Any of the box types supported by :class:`~gromacs.tools.Editconf`\n (triclinic, cubic, dodecahedron, octahedron). Set the box dimensions\n either with *distance* or the *box* and *angle* keywords.\n\n If set to ``None`` it will ignore *distance* and use the box\n inside the *struct* file.\n\n *bt* overrides the value of *boxtype*.\n *box*\n List of three box lengths [A,B,C] that are used by :class:`~gromacs.tools.Editconf`\n in combination with *boxtype* (``bt`` in :program:`editconf`) and *angles*.\n Setting *box* overrides *distance*.\n *angles*\n List of three angles (only necessary for triclinic boxes).\n *concentration* : float\n Concentration of the free ions in mol/l. Note that counter\n ions are added in excess of this concentration.\n *cation* and *anion* : string\n Molecule names of the ions. This depends on the chosen force field.\n *water* : string\n Name of the water model; one of \"spc\", \"spce\", \"tip3p\",\n \"tip4p\". This should be appropriate for the chosen force\n field. 
If an alternative solvent is required, simply supply the path to a box\n with solvent molecules (used by :func:`~gromacs.genbox`'s *cs* argument)\n and also supply the molecule name via *solvent_name*.\n *solvent_name*\n Name of the molecules that make up the solvent (as set in the itp/top).\n Typically needs to be changed when using non-standard/non-water solvents.\n [\"SOL\"]\n *with_membrane* : bool\n ``True``: use special ``vdwradii.dat`` with 0.1 nm-increased radii on\n lipids. Default is ``False``.\n *ndx* : filename\n How to name the index file that is produced by this function.\n *mainselection* : string\n A string that is fed to :class:`~gromacs.tools.Make_ndx` and\n which should select the solute.\n *dirname* : directory name\n Name of the directory in which all files for the solvation stage are stored.\n *includes*\n List of additional directories to add to the mdp include path\n *kwargs*\n Additional arguments are passed on to\n :class:`~gromacs.tools.Editconf` or are interpreted as parameters to be\n changed in the mdp file."} {"_id": "q_8337", "text": "Run multiple energy minimizations one after the other.\n\n :Keywords:\n *integrators*\n list of integrators (from 'l-bfgs', 'cg', 'steep')\n [['bfgs', 'steep']]\n *nsteps*\n list of maximum number of steps; one for each integrator\n in the *integrators* list [[100,1000]]\n *kwargs*\n mostly passed to :func:`gromacs.setup.energy_minimize`\n\n :Returns: dictionary with paths to final structure ('struct') and\n other files\n\n :Example:\n Conduct three minimizations:\n 1. low-memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) for 50 steps\n 2. steepest descent for 200 steps\n 3. finish with BFGS for another 50 steps\n We also do a multi-processor minimization when possible (i.e. 
for steep\n (and conjugate gradient) by using a :class:`gromacs.run.MDrunner` class\n for a :program:`mdrun` executable compiled for OpenMP in 64 bit (see\n :mod:`gromacs.run` for details)::\n\n import gromacs.run\n gromacs.setup.em_schedule(struct='solvate/ionized.gro',\n mdrunner=gromacs.run.MDrunnerOpenMP64,\n integrators=['l-bfgs', 'steep', 'l-bfgs'],\n nsteps=[50,200, 50])\n\n .. Note:: You might have to prepare the mdp file carefully because at the\n moment one can only modify the *nsteps* parameter on a\n per-minimizer basis."} {"_id": "q_8338", "text": "Set up MD with position restraints.\n\n Additional itp files should be in the same directory as the top file.\n\n Many of the keyword arguments below already have sensible values. Note that\n setting *mainselection* = ``None`` will disable many of the automated\n choices and is often recommended when using your own mdp file.\n\n :Keywords:\n *dirname*\n set up under directory dirname [MD_POSRES]\n *struct*\n input structure (gro, pdb, ...) 
[em/em.pdb]\n *top*\n topology file [top/system.top]\n *mdp*\n mdp file (or use the template) [templates/md.mdp]\n *ndx*\n index file (supply when using a custom mdp)\n *includes*\n additional directories to search for itp files\n *mainselection*\n :program:`make_ndx` selection to select main group [\"Protein\"]\n (If ``None`` then no canonical index file is generated and\n it is the user's responsibility to set *tc_grps*,\n *tau_t*, and *ref_t* as keyword arguments, or provide the mdp template\n with all parameters pre-set in *mdp* and probably also your own *ndx*\n index file.)\n *deffnm*\n default filename for Gromacs run [md]\n *runtime*\n total length of the simulation in ps [1000]\n *dt*\n integration time step in ps [0.002]\n *qscript*\n script to submit to the queuing system; by default\n uses the template :data:`gromacs.config.qscript_template`, which can\n be manually set to another template from :data:`gromacs.config.templates`;\n can also be a list of template names.\n *qname*\n name to be used for the job in the queuing system [PR_GMX]\n *mdrun_opts*\n option flags for the :program:`mdrun` command in the queuing system\n scripts such as \"-stepout 100\". [\"\"]\n *kwargs*\n remaining key/value pairs that should be changed in the template mdp\n file, e.g. ``nstxtcout=250, nstfout=250`` or command line options for\n ``grompp`` such as ``maxwarn=1``.\n\n In particular one can also set **define** and activate\n whichever position restraints have been coded into the itp\n and top file. For instance one could have\n\n *define* = \"-DPOSRES_MainChain -DPOSRES_LIGAND\"\n\n if these preprocessor constructs exist. Note that there\n **must not be any space between \"-D\" and the value.**\n\n By default *define* is set to \"-DPOSRES\".\n\n :Returns: a dict that can be fed into :func:`gromacs.setup.MD`\n (but check, just in case, especially if you want to\n change the ``define`` parameter in the mdp file)\n\n .. 
Note:: The output frequency is drastically reduced for position\n restraint runs by default. Set the corresponding ``nst*``\n variables if you require more output. The `pressure coupling`_\n option *refcoord_scaling* is set to \"com\" by default (but can\n be changed via *kwargs*) and the pressure coupling\n algorithm itself is set to *Pcoupl* = \"Berendsen\" to\n run a stable simulation.\n\n .. _`pressure coupling`: http://manual.gromacs.org/online/mdp_opt.html#pc"} {"_id": "q_8339", "text": "Set up equilibrium MD.\n\n Additional itp files should be in the same directory as the top file.\n\n Many of the keyword arguments below already have sensible values. Note that\n setting *mainselection* = ``None`` will disable many of the automated\n choices and is often recommended when using your own mdp file.\n\n :Keywords:\n *dirname*\n set up under directory dirname [MD]\n *struct*\n input structure (gro, pdb, ...) [MD_POSRES/md_posres.pdb]\n *top*\n topology file [top/system.top]\n *mdp*\n mdp file (or use the template) [templates/md.mdp]\n *ndx*\n index file (supply when using a custom mdp)\n *includes*\n additional directories to search for itp files\n *mainselection*\n ``make_ndx`` selection to select main group [\"Protein\"]\n (If ``None`` then no canonical index file is generated and\n it is the user's responsibility to set *tc_grps*,\n *tau_t*, and *ref_t* as keyword arguments, or provide the mdp template\n with all parameters pre-set in *mdp* and probably also your own *ndx*\n index file.)\n *deffnm*\n default filename for Gromacs run [md]\n *runtime*\n total length of the simulation in ps [1000]\n *dt*\n integration time step in ps [0.002]\n *qscript*\n script to submit to the queuing system; by default\n uses the template :data:`gromacs.config.qscript_template`, which can\n be manually set to another template from :data:`gromacs.config.templates`;\n can also be a list of template names.\n *qname*\n name to be used for the job in the queuing system [MD_GMX]\n 
*mdrun_opts*\n option flags for the :program:`mdrun` command in the queuing system\n scripts such as \"-stepout 100 -dgdl\". [\"\"]\n *kwargs*\n remaining key/value pairs that should be changed in the template mdp\n file, e.g. ``nstxtcout=250, nstfout=250`` or command line options for\n :program:`grompp` such as ``maxwarn=1``.\n\n :Returns: a dict that can be fed into :func:`gromacs.setup.MD`\n (but check, just in case, especially if you want to\n change the *define* parameter in the mdp file)"} {"_id": "q_8340", "text": "Write scripts for queuing systems.\n\n\n This sets up queuing system run scripts with a simple search and replace in\n templates. See :func:`gromacs.cbook.edit_txt` for details. Shell scripts\n are made executable.\n\n :Arguments:\n *templates*\n Template file or list of template files. The \"files\" can also be names\n or symbolic names for templates in the templates directory. See\n :mod:`gromacs.config` for details and rules for writing templates.\n *prefix*\n Prefix for the final run script filename; by default the filename will be\n the same as the template. [None]\n *dirname*\n Directory in which to place the submit scripts. [.]\n *deffnm*\n Default filename prefix for :program:`mdrun` ``-deffnm`` [md]\n *jobname*\n Name of the job in the queuing system. [MD]\n *budget*\n Which budget to book the runtime on [None]\n *startdir*\n Explicit path on the remote system (for run scripts that need to `cd`\n into this directory at the beginning of execution) [None]\n *mdrun_opts*\n String of additional options for :program:`mdrun`.\n *walltime*\n Maximum runtime of the job in hours. 
[1]\n *npme*\n number of PME nodes\n *jobarray_string*\n Multi-line string that is spliced in for job array functionality\n (see :func:`gromacs.qsub.generate_submit_array`; do not use manually)\n *kwargs*\n all other kwargs are ignored\n\n :Returns: list of generated run scripts"} {"_id": "q_8341", "text": "Primitive queuing system detection; only looks at suffix at the moment."} {"_id": "q_8342", "text": "Returns all dates from first to last included."} {"_id": "q_8343", "text": "Fill missing rates of a currency.\n\n This is done by linear interpolation of the two closest available rates.\n\n :param str currency: The currency to fill missing rates for."} {"_id": "q_8344", "text": "Convert amount from a currency to another one.\n\n :param float amount: The amount of `currency` to convert.\n :param str currency: The currency to convert from.\n :param str new_currency: The currency to convert to.\n :param datetime.date date: Use the conversion rate of this date. If this\n is not given, the most recent rate is used.\n\n :return: The value of `amount` in `new_currency`.\n :rtype: float\n\n >>> from datetime import date\n >>> c = CurrencyConverter()\n >>> c.convert(100, 'EUR', 'USD', date=date(2014, 3, 28))\n 137.5...\n >>> c.convert(100, 'USD', date=date(2014, 3, 28))\n 72.67...\n >>> c.convert(100, 'BGN', date=date(2010, 11, 21))\n Traceback (most recent call last):\n RateNotFoundError: BGN has no rate for 2010-11-21"} {"_id": "q_8345", "text": "Animate given frames for a set number of iterations.\n\n Parameters\n ----------\n frames : list\n Frames for animating\n interval : float\n Interval between two frames\n name : str\n Name of animation\n iterations : int, optional\n Number of loops for animations"} {"_id": "q_8346", "text": "Compute the total number of unmasked regular pixels in a masks."} {"_id": "q_8347", "text": "Compute an annular masks from an input inner and outer masks radius and regular shape."} {"_id": "q_8348", "text": "Compute a blurring masks from an 
input masks and psf shape.\n\n The blurring masks corresponds to all pixels which are outside of the masks but will have a fraction of their \\\n light blurred into the masked region due to PSF convolution."} {"_id": "q_8349", "text": "Compute a 1D array listing all edge pixel indexes in the masks. An edge pixel is a pixel which is not fully \\\n surrounded by False masks values i.e. it is on an edge."} {"_id": "q_8350", "text": "Output the figure, either as an image on the screen or to the hard-disk as a .png or .fits file.\n\n Parameters\n -----------\n array : ndarray\n The 2D array of image to be output, required for outputting the image as a fits file.\n as_subplot : bool\n Whether the figure is part of a subplot, in which case the figure is not output so that the entire subplot can \\\n be output instead using the *output_subplot_array* function.\n output_path : str\n The path on the hard-disk where the figure is output.\n output_filename : str\n The filename of the figure that is output.\n output_format : str\n The format the figure is output:\n 'show' - display on computer screen.\n 'png' - output to hard-disk as a png.\n 'fits' - output to hard-disk as a fits file."} {"_id": "q_8351", "text": "Output a figure which consists of a set of subplots, either as an image on the screen or to the hard-disk as a \\\n .png file.\n\n Parameters\n -----------\n output_path : str\n The path on the hard-disk where the figure is output.\n output_filename : str\n The filename of the figure that is output.\n output_format : str\n The format the figure is output:\n 'show' - display on computer screen.\n 'png' - output to hard-disk as a png."} {"_id": "q_8352", "text": "Generate an image psf shape tag, to customize phase names based on size of the image PSF that the original PSF \\\n is trimmed to for faster run times.\n\n This changes the phase name 'phase_name' as follows:\n\n image_psf_shape = 1 -> phase_name\n image_psf_shape = 2 -> phase_name_image_psf_shape_2\n 
image_psf_shape = 2 -> phase_name_image_psf_shape_2"} {"_id": "q_8353", "text": "Generate an inversion psf shape tag, to customize phase names based on size of the inversion PSF that the \\\n original PSF is trimmed to for faster run times.\n\n This changes the phase name 'phase_name' as follows:\n\n inversion_psf_shape = 1 -> phase_name\n inversion_psf_shape = 2 -> phase_name_inversion_psf_shape_2\n inversion_psf_shape = 2 -> phase_name_inversion_psf_shape_2"} {"_id": "q_8354", "text": "This function determines whether the tracer should compute the deflections at the next plane.\n\n This is True if there is another plane after this plane, else it is False.\n\n Parameters\n -----------\n plane_index : int\n The index of the plane we are deciding if we should compute its deflections.\n total_planes : int\n The total number of planes."} {"_id": "q_8355", "text": "Given a plane and scaling factor, compute a set of scaled deflections.\n\n Parameters\n -----------\n plane : plane.Plane\n The plane whose deflection stack is scaled.\n scaling_factor : float\n The factor the deflection angles are scaled by, which is typically the scaling factor between redshifts for \\\n multi-plane lensing."} {"_id": "q_8356", "text": "From the pixel-neighbors, setup the regularization matrix using the constant regularization scheme.\n\n Parameters\n ----------\n coefficients : tuple\n The regularization coefficients which control the degree of smoothing of the inversion reconstruction.\n pixel_neighbors : ndarray\n An array of length (total_pixels) which provides the index of all neighbors of every pixel in \\\n the Voronoi grid (entries of -1 correspond to no neighbor).\n pixel_neighbors_size : ndarray\n An array of length (total_pixels) which gives the number of neighbors of every pixel in the \\\n Voronoi grid."} {"_id": "q_8357", "text": "Setup the colorbar of the figure, specifically its ticksize and the size it appears relative to the figure.\n\n Parameters\n -----------\n 
cb_ticksize : int\n The size of the tick labels on the colorbar.\n cb_fraction : float\n The fraction of the figure that the colorbar takes up, which resizes the colorbar relative to the figure.\n cb_pad : float\n Pads the color bar in the figure, which resizes the colorbar relative to the figure.\n cb_tick_values : [float]\n Manually specified values of where the colorbar tick labels appear on the colorbar.\n cb_tick_labels : [float]\n Manually specified labels of the color bar tick labels, which appear where specified by cb_tick_values."} {"_id": "q_8358", "text": "Plot the mask of the array on the figure.\n\n Parameters\n -----------\n mask : ndarray of data.array.mask.Mask\n The mask applied to the array, the edge of which is plotted as a set of points over the plotted array.\n units : str\n The units of the y / x axis of the plots, in arc-seconds ('arcsec') or kiloparsecs ('kpc').\n kpc_per_arcsec : float or None\n The conversion factor between arc-seconds and kiloparsecs, required to plot the units in kpc.\n pointsize : int\n The size of the points plotted to show the mask."} {"_id": "q_8359", "text": "Plot the borders of the mask or the array on the figure.\n\n Parameters\n -----------\n mask : ndarray of data.array.mask.Mask\n The mask applied to the array, the edge of which is plotted as a set of points over the plotted array.\n should_plot_border : bool\n If a mask is supplied, its border (e.g. 
the exterior edge) is plotted if this is *True*.\n units : str\n The units of the y / x axis of the plots, in arc-seconds ('arcsec') or kiloparsecs ('kpc').\n kpc_per_arcsec : float or None\n The conversion factor between arc-seconds and kiloparsecs, required to plot the units in kpc.\n border_pointsize : int\n The size of the points plotted to show the borders."} {"_id": "q_8360", "text": "Plot a grid of points over the array of data on the figure.\n\n Parameters\n -----------\n grid_arcsec : ndarray or data.array.grids.RegularGrid\n A grid of (y,x) coordinates in arc-seconds which may be plotted over the array.\n array : data.array.scaled_array.ScaledArray\n The 2D array of data which is plotted.\n units : str\n The units of the y / x axis of the plots, in arc-seconds ('arcsec') or kiloparsecs ('kpc').\n kpc_per_arcsec : float or None\n The conversion factor between arc-seconds and kiloparsecs, required to plot the units in kpc.\n grid_pointsize : int\n The size of the points plotted to show the grid."} {"_id": "q_8361", "text": "The mapping matrix is a matrix representing the mapping between every unmasked pixel of a grid and \\\n the pixels of a pixelization. Non-zero entries signify a mapping, whereas zeros signify no mapping.\n\n For example, if the regular grid has 5 pixels and the pixelization 3 pixels, with the following mappings:\n\n regular pixel 0 -> pixelization pixel 0\n regular pixel 1 -> pixelization pixel 0\n regular pixel 2 -> pixelization pixel 1\n regular pixel 3 -> pixelization pixel 1\n regular pixel 4 -> pixelization pixel 2\n\n The mapping matrix (which is of dimensions regular_pixels x pixelization_pixels) would appear as follows:\n\n [1, 0, 0] [0->0]\n [1, 0, 0] [1->0]\n [0, 1, 0] [2->1]\n [0, 1, 0] [3->1]\n [0, 0, 1] [4->2]\n\n The mapping matrix is in fact built using the sub-grid of the grid-stack, whereby each regular-pixel is \\\n divided into a regular grid of sub-pixels which are all paired to pixels in the pixelization. 
The entries \\\n in the mapping matrix now become fractional values dependent on the sub-grid size. For example, for a 2x2 \\\n sub-grid in each pixel (which means the fraction value is 1.0/(2.0^2) = 0.25), if we have the following mappings:\n\n regular pixel 0 -> sub pixel 0 -> pixelization pixel 0\n regular pixel 0 -> sub pixel 1 -> pixelization pixel 1\n regular pixel 0 -> sub pixel 2 -> pixelization pixel 1\n regular pixel 0 -> sub pixel 3 -> pixelization pixel 1\n regular pixel 1 -> sub pixel 0 -> pixelization pixel 1\n regular pixel 1 -> sub pixel 1 -> pixelization pixel 1\n regular pixel 1 -> sub pixel 2 -> pixelization pixel 1\n regular pixel 1 -> sub pixel 3 -> pixelization pixel 1\n regular pixel 2 -> sub pixel 0 -> pixelization pixel 2\n regular pixel 2 -> sub pixel 1 -> pixelization pixel 2\n regular pixel 2 -> sub pixel 2 -> pixelization pixel 3\n regular pixel 2 -> sub pixel 3 -> pixelization pixel 3\n\n The mapping matrix (which is still of dimensions regular_pixels x source_pixels) would appear as follows:\n\n [0.25, 0.75, 0.0, 0.0] [1 sub-pixel maps to pixel 0, 3 map to pixel 1]\n [ 0.0, 1.0, 0.0, 0.0] [All sub-pixels map to pixel 1]\n [ 0.0, 0.0, 0.5, 0.5] [2 sub-pixels map to pixel 2, 2 map to pixel 3]"} {"_id": "q_8362", "text": "Compute the mappings between a pixelization's pixels and the unmasked regular-grid pixels. 
These mappings \\\n are determined after the regular-grid is used to determine the pixelization.\n\n The pixelization's pixels map to different numbers of regular-grid pixels, thus a list of lists is used to \\\n represent these mappings"} {"_id": "q_8363", "text": "The 1D index mappings between the regular pixels and Voronoi pixelization pixels."} {"_id": "q_8364", "text": "The 1D index mappings between the sub pixels and Voronoi pixelization pixels."} {"_id": "q_8365", "text": "Generate a two-dimensional poisson noise_maps-mappers from an image.\n\n Values are computed from a Poisson distribution using the image's input values in units of counts.\n\n Parameters\n ----------\n image : ndarray\n The 2D image, whose values in counts are used to draw Poisson noise_maps values.\n exposure_time_map : Union(ndarray, int)\n 2D array of the exposure time in each pixel used to convert to / from counts and electrons per second.\n seed : int\n The seed of the random number generator, used for the random noise_maps maps.\n\n Returns\n -------\n poisson_noise_map: ndarray\n An array describing simulated poisson noise_maps"} {"_id": "q_8366", "text": "Factory for loading the background noise-map from a .fits file.\n\n This factory also includes a number of routines for converting the background noise-map from other units (e.g. \\\n a weight map).\n\n Parameters\n ----------\n background_noise_map_path : str\n The path to the background_noise_map .fits file containing the background noise-map \\\n (e.g. 
'/path/to/background_noise_map.fits')\n background_noise_map_hdu : int\n The hdu the background_noise_map is contained in the .fits file specified by *background_noise_map_path*.\n pixel_scale : float\n The size of each pixel in arc seconds.\n convert_background_noise_map_from_weight_map : bool\n If True, the background noise-map loaded from the .fits file is converted from a weight-map to a noise-map (see \\\n *NoiseMap.from_weight_map).\n convert_background_noise_map_from_inverse_noise_map : bool\n If True, the background noise-map loaded from the .fits file is converted from an inverse noise-map to a \\\n noise-map (see *NoiseMap.from_inverse_noise_map)."} {"_id": "q_8367", "text": "Factory for loading the psf from a .fits file.\n\n Parameters\n ----------\n psf_path : str\n The path to the psf .fits file containing the psf (e.g. '/path/to/psf.fits')\n psf_hdu : int\n The hdu the psf is contained in the .fits file specified by *psf_path*.\n pixel_scale : float\n The size of each pixel in arc seconds.\n renormalize : bool\n If True, the PSF is renormalized such that all elements sum to 1.0."} {"_id": "q_8368", "text": "Factory for loading the exposure time map from a .fits file.\n\n This factory also includes a number of routines for computing the exposure-time map from other unblurred_image_1d \\\n (e.g. the background noise-map).\n\n Parameters\n ----------\n exposure_time_map_path : str\n The path to the exposure_time_map .fits file containing the exposure time map \\\n (e.g. 
'/path/to/exposure_time_map.fits')\n exposure_time_map_hdu : int\n The hdu the exposure_time_map is contained in the .fits file specified by *exposure_time_map_path*.\n pixel_scale : float\n The size of each pixel in arc seconds.\n shape : (int, int)\n The shape of the image, required if a single value is used to calculate the exposure time map.\n exposure_time : float\n The exposure-time used to compute the exposure-time map if only a single value is used.\n exposure_time_map_from_inverse_noise_map : bool\n If True, the exposure-time map is computed from the background noise_map map \\\n (see *ExposureTimeMap.from_background_noise_map*)\n inverse_noise_map : ndarray\n The background noise-map, which the Poisson noise-map can be calculated using."} {"_id": "q_8369", "text": "Factory for loading the background sky from a .fits file.\n\n Parameters\n ----------\n background_sky_map_path : str\n The path to the background_sky_map .fits file containing the background sky map \\\n (e.g. '/path/to/background_sky_map.fits').\n background_sky_map_hdu : int\n The hdu the background_sky_map is contained in the .fits file specified by *background_sky_map_path*.\n pixel_scale : float\n The size of each pixel in arc seconds."} {"_id": "q_8370", "text": "The estimated absolute_signal-to-noise_maps mappers of the image."} {"_id": "q_8371", "text": "Simulate the PSF as an elliptical Gaussian profile."} {"_id": "q_8372", "text": "Loads a PSF from fits and renormalizes it\n\n Parameters\n ----------\n pixel_scale\n file_path: String\n The path to the file containing the PSF\n hdu : int\n The HDU the PSF is stored in the .fits file.\n\n Returns\n -------\n psf: PSF\n A renormalized PSF instance"} {"_id": "q_8373", "text": "Loads the PSF from a .fits file.\n\n Parameters\n ----------\n pixel_scale\n file_path: String\n The path to the file containing the PSF\n hdu : int\n The HDU the PSF is stored in the .fits file."} {"_id": "q_8374", "text": "Renormalize the PSF such that its 
data_vector values sum to unity."} {"_id": "q_8375", "text": "Convolve an array with this PSF\n\n Parameters\n ----------\n image : ndarray\n An array representing the image the PSF is convolved with.\n\n Returns\n -------\n convolved_image : ndarray\n An array representing the image after convolution.\n\n Raises\n ------\n KernelException if either PSF dimension is odd"} {"_id": "q_8376", "text": "Compute the Voronoi grid of the pixelization, using the pixel centers.\n\n Parameters\n ----------\n pixel_centers : ndarray\n The (y,x) centre of every Voronoi pixel."} {"_id": "q_8377", "text": "Compute the neighbors of every Voronoi pixel as an ndarray of the pixel indexes each pixel shares a \\\n vertex with.\n\n The ridge points of the Voronoi grid are used to derive this.\n\n Parameters\n ----------\n ridge_points : scipy.spatial.Voronoi.ridge_points\n Each Voronoi-ridge (two indexes representing a pixel mapping_matrix)."} {"_id": "q_8378", "text": "Set the x and y labels of the figure, and set the fontsize of those labels.\n\n The x and y labels are always the distance scales, thus the labels are either arc-seconds or kpc and depend on the \\\n units the figure is plotted in.\n\n Parameters\n -----------\n units : str\n The units of the y / x axis of the plots, in arc-seconds ('arcsec') or kiloparsecs ('kpc').\n kpc_per_arcsec : float\n The conversion factor between arc-seconds and kiloparsecs, required to plot the units in kpc.\n xlabelsize : int\n The fontsize of the x axes label.\n ylabelsize : int\n The fontsize of the y axes label.\n xyticksize : int\n The font size of the x and y ticks on the figure axes."} {"_id": "q_8379", "text": "Decorate a profile method that accepts a coordinate grid and returns a data grid.\n\n If an interpolator attribute is associated with the input grid then that interpolator is used to down sample the\n coordinate grid prior to calling the function and up sample the result of the function.\n\n If no interpolator attribute is 
associated with the input grid then the function is called as normal.\n\n Parameters\n ----------\n func\n Some method that accepts a grid\n\n Returns\n -------\n decorated_function\n The function with optional interpolation"} {"_id": "q_8380", "text": "For a padded grid-stack and psf, compute an unmasked blurred image from an unmasked unblurred image.\n\n This relies on using the lens data's padded-grid, which is a grid of (y,x) coordinates which extends over the \\\n entire image as opposed to just the masked region.\n\n Parameters\n ----------\n psf : ccd.PSF\n The PSF of the image used for convolution.\n unmasked_image_1d : ndarray\n The 1D unmasked image which is blurred."} {"_id": "q_8381", "text": "Setup a grid-stack of grid_stack from a 2D array shape, a pixel scale and a sub-grid size.\n \n This grid corresponds to a fully unmasked 2D array.\n\n Parameters\n -----------\n shape : (int, int)\n The 2D shape of the array, where all pixels are used to generate the grid-stack's grid_stack.\n pixel_scale : float\n The size of each pixel in arc seconds. 
\n sub_grid_size : int\n The size of a sub-pixel's sub-grid (sub_grid_size x sub_grid_size)."} {"_id": "q_8382", "text": "Setup a grid-stack of masked grid_stack from a mask, sub-grid size and psf-shape.\n\n Parameters\n -----------\n mask : Mask\n The mask whose masked pixels the grid-stack are setup using.\n sub_grid_size : int\n The size of a sub-pixels sub-grid (sub_grid_size x sub_grid_size).\n psf_shape : (int, int)\n The shape of the PSF used in the analysis, which defines the mask's blurring-region."} {"_id": "q_8383", "text": "Compute the xticks labels of this grid, used for plotting the x-axis ticks when visualizing a regular"} {"_id": "q_8384", "text": "For an input sub-gridded array, map its hyper-values from the sub-gridded values to a 1D regular grid of \\\n values by summing each set of sub-pixel values and dividing by the total number of sub-pixels.\n\n Parameters\n -----------\n sub_array_1d : ndarray\n A 1D sub-gridded array of values (e.g. the intensities, surface-densities, potential) which is mapped to\n a 1d regular array."} {"_id": "q_8385", "text": "The 1D index mappings between the regular-grid and masked sparse-grid."} {"_id": "q_8386", "text": "Compute a 2D padded blurred image from a 1D padded image.\n\n Parameters\n ----------\n padded_image_1d : ndarray\n A 1D unmasked image which is blurred with the PSF.\n psf : ndarray\n An array describing the PSF kernel of the image."} {"_id": "q_8387", "text": "Map a padded 1D array of values to its padded 2D array.\n\n Parameters\n -----------\n padded_array_1d : ndarray\n A 1D array of values which were computed using the *PaddedRegularGrid*."} {"_id": "q_8388", "text": "Determine a set of relocated grid_stack from an input set of grid_stack, by relocating their pixels based on the \\\n borders.\n\n The blurring-grid does not have its coordinates relocated, as it is only used for computing analytic \\\n light-profiles and not inversion-grid_stack.\n\n Parameters\n -----------\n 
grid_stack : GridStack\n The grid-stack, whose grid_stack coordinates are relocated."} {"_id": "q_8389", "text": "Run a fit for each galaxy from the previous phase.\n\n Parameters\n ----------\n data: LensData\n results: ResultsCollection\n Results from all previous phases\n mask: Mask\n The mask\n positions\n\n Returns\n -------\n results: HyperGalaxyResults\n A collection of results, with one item per a galaxy"} {"_id": "q_8390", "text": "Determine the mapping between every masked pixelization-grid pixel and pixelization-grid pixel. This is\n performed by checking whether each pixelization-grid pixel is within the regular-masks, and mapping the indexes.\n\n Parameters\n -----------\n total_sparse_pixels : int\n The total number of pixels in the pixelization grid which fall within the regular-masks.\n mask : ccd.masks.Mask\n The regular-masks within which pixelization pixels must be inside\n unmasked_sparse_grid_pixel_centres : ndarray\n The centres of the unmasked pixelization grid pixels."} {"_id": "q_8391", "text": "Use the central arc-second coordinate of every unmasked pixelization grid's pixels and mapping between each\n pixelization pixel and unmasked pixelization pixel to compute the central arc-second coordinate of every masked\n pixelization grid pixel.\n\n Parameters\n -----------\n unmasked_sparse_grid : ndarray\n The (y,x) arc-second centre of every unmasked pixelization grid pixel.\n sparse_to_unmasked_sparse : ndarray\n The index mapping between every pixelization pixel and masked pixelization pixel."} {"_id": "q_8392", "text": "Resize an array to a new size around a central pixel.\n\n If the origin (e.g. the central pixel) of the resized array is not specified, the central pixel of the array is \\\n calculated automatically. For example, a (5,5) array's central pixel is (2,2). For even dimensions the central \\\n pixel is assumed to be the lower indexed value, e.g. 
a (6,4) array's central pixel is calculated as (2,1).\n\n The default origin is (-1, -1) because numba requires that the function input is the same type throughout the \\\n function, thus a default 'None' value cannot be used.\n\n Parameters\n ----------\n array_2d : ndarray\n The 2D array that is resized.\n resized_shape : (int, int)\n The (y,x) new pixel dimension of the trimmed array.\n origin : (int, int)\n The origin of the resized array, e.g. the central pixel around which the array is extracted.\n\n Returns\n -------\n ndarray\n The resized 2D array from the input 2D array.\n\n Examples\n --------\n array_2d = np.ones((5,5))\n resize_array = resize_array_2d(array_2d=array_2d, new_shape=(2,2), origin=(2, 2))"} {"_id": "q_8393", "text": "Bin up an array to coarser resolution, by binning up groups of pixels and using their mean value to determine \\\n the value of the new pixel.\n\n If an array of shape (8,8) is input and the bin up size is 2, this would return a new array of size (4,4) where \\\n every pixel was the mean of each collection of 2x2 pixels on the (8,8) array.\n\n If binning up the array leads to an edge being cut (e.g. a (9,9) array binned up by 2), an array is first \\\n extracted around the centre of that array.\n\n\n Parameters\n ----------\n array_2d : ndarray\n The 2D array that is resized.\n new_shape : (int, int)\n The (y,x) new pixel dimension of the trimmed array.\n origin : (int, int)\n The origin of the resized array, e.g. 
the central pixel around which the array is extracted.\n\n Returns\n -------\n ndarray\n The resized 2D array from the input 2D array.\n\n Examples\n --------\n array_2d = np.ones((5,5))\n resize_array = resize_array_2d(array_2d=array_2d, new_shape=(2,2), origin=(2, 2))"} {"_id": "q_8394", "text": "For a given inversion mapping matrix, convolve every pixel's mapped regular with the PSF kernel.\n\n A mapping matrix provides non-zero entries in all elements which map two pixels to one another\n (see *inversions.mappers*).\n\n For example, let's take a regular which is masked using a 'cross' of 5 pixels:\n\n [[ True, False, True]],\n [[False, False, False]],\n [[ True, False, True]]\n\n An example mapping matrix of this cross is as follows (5 regular pixels x 3 source pixels):\n\n [1, 0, 0] [0->0]\n [1, 0, 0] [1->0]\n [0, 1, 0] [2->1]\n [0, 1, 0] [3->1]\n [0, 0, 1] [4->2]\n\n For each source-pixel, we can create a regular of its unit-surface brightnesses by mapping the non-zero\n entries back to masks. For example, doing this for source pixel 1 gives:\n\n [[0.0, 1.0, 0.0]],\n [[1.0, 0.0, 0.0]]\n [[0.0, 0.0, 0.0]]\n\n And source pixel 2:\n\n [[0.0, 0.0, 0.0]],\n [[0.0, 1.0, 1.0]]\n [[0.0, 0.0, 0.0]]\n\n We then convolve each of these regular with our PSF kernel, in 2 dimensions, like we would a normal regular. 
For\n example, using the kernel below:\n\n kernel:\n\n [[0.0, 0.1, 0.0]]\n [[0.1, 0.6, 0.1]]\n [[0.0, 0.1, 0.0]]\n\n Blurred Source Pixel 1 (we don't need to perform the convolution into masked pixels):\n\n [[0.0, 0.6, 0.0]],\n [[0.6, 0.0, 0.0]],\n [[0.0, 0.0, 0.0]]\n\n Blurred Source pixel 2:\n\n [[0.0, 0.0, 0.0]],\n [[0.0, 0.7, 0.7]],\n [[0.0, 0.0, 0.0]]\n\n Finally, we map each of these blurred regular back to a blurred mapping matrix, which is analogous to the\n mapping matrix.\n\n [0.6, 0.0, 0.0] [0->0]\n [0.6, 0.0, 0.0] [1->0]\n [0.0, 0.7, 0.0] [2->1]\n [0.0, 0.7, 0.0] [3->1]\n [0.0, 0.0, 0.6] [4->2]\n\n If the mapping matrix is sub-gridded, we perform the convolution on the fractional surface brightnesses in an\n identical fashion to above.\n\n Parameters\n -----------\n mapping_matrix : ndarray\n The 2D mapping matrix describing how every inversion pixel maps to a datas_ pixel."} {"_id": "q_8395", "text": "Integrate the mass profile's convergence profile to compute the total angular mass within an ellipse of \\\n specified major axis. This is centred on the mass profile.\n\n The following units for mass can be specified and output:\n\n - Dimensionless angular units (default) - 'angular'.\n - Solar masses - 'solMass' (multiplies the angular mass by the critical surface mass density)\n\n Parameters\n ----------\n major_axis : float\n The major-axis radius of the ellipse.\n unit_mass : str\n The units the mass is returned in (angular | solMass).\n critical_surface_density : float or None\n The critical surface mass density of the strong lens configuration, which converts mass from angular \\\n units to physical units (e.g. 
solar masses)."} {"_id": "q_8396", "text": "Routine to integrate an elliptical light profile - set axis ratio to 1 to compute the luminosity within a \\\n circle"} {"_id": "q_8397", "text": "Calculate the mass between two circular annuli and compute the density by dividing by the annuli surface\n area.\n\n The value returned by the mass integral is dimensionless, therefore the density between annuli is returned in \\\n units of inverse radius squared. A conversion factor can be specified to convert this to a physical value \\\n (e.g. the critical surface mass density).\n\n Parameters\n -----------\n inner_annuli_radius : float\n The radius of the inner annulus outside of which the density is estimated.\n outer_annuli_radius : float\n The radius of the outer annulus inside of which the density is estimated."} {"_id": "q_8398", "text": "Rescale the Einstein radius by slope and axis_ratio, to reduce its degeneracy with other mass-profiles\n parameters"} {"_id": "q_8399", "text": "Calculate the projected convergence at a given set of arc-second gridded coordinates.\n\n Parameters\n ----------\n grid : grids.RegularGrid\n The grid of (y,x) arc-second coordinates the surface density is computed on."} {"_id": "q_8400", "text": "Tabulate an integral over the surface density of deflection potential of a mass profile. 
This is used in \\\n the GeneralizedNFW profile classes to speed up the integration procedure.\n\n Parameters\n -----------\n grid : grids.RegularGrid\n The grid of (y,x) arc-second coordinates the potential / deflection_stacks are computed on.\n tabulate_bins : int\n The number of bins to tabulate the inner integral of this profile."} {"_id": "q_8401", "text": "Compute the intensity of the profile at a given radius.\n\n Parameters\n ----------\n radius : float\n The distance from the centre of the profile."} {"_id": "q_8402", "text": "Compute the total luminosity of the galaxy's light profiles within a circle of specified radius.\n\n See *light_profiles.luminosity_within_circle* for details of how this is performed.\n\n Parameters\n ----------\n radius : float\n The radius of the circle to compute the dimensionless mass within.\n unit_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."} {"_id": "q_8403", "text": "Compute the total angular mass of the galaxy's mass profiles within a circle of specified radius.\n\n See *profiles.mass_profiles.mass_within_circle* for details of how this is performed.\n\n Parameters\n ----------\n radius : float\n The radius of the circle to compute the dimensionless mass within.\n unit_mass : str\n The units the mass is returned in (angular | solMass).\n critical_surface_density : float\n The critical surface mass density of the strong lens configuration, which converts mass from angular \\\n units to physical units (e.g. solar masses)."} {"_id": "q_8404", "text": "The Einstein Mass of this galaxy, which is the sum of Einstein Radii of its mass profiles.\n\n If the galaxy is composed of multiple elliptical profiles with different axis-ratios, this Einstein Mass \\\n may be inaccurate. 
This is because the differently oriented ellipses of each mass profile"} {"_id": "q_8405", "text": "Compute a scaled galaxy hyper noise-map from a baseline noise-map.\n\n This uses the galaxy contribution map and the *noise_factor* and *noise_power* hyper-parameters.\n\n Parameters\n -----------\n noise_map : ndarray\n The observed noise-map (before scaling).\n contributions : ndarray\n The galaxy contribution map."} {"_id": "q_8406", "text": "For a given 1D regular array and blurring array, convolve the two using this convolver.\n\n Parameters\n -----------\n image_array : ndarray\n 1D array of the regular values which are to be blurred with the convolver's PSF.\n blurring_array : ndarray\n 1D array of the blurring regular values which blur into the regular-array after PSF convolution."} {"_id": "q_8407", "text": "Compute the intensities of a list of galaxies from an input grid, by summing the individual intensities \\\n of each galaxy's light profile.\n\n If the input grid is a *grids.SubGrid*, the intensities are calculated on the sub-grid and binned-up to the \\\n original regular grid by taking the mean value of every set of sub-pixels.\n\n If no galaxies are entered into the function, an array of all zeros is returned.\n\n Parameters\n -----------\n grid : RegularGrid\n The grid (regular or sub) of (y,x) arc-second coordinates at the centre of every unmasked pixel which the \\\n intensities are calculated on.\n galaxies : [galaxy.Galaxy]\n The galaxies whose light profiles are used to compute the intensities."} {"_id": "q_8408", "text": "Compute the convergence of a list of galaxies from an input grid, by summing the individual convergence \\\n of each galaxy's mass profile.\n\n If the input grid is a *grids.SubGrid*, the convergence is calculated on the sub-grid and binned-up to the \\\n original regular grid by taking the mean value of every set of sub-pixels.\n\n If no galaxies are entered into the function, an array of all zeros is returned.\n\n 
Parameters\n -----------\n grid : RegularGrid\n The grid (regular or sub) of (y,x) arc-second coordinates at the centre of every unmasked pixel which the \\\n convergence is calculated on.\n galaxies : [galaxy.Galaxy]\n The galaxies whose mass profiles are used to compute the convergence."} {"_id": "q_8409", "text": "Compute the potential of a list of galaxies from an input grid, by summing the individual potential \\\n of each galaxy's mass profile.\n\n If the input grid is a *grids.SubGrid*, the potential is calculated on the sub-grid and binned-up to the \\\n original regular grid by taking the mean value of every set of sub-pixels.\n\n If no galaxies are entered into the function, an array of all zeros is returned.\n\n Parameters\n -----------\n grid : RegularGrid\n The grid (regular or sub) of (y,x) arc-second coordinates at the centre of every unmasked pixel which the \\\n potential is calculated on.\n galaxies : [galaxy.Galaxy]\n The galaxies whose mass profiles are used to compute the potential."} {"_id": "q_8410", "text": "Compute the deflections of a list of galaxies from an input sub-grid, by summing the individual deflections \\\n of each galaxy's mass profile.\n\n The deflections are calculated on the sub-grid and binned-up to the original regular grid by taking the mean value \\\n of every set of sub-pixels.\n\n If no galaxies are entered into the function, an array of all zeros is returned.\n\n Parameters\n -----------\n sub_grid : RegularGrid\n The grid (regular or sub) of (y,x) arc-second coordinates at the centre of every unmasked pixel which the \\\n deflections are calculated on.\n galaxies : [galaxy.Galaxy]\n The galaxies whose mass profiles are used to compute the deflections."} {"_id": "q_8411", "text": "For a fitting hyper_galaxy_image, hyper_galaxy model image, list of hyper galaxies images and model hyper galaxies, compute\n their contribution maps, which are used to compute a scaled-noise_map map. 
All quantities are masked 1D arrays.\n\n The reason this is separate from the *contributions_from_fitting_hyper_images_and_hyper_galaxies* function is that\n each hyper_galaxy image has a list of hyper galaxies images and associated hyper galaxies (one for each galaxy). Thus,\n this function breaks down the calculation of each 1D masked contribution map and returns them in the same data\n structure (2 lists with indexes [image_index][contribution_map_index]).\n\n Parameters\n ----------\n hyper_model_image_1d : ndarray\n The best-fit model image to the datas (e.g. from a previous analysis phase).\n hyper_galaxy_images_1d : [ndarray]\n The best-fit model image of each hyper galaxy to the datas (e.g. from a previous analysis phase).\n hyper_galaxies : [galaxy.Galaxy]\n The hyper galaxies which represent the model components used to scale the noise_map, which correspond to\n individual galaxies in the image.\n hyper_minimum_values : [float]\n The minimum value of each hyper_galaxy-image contribution map, which ensures zeros don't impact the scaled noise-map."} {"_id": "q_8412", "text": "For a contribution map and noise-map, use the model hyper galaxies to compute a scaled noise-map.\n\n Parameters\n -----------\n contribution_maps : ndarray\n The image's list of 1D masked contribution maps (e.g. one for each hyper galaxy)\n hyper_galaxies : [galaxy.Galaxy]\n The hyper galaxies which represent the model components used to scale the noise_map, which correspond to\n individual galaxies in the image.\n noise_map : ccd.NoiseMap or ndarray\n An array describing the RMS standard deviation error in each pixel, preferably in units of electrons per\n second."} {"_id": "q_8413", "text": "Wrap the function in a function that checks whether the coordinates have been transformed. 
If they have not \\ \n been transformed then they are transformed.\n\n Parameters\n ----------\n func : (profiles, *args, **kwargs) -> Object\n A function that requires transformed coordinates\n\n Returns\n -------\n A function that can accept cartesian or transformed coordinates"} {"_id": "q_8414", "text": "Caches results of a call to a grid function. If a grid that evaluates to the same byte value is passed into the same\n function of the same instance as previously then the cached result is returned.\n\n Parameters\n ----------\n func\n Some instance method that takes a grid as its argument\n\n Returns\n -------\n result\n Some result, either newly calculated or recovered from the cache"} {"_id": "q_8415", "text": "Determine the sine and cosine of the angle between the profile's ellipse and the positive x-axis, \\\n counter-clockwise."} {"_id": "q_8416", "text": "The angle between each angle theta on the grid and the profile, in radians.\n\n Parameters\n -----------\n grid_thetas : ndarray\n The angle theta counter-clockwise from the positive x-axis to each coordinate in radians."} {"_id": "q_8417", "text": "Compute the mappings between a set of regular-grid pixels and pixelization pixels, using information on \\\n how regular pixels map to their closest pixelization pixel on the image-plane pix-grid and the pixelization's \\\n pixel centres.\n\n To determine the complete set of regular-pixel to pixelization pixel mappings, we must pair every regular-pixel to \\\n its nearest pixel. 
Using a full nearest neighbor search to do this is slow, thus the pixel neighbors (derived via \\\n the Voronoi grid) are used to localize each nearest neighbor search via a graph search.\n\n Parameters\n ----------\n regular_grid : RegularGrid\n The grid of (y,x) arc-second coordinates at the centre of every unmasked pixel, which has been traced \\\n to an irregular grid via lensing.\n regular_to_nearest_pix : ndarray\n A 1D array that maps every regular-grid pixel to its nearest pix-grid pixel (as determined on the unlensed \\\n 2D array).\n pixel_centres : ndarray\n The (y,x) centre of every Voronoi pixel in arc-seconds.\n pixel_neighbors : ndarray\n An array of length (voronoi_pixels) which provides the index of all neighbors of every pixel in \\\n the Voronoi grid (entries of -1 correspond to no neighbor).\n pixel_neighbors_size : ndarray\n An array of length (voronoi_pixels) which gives the number of neighbors of every pixel in the \\\n Voronoi grid."} {"_id": "q_8418", "text": "Compute the mappings between a set of sub-grid pixels and pixelization pixels, using information on \\\n how the regular pixels hosting each sub-pixel map to their closest pixelization pixel on the image-plane pix-grid \\\n and the pixelization's pixel centres.\n\n To determine the complete set of sub-pixel to pixelization pixel mappings, we must pair every sub-pixel to \\\n its nearest pixel. 
Using a full nearest neighbor search to do this is slow, thus the pixel neighbors (derived via \\\n the Voronoi grid) are used to localize each nearest neighbor search by using a graph search.\n\n Parameters\n ----------\n regular_grid : RegularGrid\n The grid of (y,x) arc-second coordinates at the centre of every unmasked pixel, which has been traced \\\n to an irregular grid via lensing.\n regular_to_nearest_pix : ndarray\n A 1D array that maps every regular-grid pixel to its nearest pix-grid pixel (as determined on the unlensed \\\n 2D array).\n pixel_centres : (float, float)\n The (y,x) centre of every Voronoi pixel in arc-seconds.\n pixel_neighbors : ndarray\n An array of length (voronoi_pixels) which provides the index of all neighbors of every pixel in \\\n the Voronoi grid (entries of -1 correspond to no neighbor).\n pixel_neighbors_size : ndarray\n An array of length (voronoi_pixels) which gives the number of neighbors of every pixel in the \\\n Voronoi grid."} {"_id": "q_8419", "text": "Integrate the light profile to compute the total luminosity within a circle of specified radius. This is \\\n centred on the light profile's centre.\n\n The following units for luminosity can be specified and output:\n\n - Electrons per second (default) - 'eps'.\n - Counts - 'counts' (multiplies the luminosity in electrons per second by the exposure time).\n\n Parameters\n ----------\n radius : float\n The radius of the circle to compute the dimensionless luminosity within.\n unit_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float or None\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."} {"_id": "q_8420", "text": "Integrate the light profiles to compute the total luminosity within an ellipse of specified major axis. 
\\\n This is centred on the light profile's centre.\n\n The following units for luminosity can be specified and output:\n\n - Electrons per second (default) - 'eps'.\n - Counts - 'counts' (multiplies the luminosity in electrons per second by the exposure time).\n\n Parameters\n ----------\n major_axis : float\n The major-axis radius of the ellipse.\n unit_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float or None\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."} {"_id": "q_8421", "text": "Routine to integrate the luminosity of an elliptical light profile.\n\n The axis ratio is set to 1.0 for computing the luminosity within a circle"} {"_id": "q_8422", "text": "Calculate the intensity of the Gaussian light profile on a grid of radial coordinates.\n\n Parameters\n ----------\n grid_radii : float\n The radial distance from the centre of the profile for each coordinate on the grid."} {"_id": "q_8423", "text": "Compute the total luminosity of all galaxies in this plane within a circle of specified radius.\n\n See *galaxy.light_within_circle* and *light_profiles.light_within_circle* for details \\\n of how this is performed.\n\n Parameters\n ----------\n radius : float\n The radius of the circle to compute the dimensionless luminosity within.\n units_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."} {"_id": "q_8424", "text": "Compute the total luminosity of all galaxies in this plane within an ellipse of specified major-axis.\n\n The value returned by this integral is dimensionless, and a conversion factor can be specified to convert it \\\n to a physical value (e.g. 
the photometric zeropoint).\n\n See *galaxy.light_within_ellipse* and *light_profiles.light_within_ellipse* for details\n of how this is performed.\n\n Parameters\n ----------\n major_axis : float\n The major-axis radius of the ellipse.\n units_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."} {"_id": "q_8425", "text": "Compute the total mass of all galaxies in this plane within a circle of specified radius.\n\n See *galaxy.angular_mass_within_circle* and *mass_profiles.angular_mass_within_circle* for details\n of how this is performed.\n\n Parameters\n ----------\n radius : float\n The radius of the circle to compute the dimensionless mass within.\n units_mass : str\n The units the mass is returned in (angular | solMass).\n critical_surface_density : float\n The critical surface mass density of the strong lens configuration, which converts mass from angular \\\n units to physical units (e.g. 
solar masses)."} {"_id": "q_8426", "text": "Compute the total mass of all galaxies in this plane within an ellipse of specified major-axis.\n\n See *galaxy.angular_mass_within_ellipse* and *mass_profiles.angular_mass_within_ellipse* for details \\\n of how this is performed.\n\n Parameters\n ----------\n major_axis : float\n The major-axis radius of the ellipse.\n units_luminosity : str\n The units the luminosity is returned in (eps | counts).\n exposure_time : float\n The exposure time of the observation, which converts luminosity from electrons per second units to counts."} {"_id": "q_8427", "text": "Compute the xticks labels of this grid_stack, used for plotting the x-axis ticks when visualizing an \\\n image"} {"_id": "q_8428", "text": "This is a utility function for the function above, which performs the iteration over each plane's galaxies \\\n and computes each galaxy's unmasked blurred image.\n\n Parameters\n ----------\n padded_grid_stack\n psf : ccd.PSF\n The PSF of the image used for convolution."} {"_id": "q_8429", "text": "Trace the positions to the next plane."} {"_id": "q_8430", "text": "Creates an instance of Array and fills it with a single value\n\n Parameters\n ----------\n value: float\n The value with which the array should be filled\n shape: (int, int)\n The shape of the array\n pixel_scale: float\n The scale of a pixel in arc seconds\n\n Returns\n -------\n array: ScaledSquarePixelArray\n An array filled with a single value"} {"_id": "q_8431", "text": "Extract the 2D region of an array corresponding to the rectangle encompassing all unmasked values.\n\n This is used to extract and visualize only the region of an image that is used in an analysis.\n\n Parameters\n ----------\n mask : mask.Mask\n The mask around which the scaled array is extracted.\n buffer : int\n The buffer of pixels around the extraction."} {"_id": "q_8432", "text": "Resize the array to a new shape at a new origin.\n\n Parameters\n -----------\n new_shape : (int, int)\n 
The new two-dimensional shape of the array."} {"_id": "q_8433", "text": "Fit lens data with a normal tracer and sensitivity tracer, to determine our sensitivity to a selection of \\ \n galaxy components. This factory automatically determines the type of fit based on the properties of the galaxies \\\n in the tracers.\n\n Parameters\n -----------\n lens_data : lens_data.LensData or lens_data.LensDataHyper\n The lens-image that is fitted.\n tracer_normal : ray_tracing.AbstractTracer\n A tracer whose galaxies have the same model components (e.g. light profiles, mass profiles) as the \\\n lens data that we are fitting.\n tracer_sensitive : ray_tracing.AbstractTracerNonStack\n A tracer whose galaxies have the same model components (e.g. light profiles, mass profiles) as the \\\n lens data that we are fitting, but also additional components (e.g. mass clumps) to which we measure \\\n how sensitive we are."} {"_id": "q_8434", "text": "Setup a mask where unmasked pixels are within a circle of an input arc second radius and centre.\n\n Parameters\n ----------\n shape: (int, int)\n The (y,x) shape of the mask in units of pixels.\n pixel_scale: float\n The arc-second to pixel conversion factor of each pixel.\n radius_arcsec : float\n The radius (in arc seconds) of the circle within which pixels are unmasked.\n centre: (float, float)\n The centre of the circle used to mask pixels."} {"_id": "q_8435", "text": "Setup a mask where unmasked pixels are within an annulus of input inner and outer arc second radii and \\\n centre.\n\n Parameters\n ----------\n shape : (int, int)\n The (y,x) shape of the mask in units of pixels.\n pixel_scale: float\n The arc-second to pixel conversion factor of each pixel.\n inner_radius_arcsec : float\n The radius (in arc seconds) of the inner circle outside of which pixels are unmasked.\n outer_radius_arcsec : float\n The radius (in arc seconds) of the outer circle within which pixels are unmasked.\n centre: (float, float)\n The centre of the annulus 
used to mask pixels."} {"_id": "q_8436", "text": "Setup a mask where unmasked pixels are within an ellipse of an input arc second major-axis and centre.\n\n Parameters\n ----------\n shape: (int, int)\n The (y,x) shape of the mask in units of pixels.\n pixel_scale: float\n The arc-second to pixel conversion factor of each pixel.\n major_axis_radius_arcsec : float\n The major-axis (in arc seconds) of the ellipse within which pixels are unmasked.\n axis_ratio : float\n The axis-ratio of the ellipse within which pixels are unmasked.\n phi : float\n The rotation angle of the ellipse within which pixels are unmasked, (counter-clockwise from the positive \\\n x-axis).\n centre: (float, float)\n The centre of the ellipse used to mask pixels."} {"_id": "q_8437", "text": "Setup a mask where unmasked pixels are within an elliptical annulus of input inner and outer arc second \\\n major-axis and centre.\n\n Parameters\n ----------\n shape: (int, int)\n The (y,x) shape of the mask in units of pixels.\n pixel_scale: float\n The arc-second to pixel conversion factor of each pixel.\n inner_major_axis_radius_arcsec : float\n The major-axis (in arc seconds) of the inner ellipse within which pixels are masked.\n inner_axis_ratio : float\n The axis-ratio of the inner ellipse within which pixels are masked.\n inner_phi : float\n The rotation angle of the inner ellipse within which pixels are masked, (counter-clockwise from the \\\n positive x-axis).\n outer_major_axis_radius_arcsec : float\n The major-axis (in arc seconds) of the outer ellipse within which pixels are unmasked.\n outer_axis_ratio : float\n The axis-ratio of the outer ellipse within which pixels are unmasked.\n outer_phi : float\n The rotation angle of the outer ellipse within which pixels are unmasked, (counter-clockwise from the \\\n positive x-axis).\n centre: (float, float)\n The centre of the elliptical annuli used to mask pixels."} {"_id": "q_8438", "text": "The zoomed rectangular region corresponding to the 
square encompassing all unmasked values.\n\n This is used to zoom in on the region of an image that is used in an analysis for visualization."} {"_id": "q_8439", "text": "Create an instance of the associated class for a set of arguments\n\n Parameters\n ----------\n arguments: {Prior: value}\n Dictionary mapping priors to attribute analysis_path and value pairs\n\n Returns\n -------\n An instance of the class"} {"_id": "q_8440", "text": "Create a new galaxy prior from a set of arguments, replacing the priors of some of this galaxy prior's prior\n models with new arguments.\n\n Parameters\n ----------\n arguments: dict\n A dictionary mapping between old priors and their replacements.\n\n Returns\n -------\n new_model: GalaxyModel\n A model with some or all priors replaced."} {"_id": "q_8441", "text": "Plot the observed image of the ccd data.\n\n See *autolens.data.array.plotters.array_plotters* for a description of all input parameters not described below.\n\n Parameters\n -----------\n image : ScaledSquarePixelArray\n The image of the data.\n plot_origin : True\n If true, the origin of the data's coordinate system is plotted as an 'x'.\n image_plane_pix_grid : ndarray or data.array.grid_stacks.PixGrid\n If an adaptive pixelization whose pixels are formed by tracing pixels from the data, this plots those pixels \\\n over the image."} {"_id": "q_8442", "text": "Write `data` to file record `n`; records are indexed from 1."} {"_id": "q_8443", "text": "Return a memory-map of the elements `start` through `end`.\n\n The memory map will offer the 8-byte double-precision floats\n (\"elements\") in the file from index `start` through to the index\n `end`, inclusive, both counting the first float as element 1.\n Memory maps must begin on a page boundary, so `skip` returns the\n number of extra bytes at the beginning of the return value."} {"_id": "q_8444", "text": "Return the text inside the comment area of the file."} {"_id": "q_8445", "text": "Compute the 
component values for the time `tdb` plus `tdb2`."} {"_id": "q_8446", "text": "Close this file."} {"_id": "q_8447", "text": "Map the coefficients into memory using a NumPy array."} {"_id": "q_8448", "text": "Generate angles and derivatives for time `tdb` plus `tdb2`.\n\n If ``derivative`` is true, return a tuple containing both the\n angle and its derivative; otherwise simply return the angles."} {"_id": "q_8449", "text": "Normalise and check a backend path.\n\n Ensure that the requested backend path is specified as a relative path,\n and resolves to a location under the given source tree.\n\n Return an absolute version of the requested path."} {"_id": "q_8450", "text": "Visit a function call.\n\n We expect every logging statement and string format to be a function call."} {"_id": "q_8451", "text": "Process binary operations while processing the first logging argument."} {"_id": "q_8452", "text": "Process keyword arguments."} {"_id": "q_8453", "text": "Helper to get the exception name from an ExceptHandler node in both py2 and py3."} {"_id": "q_8454", "text": "Check if value has id attribute and return it.\n\n :param value: The value to get id from.\n :return: The value.id."} {"_id": "q_8455", "text": "Checks if the node is a bare exception name from an except block."} {"_id": "q_8456", "text": "Reports a violation if exc_info keyword is used with logging.error or logging.exception."} {"_id": "q_8457", "text": "Get a json dict of the attributes of this object."} {"_id": "q_8458", "text": "Convenience method to create a file from a string.\n\n This file object's metadata will have the id 'inlined_input'.\n\n Inputs\n ------\n content -- the content of the file (a string).\n position -- (default 1) rank among all files of the model while parsing\n see FileMetadata\n file_id -- (default 'inlined_input') the file_id that will be used by\n kappa."} {"_id": "q_8459", "text": "Convenience method to create a kappa file object from a file on disk\n\n Inputs\n ------\n fpath -- 
path to the file on disk\n position -- (default 1) rank among all files of the model while parsing\n see FileMetadata\n file_id -- (default = fpath) the file_id that will be used by kappa."} {"_id": "q_8460", "text": "Add a kappa model given in a string to the project."} {"_id": "q_8461", "text": "Add a kappa model from a file at given path to the project."} {"_id": "q_8462", "text": "Delete file from database only if needed.\n\n When editing and the filefield is a new file,\n deletes the previous file (if any) from the database.\n Call this function immediately BEFORE saving the instance."} {"_id": "q_8463", "text": "Edit the download-link inner text."} {"_id": "q_8464", "text": "Checks the input and output to see if they are valid"} {"_id": "q_8465", "text": "Append new samples to the data_capture array and increment the sample counter\r\n If length reaches Tcapture, then the newest samples will be kept. If Tcapture = 0 \r\n then new values are not appended to the data_capture array."} {"_id": "q_8466", "text": "Append new samples to the data_capture_left array and the data_capture_right\r\n array and increment the sample counter. If length reaches Tcapture, then the \r\n newest samples will be kept. If Tcapture = 0 then new values are not appended \r\n to the data_capture array."} {"_id": "q_8467", "text": "Add new tic time to the DSP_tic list. Will not be called if\r\n Tcapture = 0."} {"_id": "q_8468", "text": "Add new toc time to the DSP_toc list. Will not be called if\r\n Tcapture = 0."} {"_id": "q_8469", "text": "The average of the root values is used when multiplicity \r\n is greater than one.\r\n\r\n Mark Wickert October 2016"} {"_id": "q_8470", "text": "Cruise control with PI controller and hill disturbance.\n\n This function returns various system function configurations\n for the cruise control Case Study example found in \n the supplementary article. 
The plant model is obtained by\n linearizing the equations of motion and the controller contains a\n proportional and integral gain term set via the closed-loop parameters\n natural frequency wn (rad/s) and damping zeta.\n\n Parameters\n ----------\n wn : closed-loop natural frequency in rad/s, nominally 0.1\n zeta : closed-loop damping factor, nominally 1.0\n T : vehicle time constant, nominally 10 s\n vcruise : cruise velocity set point, nominally 75 mph\n vmax : maximum vehicle velocity, nominally 120 mph\n tf_mode : 'H', 'HE', 'HVW', or 'HED' controls the system function returned by the function \n 'H' : closed-loop system function V(s)/R(s)\n 'HE' : closed-loop system function E(s)/R(s)\n 'HVW' : closed-loop system function V(s)/W(s)\n 'HED' : closed-loop system function E(s)/D(s), where D is the hill disturbance input\n\n Returns\n -------\n b : numerator coefficient ndarray\n a : denominator coefficient ndarray \n\n Examples\n --------\n >>> # return the closed-loop system function output/input velocity\n >>> b,a = cruise_control(wn,zeta,T,vcruise,vmax,tf_mode='H')\n >>> # return the closed-loop system function loop error/hill disturbance\n >>> b,a = cruise_control(wn,zeta,T,vcruise,vmax,tf_mode='HED')"} {"_id": "q_8471", "text": "Stereo demod from complex baseband at sampling rate fs.\r\n Assume fs is 2400 ksps\r\n \r\n Mark Wickert July 2017"} {"_id": "q_8472", "text": "Write IIR SOS Header Files\r\n File format is compatible with CMSIS-DSP IIR \r\n Directform II Filter Functions\r\n \r\n Mark Wickert March 2015-October 2016"} {"_id": "q_8473", "text": "Eye pattern plot of a baseband digital communications waveform.\n\n The signal must be real, but can be multivalued in terms of the underlying\n modulation scheme. 
Used for BPSK eye plots in the Case Study article.\n\n Parameters\n ----------\n x : ndarray of the real input data vector/array\n L : display length in samples (usually two symbols)\n S : start index\n\n Returns\n -------\n None : A plot window opens containing the eye plot\n \n Notes\n -----\n Increase S to eliminate filter transients.\n \n Examples\n --------\n 1000 bits at 10 samples per bit with 'rc' shaping.\n\n >>> import matplotlib.pyplot as plt\n >>> from sk_dsp_comm import digitalcom as dc\n >>> x,b, data = dc.NRZ_bits(1000,10,'rc')\n >>> dc.eye_plot(x,20,60)\n >>> plt.show()"} {"_id": "q_8474", "text": "Sample a baseband digital communications waveform at the symbol spacing.\n\n Parameters\n ----------\n x : ndarray of the input digital comm signal\n Ns : number of samples per symbol (bit)\n start : the array index to start the sampling\n\n Returns\n -------\n xI : ndarray of the real part of x following sampling\n xQ : ndarray of the imaginary part of x following sampling\n\n Notes\n -----\n Normally the signal is complex, so the scatter plot contains \n clusters at points in the complex plane. For a binary signal \n such as BPSK, the point centers are nominally +/-1 on the real\n axis. 
Start is used to eliminate transients from the FIR\n pulse shaping filters from appearing in the scatter plot.\n\n Examples\n --------\n >>> import matplotlib.pyplot as plt\n >>> from sk_dsp_comm import digitalcom as dc\n >>> x,b, data = dc.NRZ_bits(1000,10,'rc')\n\n Add some noise so points are now scattered about +/-1.\n\n >>> y = dc.cpx_AWGN(x,20,10)\n >>> yI,yQ = dc.scatter(y,10,60)\n >>> plt.plot(yI,yQ,'.')\n >>> plt.grid()\n >>> plt.xlabel('In-Phase')\n >>> plt.ylabel('Quadrature')\n >>> plt.axis('equal')\n >>> plt.show()"} {"_id": "q_8475", "text": "This function generates"} {"_id": "q_8476", "text": "A truncated square root raised cosine pulse used in digital communications.\n\n The pulse shaping factor :math:`0 < \\\\alpha < 1` is required as well as the\n truncation factor M which sets the pulse duration to be :math:`2*M*T_{symbol}`.\n \n\n Parameters\n ----------\n Ns : number of samples per symbol\n alpha : excess bandwidth factor on (0, 1), e.g., 0.35\n M : equals RC one-sided symbol truncation factor\n\n Returns\n -------\n b : ndarray containing the pulse shape\n\n Notes\n -----\n The pulse shape b is typically used as the FIR filter coefficients\n when forming a pulse shaped digital communications waveform. 
When \n square root raised cosine (SRC) pulse is used to generate Tx signals and\n at the receiver used as a matched filter (receiver FIR filter), the \n received signal is now raised cosine shaped, thus having zero\n intersymbol interference and the optimum removal of additive white \n noise if present at the receiver input.\n\n Examples\n --------\n Ten samples per symbol and :math:`\\\\alpha = 0.35`.\n\n >>> import matplotlib.pyplot as plt\n >>> from numpy import arange\n >>> from sk_dsp_comm.digitalcom import sqrt_rc_imp\n >>> b = sqrt_rc_imp(10,0.35)\n >>> n = arange(-10*6,10*6+1)\n >>> plt.stem(n,b)\n >>> plt.show()"} {"_id": "q_8477", "text": "Convert an unsigned integer to a numpy binary array with the first\n element the MSB and the last element the LSB."} {"_id": "q_8478", "text": "Convert a binary array back to a nonnegative integer. The array length is \n the bit width. The first input index holds the MSB and the last holds the LSB."} {"_id": "q_8479", "text": "Filter the signal"} {"_id": "q_8480", "text": "Filter the signal using second-order sections"} {"_id": "q_8481", "text": "Celery task decorator. Forces the task to have only one running instance at a time.\n\n Use with bound tasks (@celery.task(bind=True)).\n\n Modeled after:\n http://loose-bits.com/2010/10/distributed-task-locking-in-celery.html\n http://blogs.it.ox.ac.uk/inapickle/2012/01/05/python-decorators-with-optional-arguments/\n\n Written by @Robpol86.\n\n :raise OtherInstanceError: If another instance is already running.\n\n :param function func: The function to decorate, must be also decorated by @celery.task.\n :param int lock_timeout: Lock timeout in seconds plus five more seconds, in case the task crashes and fails to\n release the lock. If not specified, the values of the task's soft/hard limits are used. If all else fails,\n timeout will be 5 minutes.\n :param bool include_args: Include the md5 checksum of the arguments passed to the task in the Redis key. 
This allows\n the same task to run with different arguments, only stopping a task from running if another instance of it is\n running with the same arguments."} {"_id": "q_8482", "text": "Remove the lock regardless of timeout."} {"_id": "q_8483", "text": "Iterator used to iterate in chunks over an array of size `num_samples`.\n At each iteration returns `chunksize` except for the last iteration."} {"_id": "q_8484", "text": "Reduce with `func`, chunk by chunk, the passed pytable `array`."} {"_id": "q_8485", "text": "Load the array `data` in the .mat file `fname`."} {"_id": "q_8486", "text": "Check whether the git executable is found."} {"_id": "q_8487", "text": "Get the Git version."} {"_id": "q_8488", "text": "Returns whether there are uncommitted changes in the working dir."} {"_id": "q_8489", "text": "Get one-line description of HEAD commit for repository in current dir."} {"_id": "q_8490", "text": "Get the HEAD commit SHA1 of repository in current dir."} {"_id": "q_8491", "text": "Print the last commit line and any uncommitted changes."} {"_id": "q_8492", "text": "Store parameters in `params` in `h5file.root.parameters`.\n\n `nparams` (dict)\n A dict as returned by `get_params()` in `ParticlesSimulation()`\n The format is:\n keys:\n used as parameter name\n values: (2-elements tuple)\n first element is the parameter value\n second element is a string used as \"title\" (description)\n `attr_params` (dict)\n A dict whose items are stored as attributes in '/parameters'"} {"_id": "q_8493", "text": "Return pathlib.Path for a data-file with given hash and prefix."} {"_id": "q_8494", "text": "Return a RandomState, equal to the input unless rs is None.\n\n When rs is None, try to get the random state from the\n 'last_random_state' attribute in `group`. When not available,\n use `seed` to generate a random state. 
When seed is None the returned\n random state will have a random seed."} {"_id": "q_8495", "text": "Compact representation of all simulation parameters"} {"_id": "q_8496", "text": "A dict containing all the simulation numeric-parameters.\n\n The values are 2-element tuples: first element is the value and\n second element is a string describing the parameter (metadata)."} {"_id": "q_8497", "text": "Print on-disk array sizes required for current set of parameters."} {"_id": "q_8498", "text": "Simulate Brownian motion trajectories and emission rates.\n\n This method performs the Brownian motion simulation using the current\n set of parameters. Before running this method you can check the\n disk-space requirements using :method:`print_sizes`.\n\n Results are stored to disk in HDF5 format and are accessible\n in `self.emission`, `self.emission_tot` and `self.position` as\n pytables arrays.\n\n Arguments:\n save_pos (bool): if True, save the particles 3D trajectories\n total_emission (bool): if True, store only the total emission array\n containing the sum of emission of all the particles.\n rs (RandomState object): random state object used as random number\n generator. 
If None, use a random state initialized from seed.\n seed (uint): when `rs` is None, `seed` is used to initialize the\n random state, otherwise is ignored.\n wrap_func (function): the function used to apply the boundary\n condition (use :func:`wrap_periodic` or :func:`wrap_mirror`).\n path (string): a folder where simulation data is saved.\n verbose (bool): if False, prints no output."} {"_id": "q_8499", "text": "Simulate timestamps from emission trajectories.\n\n Uses attributes: `.t_step`.\n\n Returns:\n A tuple of two arrays: timestamps and particles."} {"_id": "q_8500", "text": "Compute one timestamps array for a mixture of N populations.\n\n Timestamp data are saved to disk and accessible as pytables arrays in\n `._timestamps` and `._tparticles`.\n The background generated timestamps are assigned a\n conventional particle number (last particle index + 1).\n\n Arguments:\n max_rates (list): list of the peak max emission rate for each\n population.\n populations (list of slices): slices to `self.particles`\n defining each population.\n bg_rate (float, cps): rate for a Poisson background process\n rs (RandomState object): random state object used as random number\n generator. If None, use a random state initialized from seed.\n seed (uint): when `rs` is None, `seed` is used to initialize the\n random state, otherwise is ignored.\n chunksize (int): chunk size used for the on-disk timestamp array\n comp_filter (tables.Filter or None): compression filter to use\n for the on-disk `timestamps` and `tparticles` arrays.\n If None use default compression.\n overwrite (bool): if True, overwrite any pre-existing timestamps\n array. If False, never overwrite. 
The outcome of simulating an\n existing array is controlled by `skip_existing` flag.\n skip_existing (bool): if True, skip simulation if the same\n timestamps array is already present.\n scale (int): `self.t_step` is multiplied by `scale` to obtain the\n timestamps units in seconds.\n path (string): folder where to save the data.\n timeslice (float or None): timestamps are simulated until\n `timeslice` seconds. If None, simulate until `self.t_max`."} {"_id": "q_8501", "text": "Merge donor and acceptor timestamps and particle arrays.\n\n Parameters:\n ts_d (array): donor timestamp array\n ts_par_d (array): donor particles array\n ts_a (array): acceptor timestamp array\n ts_par_a (array): acceptor particles array\n\n Returns:\n Arrays: timestamps, acceptor bool mask, timestamp particle"} {"_id": "q_8502", "text": "Diffusion coefficients of the two specified populations."} {"_id": "q_8503", "text": "2-tuple of slices for selection of two populations."} {"_id": "q_8504", "text": "Compute hash of D and A timestamps for single-step D+A case."} {"_id": "q_8505", "text": "Merge donor and acceptor timestamps, computes `ts`, `a_ch`, `part`."} {"_id": "q_8506", "text": "Create a smFRET Photon-HDF5 file with current timestamps."} {"_id": "q_8507", "text": "Print the HDF5 attributes for `node_name`.\n\n Parameters:\n data_file (pytables HDF5 file object): the data file to print\n node_name (string): name of the path inside the file to be printed.\n Can be either a group or a leaf-node. Default: '/', the root node.\n which (string): Valid values are 'user' for user-defined attributes,\n 'sys' for pytables-specific attributes and 'all' to print both\n groups of attributes. 
Default 'user'.\n compress (bool): if True displays at most a line for each attribute.\n Default False."} {"_id": "q_8508", "text": "Print all the sub-groups in `group` and leaf-nodes children of `group`.\n\n Parameters:\n data_file (pytables HDF5 file object): the data file to print\n group (string): path name of the group to be printed.\n Default: '/', the root node."} {"_id": "q_8509", "text": "Train model on given training examples and return the list of costs after each minibatch is processed.\n\n Args:\n trX (list) -- Inputs\n trY (list) -- Outputs\n batch_size (int, optional) -- number of examples in a minibatch (default 64)\n n_epochs (int, optional) -- number of epochs to train for (default 1)\n len_filter (object, optional) -- object to filter training example by length (default LenFilter())\n snapshot_freq (int, optional) -- number of epochs between saving model snapshots (default 1)\n path (str, optional) -- prefix of path where model snapshots are saved.\n If None, no snapshots are saved (default None)\n\n Returns:\n list -- costs of model after processing each minibatch"} {"_id": "q_8510", "text": "Sets defaults for ``class Meta`` declarations.\n\n Arguments can either be extracted from a `module` (in that case\n all attributes starting from `prefix` are used):\n\n >>> import foo\n >>> configure(foo)\n\n or passed explicitly as keyword arguments:\n\n >>> configure(database='foo')\n\n .. 
warning:: Current implementation is by no means thread-safe --\n use it wisely."} {"_id": "q_8511", "text": "Converts a given string from CamelCase to under_score.\n\n >>> to_underscore('FooBar')\n 'foo_bar'"} {"_id": "q_8512", "text": "Generates a plane on the xz axis of a specific size and resolution.\n Normals and texture coordinates are also included.\n\n Args:\n size: (x, y) tuple\n resolution: (x, y) tuple\n\n Returns:\n A :py:class:`demosys.opengl.vao.VAO` instance"} {"_id": "q_8513", "text": "Deferred loading of the scene\n\n :param scene: The scene object\n :param file: Resolved path if changed by finder"} {"_id": "q_8514", "text": "Loads a binary gltf file"} {"_id": "q_8515", "text": "Pre-parse buffer mappings for each VBO to detect interleaved data for a primitive"} {"_id": "q_8516", "text": "Does the buffer interleave with this one?"} {"_id": "q_8517", "text": "Create the VBO"} {"_id": "q_8518", "text": "Set the 3D position of the camera\n\n :param x: float\n :param y: float\n :param z: float"} {"_id": "q_8519", "text": "Look at a specific point\n\n :param vec: Vector3 position\n :param pos: python list [x, y, z]\n :return: Camera matrix"} {"_id": "q_8520", "text": "The standard lookAt method\n\n :param pos: current position\n :param target: target position to look at\n :param up: direction up"} {"_id": "q_8521", "text": "Set the camera position move state\n\n :param direction: What direction to update\n :param activate: Start or stop moving in the direction"} {"_id": "q_8522", "text": "Translate string into character texture positions"} {"_id": "q_8523", "text": "Initialize, load and run\n\n :param manager: The effect manager to use"} {"_id": "q_8524", "text": "Draw scene and mesh bounding boxes"} {"_id": "q_8525", "text": "Applies mesh programs to meshes"} {"_id": "q_8526", "text": "Calculate scene bbox"} {"_id": "q_8527", "text": "Generates random positions inside a confined box.\n\n Args:\n count (int): Number of points to generate\n\n Keyword 
Args:\n range_x (tuple): min-max range for x axis: Example (-10.0, 10.0)\n range_y (tuple): min-max range for y axis: Example (-10.0, 10.0)\n range_z (tuple): min-max range for z axis: Example (-10.0, 10.0)\n seed (int): The random seed\n\n Returns:\n A :py:class:`demosys.opengl.vao.VAO` instance"} {"_id": "q_8528", "text": "Play the music"} {"_id": "q_8529", "text": "Draw framebuffers for debug purposes.\n We need to supply near and far plane so the depth buffer can be linearized when visualizing.\n\n :param near: Projection near value\n :param far: Projection far value"} {"_id": "q_8530", "text": "Render light volumes"} {"_id": "q_8531", "text": "Render outlines of light volumes"} {"_id": "q_8532", "text": "Load a single shader"} {"_id": "q_8533", "text": "Load a texture array"} {"_id": "q_8534", "text": "Draw the mesh using the assigned mesh program\n\n :param projection_matrix: projection_matrix (bytes)\n :param view_matrix: view_matrix (bytes)\n :param camera_matrix: camera_matrix (bytes)"} {"_id": "q_8535", "text": "Set the current time jumping in the timeline.\n\n Args:\n value (float): The new time"} {"_id": "q_8536", "text": "Draw function called by the system every frame when the effect is active.\n This method raises ``NotImplementedError`` unless implemented.\n\n Args:\n time (float): The current time in seconds.\n frametime (float): The time the previous frame used to render in seconds.\n target (``moderngl.Framebuffer``): The target FBO for the effect."} {"_id": "q_8537", "text": "Get a program by its label\n\n Args:\n label (str): The label for the program\n\n Returns: py:class:`moderngl.Program` instance"} {"_id": "q_8538", "text": "Create a projection matrix with the following parameters.\n When ``aspect_ratio`` is not provided the configured aspect\n ratio for the window will be used.\n\n Args:\n fov (float): Field of view (float)\n near (float): Camera near value\n far (float): Camera far value\n\n Keyword Args:\n aspect_ratio (float): Aspect 
ratio of the viewport\n\n Returns:\n The projection matrix as a float32 :py:class:`numpy.array`"} {"_id": "q_8539", "text": "Creates a transformation matrix with rotations and translation.\n\n Args:\n rotation: 3 component vector as a list, tuple, or :py:class:`pyrr.Vector3`\n translation: 3 component vector as a list, tuple, or :py:class:`pyrr.Vector3`\n\n Returns:\n A 4x4 matrix as a :py:class:`numpy.array`"} {"_id": "q_8540", "text": "Creates a normal matrix from modelview matrix\n\n Args:\n modelview: The modelview matrix\n\n Returns:\n A 3x3 Normal matrix as a :py:class:`numpy.array`"} {"_id": "q_8541", "text": "Scan for available templates in effect_templates"} {"_id": "q_8542", "text": "Get the absolute path to the root of the demosys package"} {"_id": "q_8543", "text": "Load a file in text mode"} {"_id": "q_8544", "text": "Get a finder class from an import path.\n Raises ``demosys.core.exceptions.ImproperlyConfigured`` if the finder is not found.\n This function uses an lru cache.\n\n :param import_path: string representing an import path\n :return: An instance of the finder"} {"_id": "q_8545", "text": "Find a file in the path. The file may exist in multiple\n paths. The last found file will be returned.\n\n :param path: The path to find\n :return: The absolute path to the file or None if not found"} {"_id": "q_8546", "text": "Update the internal projection matrix based on current values\n or values passed in if specified.\n\n :param aspect_ratio: New aspect ratio\n :param fov: New field of view\n :param near: New near value\n :param far: New far value"} {"_id": "q_8547", "text": "Swap buffers, increment the frame counter and pull events."} {"_id": "q_8548", "text": "Ensure glfw library version is compatible"} {"_id": "q_8549", "text": "Translate the buffer format"} {"_id": "q_8550", "text": "Set the current time. 
This can be used to jump in the timeline.\n\n Args:\n value (float): The new time"} {"_id": "q_8551", "text": "Resolve scene loader based on file extension"} {"_id": "q_8552", "text": "Pyglet specific callback for window resize events."} {"_id": "q_8553", "text": "Swap buffers, increment frame counter and pull events"} {"_id": "q_8554", "text": "Creates a sphere.\n\n Keyword Args:\n radius (float): Radius of the sphere\n rings (int): number of horizontal rings\n sectors (int): number of vertical segments\n\n Returns:\n A :py:class:`demosys.opengl.vao.VAO` instance"} {"_id": "q_8555", "text": "Attempts to assign a loader class to a resource description\n\n :param meta: The resource description instance"} {"_id": "q_8556", "text": "Attempts to get a loader\n\n :param meta: The resource description instance\n :param raise_on_error: Raise ImproperlyConfigured if the loader cannot be resolved\n :returns: The requested loader class"} {"_id": "q_8557", "text": "Pyqt specific resize callback."} {"_id": "q_8558", "text": "Draws a frame. Internally it calls the\n configured timeline's draw method.\n\n Args:\n current_time (float): The current time (preferably always from the configured timer class)\n frame_time (float): The duration of the previous frame in seconds"} {"_id": "q_8559", "text": "Sets the clear values for the window buffer.\n\n Args:\n red (float): red component\n green (float): green component\n blue (float): blue component\n alpha (float): alpha component\n depth (float): depth value"} {"_id": "q_8560", "text": "Handles the standard keyboard events such as camera movements,\n taking a screenshot, closing the window etc.\n\n Can be overridden to add new keyboard events. Ensure this method\n is also called if you want to keep the standard features.\n\n Arguments:\n key: The key that was pressed or released\n action: The key action. 
Can be `ACTION_PRESS` or `ACTION_RELEASE`\n modifier: Modifiers such as holding shift or ctrl"} {"_id": "q_8561", "text": "The standard mouse movement event method.\n Can be overridden to add new functionality.\n By default this feeds the system camera with new values.\n\n Args:\n x: The current mouse x position\n y: The current mouse y position\n dx: Delta x position (x position difference from the previous event)\n dy: Delta y position (y position difference from the previous event)"} {"_id": "q_8562", "text": "Start the timer"} {"_id": "q_8563", "text": "Toggle pause mode"} {"_id": "q_8564", "text": "Check if the loader has a supported file extension"} {"_id": "q_8565", "text": "Get or create a Track object.\n\n :param name: Name of the track\n :return: Track object"} {"_id": "q_8566", "text": "Get all command names in a folder\n\n :return: List of command names"} {"_id": "q_8567", "text": "Override settings values"} {"_id": "q_8568", "text": "Hack in program directory"} {"_id": "q_8569", "text": "Hack in texture directory"} {"_id": "q_8570", "text": "Render the VAO.\n\n Args:\n program: The ``moderngl.Program``\n\n Keyword Args:\n mode: Override the draw mode (``TRIANGLES`` etc)\n vertices (int): The number of vertices to transform\n first (int): The index of the first vertex to start with\n instances (int): The number of instances"} {"_id": "q_8571", "text": "Obtain the ``moderngl.VertexArray`` instance for the program.\n The instance is only created once and cached internally.\n\n Returns: ``moderngl.VertexArray`` instance"} {"_id": "q_8572", "text": "Draw code for the mesh. 
Should be overridden.\n\n :param projection_matrix: projection_matrix (bytes)\n :param view_matrix: view_matrix (bytes)\n :param camera_matrix: camera_matrix (bytes)\n :param time: The current time"} {"_id": "q_8573", "text": "Parse the effect package string.\n Can contain the package python path or path to effect class in an effect package.\n\n Examples::\n\n # Path to effect package\n examples.cubes\n\n # Path to effect class\n examples.cubes.Cubes\n\n Args:\n path: python path to effect package. May also include effect class name.\n\n Returns:\n tuple: (package_path, effect_class)"} {"_id": "q_8574", "text": "Get all resources registered in effect packages.\n These are typically located in ``resources.py``"} {"_id": "q_8575", "text": "Registers a single package\n\n :param name: (str) The effect package to add"} {"_id": "q_8576", "text": "Get a package by python path. Can also contain path to an effect.\n\n Args:\n name (str): Path to effect package or effect\n\n Returns:\n The requested EffectPackage\n\n Raises:\n EffectError when no package is found"} {"_id": "q_8577", "text": "Returns the runnable effect in the package"} {"_id": "q_8578", "text": "Find the effect package"} {"_id": "q_8579", "text": "Iterate the module attributes picking out effects"} {"_id": "q_8580", "text": "Fetch the resource list"} {"_id": "q_8581", "text": "Fetch track value for every runnable effect.\r\n If the value is > 0.5 we draw it."} {"_id": "q_8582", "text": "Load a 2d texture"} {"_id": "q_8583", "text": "Initialize a single glsl string containing all shaders"} {"_id": "q_8584", "text": "Initialize multiple shader strings"} {"_id": "q_8585", "text": "Loads this project instance"} {"_id": "q_8586", "text": "Reload all shader programs with the reloadable flag set"} {"_id": "q_8587", "text": "Get components and bytes for an image"} {"_id": "q_8588", "text": "Write manage.py in the current directory"} {"_id": "q_8589", "text": "Returns the absolute path to template directory"} {"_id": 
"q_8590", "text": "Resolve program loader"} {"_id": "q_8591", "text": "Encode a text using arithmetic coding with the provided probabilities.\n\n This is a wrapper for :py:meth:`Arithmetic.encode`.\n\n Parameters\n ----------\n text : str\n A string to encode\n probs : dict\n A probability statistics dictionary generated by\n :py:meth:`Arithmetic.train`\n\n Returns\n -------\n tuple\n The arithmetically coded text\n\n Example\n -------\n >>> pr = ac_train('the quick brown fox jumped over the lazy dog')\n >>> ac_encode('align', pr)\n (16720586181, 34)"} {"_id": "q_8592", "text": "r\"\"\"Generate a probability dict from the provided text.\n\n Text to 0-order probability statistics as a dict\n\n Parameters\n ----------\n text : str\n The text data over which to calculate probability statistics. This\n must not contain the NUL (0x00) character because that is used to\n indicate the end of data.\n\n Example\n -------\n >>> ac = Arithmetic()\n >>> ac.train('the quick brown fox jumped over the lazy dog')\n >>> ac.get_probs()\n {' ': (Fraction(0, 1), Fraction(8, 45)),\n 'o': (Fraction(8, 45), Fraction(4, 15)),\n 'e': (Fraction(4, 15), Fraction(16, 45)),\n 'u': (Fraction(16, 45), Fraction(2, 5)),\n 't': (Fraction(2, 5), Fraction(4, 9)),\n 'r': (Fraction(4, 9), Fraction(22, 45)),\n 'h': (Fraction(22, 45), Fraction(8, 15)),\n 'd': (Fraction(8, 15), Fraction(26, 45)),\n 'z': (Fraction(26, 45), Fraction(3, 5)),\n 'y': (Fraction(3, 5), Fraction(28, 45)),\n 'x': (Fraction(28, 45), Fraction(29, 45)),\n 'w': (Fraction(29, 45), Fraction(2, 3)),\n 'v': (Fraction(2, 3), Fraction(31, 45)),\n 'q': (Fraction(31, 45), Fraction(32, 45)),\n 'p': (Fraction(32, 45), Fraction(11, 15)),\n 'n': (Fraction(11, 15), Fraction(34, 45)),\n 'm': (Fraction(34, 45), Fraction(7, 9)),\n 'l': (Fraction(7, 9), Fraction(4, 5)),\n 'k': (Fraction(4, 5), Fraction(37, 45)),\n 'j': (Fraction(37, 45), Fraction(38, 45)),\n 'i': (Fraction(38, 45), Fraction(13, 15)),\n 'g': (Fraction(13, 15), Fraction(8, 9)),\n 'f': 
(Fraction(8, 9), Fraction(41, 45)),\n 'c': (Fraction(41, 45), Fraction(14, 15)),\n 'b': (Fraction(14, 15), Fraction(43, 45)),\n 'a': (Fraction(43, 45), Fraction(44, 45)),\n '\\x00': (Fraction(44, 45), Fraction(1, 1))}"} {"_id": "q_8593", "text": "r\"\"\"Fill in self.ngcorpus from a Corpus argument.\n\n Parameters\n ----------\n corpus :Corpus\n The Corpus from which to initialize the n-gram corpus\n n_val : int\n Maximum n value for n-grams\n bos : str\n String to insert as an indicator of beginning of sentence\n eos : str\n String to insert as an indicator of end of sentence\n\n Raises\n ------\n TypeError\n Corpus argument of the Corpus class required.\n\n Example\n -------\n >>> tqbf = 'The quick brown fox jumped over the lazy dog.\\n'\n >>> tqbf += 'And then it slept.\\n And the dog ran off.'\n >>> ngcorp = NGramCorpus()\n >>> ngcorp.corpus_importer(Corpus(tqbf))"} {"_id": "q_8594", "text": "Build up a corpus entry recursively.\n\n Parameters\n ----------\n corpus : Corpus\n The corpus\n words : [str]\n Words to add to the corpus\n count : int\n Count of words"} {"_id": "q_8595", "text": "Fill in self.ngcorpus from a Google NGram corpus file.\n\n Parameters\n ----------\n corpus_file : file\n The Google NGram file from which to initialize the n-gram corpus"} {"_id": "q_8596", "text": "r\"\"\"Return term frequency.\n\n Parameters\n ----------\n term : str\n The term for which to calculate tf\n\n Returns\n -------\n float\n The term frequency (tf)\n\n Raises\n ------\n ValueError\n tf can only calculate the frequency of individual words\n\n Examples\n --------\n >>> tqbf = 'The quick brown fox jumped over the lazy dog.\\n'\n >>> tqbf += 'And then it slept.\\n And the dog ran off.'\n >>> ngcorp = NGramCorpus(Corpus(tqbf))\n >>> NGramCorpus(Corpus(tqbf)).tf('the')\n 1.3010299956639813\n >>> NGramCorpus(Corpus(tqbf)).tf('fox')\n 1.0"} {"_id": "q_8597", "text": "r\"\"\"Return a word decoded from BWT form.\n\n Parameters\n ----------\n code : str\n The word to 
transform from BWT form\n terminator : str\n A character added to signal the end of the string\n\n Returns\n -------\n str\n Word decoded by BWT\n\n Raises\n ------\n ValueError\n Specified terminator absent from code.\n\n Examples\n --------\n >>> bwt = BWT()\n >>> bwt.decode('n\\x00ilag')\n 'align'\n >>> bwt.decode('annb\\x00aa')\n 'banana'\n >>> bwt.decode('annb@aa', '@')\n 'banana'"} {"_id": "q_8598", "text": "Return the indel distance between two strings.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n int\n Indel distance\n\n Examples\n --------\n >>> cmp = Indel()\n >>> cmp.dist_abs('cat', 'hat')\n 2\n >>> cmp.dist_abs('Niall', 'Neil')\n 3\n >>> cmp.dist_abs('Colin', 'Cuilen')\n 5\n >>> cmp.dist_abs('ATCG', 'TAGC')\n 4"} {"_id": "q_8599", "text": "Return the normalized indel distance between two strings.\n\n This is equivalent to normalized Levenshtein distance, when only\n inserts and deletes are possible.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Normalized indel distance\n\n Examples\n --------\n >>> cmp = Indel()\n >>> round(cmp.dist('cat', 'hat'), 12)\n 0.333333333333\n >>> round(cmp.dist('Niall', 'Neil'), 12)\n 0.333333333333\n >>> round(cmp.dist('Colin', 'Cuilen'), 12)\n 0.454545454545\n >>> cmp.dist('ATCG', 'TAGC')\n 0.5"} {"_id": "q_8600", "text": "Return similarity.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n *args\n Variable length argument list.\n **kwargs\n Arbitrary keyword arguments.\n\n Returns\n -------\n float\n Similarity"} {"_id": "q_8601", "text": "Return the Tversky distance between two strings.\n\n This is a wrapper for :py:meth:`Tversky.dist`.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or 
QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n alpha : float\n Tversky index parameter as described above\n beta : float\n Tversky index parameter as described above\n bias : float\n The symmetric Tversky index bias parameter\n\n Returns\n -------\n float\n Tversky distance\n\n Examples\n --------\n >>> dist_tversky('cat', 'hat')\n 0.6666666666666667\n >>> dist_tversky('Niall', 'Neil')\n 0.7777777777777778\n >>> dist_tversky('aluminum', 'Catalan')\n 0.9375\n >>> dist_tversky('ATCG', 'TAGC')\n 1.0"} {"_id": "q_8602", "text": "Return the longest common subsequence of two strings.\n\n Based on the dynamic programming algorithm from\n http://rosettacode.org/wiki/Longest_common_subsequence\n :cite:`rosettacode:2018b`. This is licensed GFDL 1.2.\n\n Modifications include:\n conversion to a numpy array in place of a list of lists\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n str\n The longest common subsequence\n\n Examples\n --------\n >>> sseq = LCSseq()\n >>> sseq.lcsseq('cat', 'hat')\n 'at'\n >>> sseq.lcsseq('Niall', 'Neil')\n 'Nil'\n >>> sseq.lcsseq('aluminum', 'Catalan')\n 'aln'\n >>> sseq.lcsseq('ATCG', 'TAGC')\n 'AC'"} {"_id": "q_8603", "text": "Return the prefix similarity of two strings.\n\n Prefix similarity is the ratio of the length of the shorter term that\n exactly matches the longer term to the length of the shorter term,\n beginning at the start of both terms.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Prefix similarity\n\n Examples\n --------\n >>> cmp = Prefix()\n >>> cmp.sim('cat', 'hat')\n 0.0\n >>> cmp.sim('Niall', 'Neil')\n 0.25\n >>> cmp.sim('aluminum', 'Catalan')\n 0.0\n >>> cmp.sim('ATCG', 'TAGC')\n 0.0"} {"_id": "q_8604", "text": "r\"\"\"Return the raw corpus.\n\n This is reconstructed by 
joining sub-components with the corpus' split\n characters\n\n Returns\n -------\n str\n The raw corpus\n\n Example\n -------\n >>> tqbf = 'The quick brown fox jumped over the lazy dog.\\n'\n >>> tqbf += 'And then it slept.\\n And the dog ran off.'\n >>> corp = Corpus(tqbf)\n >>> print(corp.raw())\n The quick brown fox jumped over the lazy dog.\n And then it slept.\n And the dog ran off.\n >>> len(corp.raw())\n 85"} {"_id": "q_8605", "text": "Return the best guess language ID for the word and language choices.\n\n Parameters\n ----------\n name : str\n The term to guess the language of\n name_mode : str\n The name mode of the algorithm: ``gen`` (default),\n ``ash`` (Ashkenazi), or ``sep`` (Sephardic)\n\n Returns\n -------\n int\n Language ID"} {"_id": "q_8606", "text": "Reassess the language of the terms and call the phonetic encoder.\n\n Uses a split multi-word term.\n\n Parameters\n ----------\n term : str\n The term to encode via Beider-Morse\n name_mode : str\n The name mode of the algorithm: ``gen`` (default),\n ``ash`` (Ashkenazi), or ``sep`` (Sephardic)\n rules : tuple\n The set of initial phonetic transform regexps\n final_rules1 : tuple\n The common set of final phonetic transform regexps\n final_rules2 : tuple\n The specific set of final phonetic transform regexps\n concat : bool\n A flag to indicate concatenation\n\n Returns\n -------\n str\n A Beider-Morse phonetic code"} {"_id": "q_8607", "text": "Apply a set of final rules to the phonetic encoding.\n\n Parameters\n ----------\n phonetic : str\n The term to which to apply the final rules\n final_rules : tuple\n The set of final phonetic transform regexps\n language_arg : int\n An integer representing the target language of the phonetic\n encoding\n strip : bool\n Flag to indicate whether to normalize the language attributes\n\n Returns\n -------\n str\n A Beider-Morse phonetic code"} {"_id": "q_8608", "text": "Expand phonetic alternates separated by |s.\n\n Parameters\n ----------\n phonetic : str\n A 
Beider-Morse phonetic encoding\n\n Returns\n -------\n str\n A Beider-Morse phonetic code"} {"_id": "q_8609", "text": "Remove duplicates from a phonetic encoding list.\n\n Parameters\n ----------\n phonetic : str\n A Beider-Morse phonetic encoding\n\n Returns\n -------\n str\n A Beider-Morse phonetic code"} {"_id": "q_8610", "text": "Remove embedded bracketed attributes.\n\n This (potentially) bitwise-ands bracketed attributes together and adds\n to the end.\n This is applied to a single alternative at a time -- not to a\n parenthesized list.\n It removes all embedded bracketed attributes, logically-ands them\n together, and places them at the end.\n However if strip is true, this can indeed remove embedded bracketed\n attributes from a parenthesized list.\n\n Parameters\n ----------\n text : str\n A Beider-Morse phonetic encoding (in progress)\n strip : bool\n Remove the bracketed attributes (and throw away)\n\n Returns\n -------\n str\n A Beider-Morse phonetic code\n\n Raises\n ------\n ValueError\n No closing square bracket"} {"_id": "q_8611", "text": "Apply a phonetic regex if compatible.\n\n tests for compatible language rules\n\n to do so, apply the rule, expand the results, and detect alternatives\n with incompatible attributes\n\n then drop each alternative that has incompatible attributes and keep\n those that are compatible\n\n if there are no compatible alternatives left, return false\n\n otherwise return the compatible alternatives\n\n apply the rule\n\n Parameters\n ----------\n phonetic : str\n The Beider-Morse phonetic encoding (so far)\n target : str\n A proposed addition to the phonetic encoding\n language_arg : int\n An integer representing the target language of the phonetic\n encoding\n\n Returns\n -------\n str\n A candidate encoding"} {"_id": "q_8612", "text": "Return the index value for a language code.\n\n This returns l_any if more than one code is specified or the code is\n out of bounds.\n\n Parameters\n ----------\n code : int\n The 
language code to interpret\n name_mode : str\n The name mode of the algorithm: ``gen`` (default),\n ``ash`` (Ashkenazi), or ``sep`` (Sephardic)\n\n Returns\n -------\n int\n Language code index"} {"_id": "q_8613", "text": "Return the strcmp95 distance between two strings.\n\n This is a wrapper for :py:meth:`Strcmp95.dist`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n long_strings : bool\n Set to True to increase the probability of a match when the number of\n matched characters is large. This option allows for a little more\n tolerance when the strings are large. It is not an appropriate test\n when comparing fixed length fields such as phone and social security\n numbers.\n\n Returns\n -------\n float\n Strcmp95 distance\n\n Examples\n --------\n >>> round(dist_strcmp95('cat', 'hat'), 12)\n 0.222222222222\n >>> round(dist_strcmp95('Niall', 'Neil'), 12)\n 0.1545\n >>> round(dist_strcmp95('aluminum', 'Catalan'), 12)\n 0.345238095238\n >>> round(dist_strcmp95('ATCG', 'TAGC'), 12)\n 0.166666666667"} {"_id": "q_8614", "text": "Return the Naval Research Laboratory phonetic encoding of a word.\n\n Parameters\n ----------\n word : str\n The word to transform\n\n Returns\n -------\n str\n The NRL phonetic encoding\n\n Examples\n --------\n >>> pe = NRL()\n >>> pe.encode('the')\n 'DHAX'\n >>> pe.encode('round')\n 'rAWnd'\n >>> pe.encode('quick')\n 'kwIHk'\n >>> pe.encode('eaten')\n 'IYtEHn'\n >>> pe.encode('Smith')\n 'smIHTH'\n >>> pe.encode('Larsen')\n 'lAArsEHn'"} {"_id": "q_8615", "text": "Return the longest common substring of two strings.\n\n Longest common substring (LCSstr).\n\n Based on the code from\n https://en.wikibooks.org/wiki/Algorithm_Implementation/Strings/Longest_common_substring\n :cite:`Wikibooks:2018`.\n This is licensed Creative Commons: Attribution-ShareAlike 3.0.\n\n Modifications include:\n\n - conversion to a numpy array in place of a list of lists\n - conversion to Python 
2/3-safe range from xrange via six\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n str\n The longest common substring\n\n Examples\n --------\n >>> sstr = LCSstr()\n >>> sstr.lcsstr('cat', 'hat')\n 'at'\n >>> sstr.lcsstr('Niall', 'Neil')\n 'N'\n >>> sstr.lcsstr('aluminum', 'Catalan')\n 'al'\n >>> sstr.lcsstr('ATCG', 'TAGC')\n 'A'"} {"_id": "q_8616", "text": "r\"\"\"Return the longest common substring similarity of two strings.\n\n Longest common substring similarity (:math:`sim_{LCSstr}`).\n\n This employs the LCS function to derive a similarity metric:\n :math:`sim_{LCSstr}(s,t) = \\frac{|LCSstr(s,t)|}{max(|s|, |t|)}`\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n LCSstr similarity\n\n Examples\n --------\n >>> sim_lcsstr('cat', 'hat')\n 0.6666666666666666\n >>> sim_lcsstr('Niall', 'Neil')\n 0.2\n >>> sim_lcsstr('aluminum', 'Catalan')\n 0.25\n >>> sim_lcsstr('ATCG', 'TAGC')\n 0.25"} {"_id": "q_8617", "text": "Return the Needleman-Wunsch score of two strings.\n\n This is a wrapper for :py:meth:`NeedlemanWunsch.dist_abs`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n gap_cost : float\n The cost of an alignment gap (1 by default)\n sim_func : function\n A function that returns the similarity of two characters (identity\n similarity by default)\n\n Returns\n -------\n float\n Needleman-Wunsch score\n\n Examples\n --------\n >>> needleman_wunsch('cat', 'hat')\n 2.0\n >>> needleman_wunsch('Niall', 'Neil')\n 1.0\n >>> needleman_wunsch('aluminum', 'Catalan')\n -1.0\n >>> needleman_wunsch('ATCG', 'TAGC')\n 0.0"} {"_id": "q_8618", "text": "Return the matrix similarity of two strings.\n\n With the default parameters, this is identical to sim_ident.\n It is possible for sim_matrix to return values outside of the range\n 
:math:`[0, 1]`, if values outside that range are present in mat,\n mismatch_cost, or match_cost.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n mat : dict\n A dict mapping tuples to costs; the tuples are (src, tar) pairs of\n symbols from the alphabet parameter\n mismatch_cost : float\n The value returned if (src, tar) is absent from mat when src does\n not equal tar\n match_cost : float\n The value returned if (src, tar) is absent from mat when src equals\n tar\n symmetric : bool\n True if the cost of src not matching tar is identical to the cost\n of tar not matching src; in this case, the values in mat need only\n contain (src, tar) or (tar, src), not both\n alphabet : str\n A collection of tokens from which src and tar are drawn; if this is\n defined a ValueError is raised if either tar or src is not found in\n alphabet\n\n Returns\n -------\n float\n Matrix similarity\n\n Raises\n ------\n ValueError\n src value not in alphabet\n ValueError\n tar value not in alphabet\n\n Examples\n --------\n >>> NeedlemanWunsch.sim_matrix('cat', 'hat')\n 0\n >>> NeedlemanWunsch.sim_matrix('hat', 'hat')\n 1"} {"_id": "q_8619", "text": "Return the NCD between two strings using BWT plus RLE.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Compression distance\n\n Examples\n --------\n >>> cmp = NCDbwtrle()\n >>> cmp.dist('cat', 'hat')\n 0.75\n >>> cmp.dist('Niall', 'Neil')\n 0.8333333333333334\n >>> cmp.dist('aluminum', 'Catalan')\n 1.0\n >>> cmp.dist('ATCG', 'TAGC')\n 0.8"} {"_id": "q_8620", "text": "Cast to tuple.\n\n Returns\n -------\n tuple\n The confusion table as a 4-tuple (tp, tn, fp, fn)\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.to_tuple()\n (120, 60, 20, 30)"} {"_id": "q_8621", "text": "Cast to dict.\n\n Returns\n -------\n dict\n The confusion table as a dict\n\n 
Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> import pprint\n >>> pprint.pprint(ct.to_dict())\n {'fn': 30, 'fp': 20, 'tn': 60, 'tp': 120}"} {"_id": "q_8622", "text": "Return population, N.\n\n Returns\n -------\n int\n The population (N) of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.population()\n 230"} {"_id": "q_8623", "text": "r\"\"\"Return precision.\n\n Precision is defined as :math:`\\frac{tp}{tp + fp}`\n\n AKA positive predictive value (PPV)\n\n Cf. https://en.wikipedia.org/wiki/Precision_and_recall\n\n Cf. https://en.wikipedia.org/wiki/Information_retrieval#Precision\n\n Returns\n -------\n float\n The precision of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.precision()\n 0.8571428571428571"} {"_id": "q_8624", "text": "r\"\"\"Return gain in precision.\n\n The gain in precision is defined as:\n :math:`G(precision) = \\frac{precision}{random~ precision}`\n\n Cf. https://en.wikipedia.org/wiki/Gain_(information_retrieval)\n\n Returns\n -------\n float\n The gain in precision of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.precision_gain()\n 1.3142857142857143"} {"_id": "q_8625", "text": "r\"\"\"Return recall.\n\n Recall is defined as :math:`\\frac{tp}{tp + fn}`\n\n AKA sensitivity\n\n AKA true positive rate (TPR)\n\n Cf. https://en.wikipedia.org/wiki/Precision_and_recall\n\n Cf. https://en.wikipedia.org/wiki/Sensitivity_(test)\n\n Cf. https://en.wikipedia.org/wiki/Information_retrieval#Recall\n\n Returns\n -------\n float\n The recall of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.recall()\n 0.8"} {"_id": "q_8626", "text": "r\"\"\"Return accuracy.\n\n Accuracy is defined as :math:`\\frac{tp + tn}{population}`\n\n Cf. 
https://en.wikipedia.org/wiki/Accuracy\n\n Returns\n -------\n float\n The accuracy of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.accuracy()\n 0.782608695652174"} {"_id": "q_8627", "text": "r\"\"\"Return gain in accuracy.\n\n The gain in accuracy is defined as:\n :math:`G(accuracy) = \\frac{accuracy}{random~ accuracy}`\n\n Cf. https://en.wikipedia.org/wiki/Gain_(information_retrieval)\n\n Returns\n -------\n float\n The gain in accuracy of the confusion table\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.accuracy_gain()\n 1.4325259515570934"} {"_id": "q_8628", "text": "r\"\"\"Return logarithmic mean of precision & recall.\n\n The logarithmic mean is:\n 0 if either precision or recall is 0,\n the precision if they are equal,\n otherwise :math:`\\frac{precision - recall}\n {ln(precision) - ln(recall)}`\n\n Cf. https://en.wikipedia.org/wiki/Logarithmic_mean\n\n Returns\n -------\n float\n The logarithmic mean of the confusion table's precision & recall\n\n Example\n -------\n >>> ct = ConfusionTable(120, 60, 20, 30)\n >>> ct.pr_lmean()\n 0.8282429171492667"} {"_id": "q_8629", "text": "Return CLEF German stem.\n\n Parameters\n ----------\n word : str\n The word to stem\n\n Returns\n -------\n str\n Word stem\n\n Examples\n --------\n >>> stmr = CLEFGerman()\n >>> stmr.stem('lesen')\n 'lese'\n >>> stmr.stem('graues')\n 'grau'\n >>> stmr.stem('buchstabieren')\n 'buchstabier'"} {"_id": "q_8630", "text": "Return the \"simplest\" Sift4 distance between two terms.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n max_offset : int\n The number of characters to search for matching letters\n\n Returns\n -------\n int\n The Sift4 distance according to the simplest formula\n\n Examples\n --------\n >>> cmp = Sift4Simplest()\n >>> cmp.dist_abs('cat', 'hat')\n 1\n >>> cmp.dist_abs('Niall', 'Neil')\n 2\n >>> cmp.dist_abs('Colin', 
'Cuilen')\n 3\n >>> cmp.dist_abs('ATCG', 'TAGC')\n 2"} {"_id": "q_8631", "text": "Return the normalized typo similarity between two strings.\n\n This is a wrapper for :py:meth:`Typo.sim`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n metric : str\n Supported values include: ``euclidean``, ``manhattan``,\n ``log-euclidean``, and ``log-manhattan``\n cost : tuple\n A 4-tuple representing the cost of the four possible edits: inserts,\n deletes, substitutions, and shift, respectively (by default:\n (1, 1, 0.5, 0.5)) The substitution & shift costs should be\n significantly less than the cost of an insertion & deletion unless a\n log metric is used.\n layout : str\n Name of the keyboard layout to use (Currently supported:\n ``QWERTY``, ``Dvorak``, ``AZERTY``, ``QWERTZ``)\n\n Returns\n -------\n float\n Normalized typo similarity\n\n Examples\n --------\n >>> round(sim_typo('cat', 'hat'), 12)\n 0.472953716914\n >>> round(sim_typo('Niall', 'Neil'), 12)\n 0.434971857071\n >>> round(sim_typo('Colin', 'Cuilen'), 12)\n 0.430964390437\n >>> sim_typo('ATCG', 'TAGC')\n 0.375"} {"_id": "q_8632", "text": "Return the normalized Manhattan distance between two strings.\n\n This is a wrapper for :py:meth:`Manhattan.dist`.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n alphabet : collection or int\n The values or size of the alphabet\n\n Returns\n -------\n float\n The normalized Manhattan distance\n\n Examples\n --------\n >>> dist_manhattan('cat', 'hat')\n 0.5\n >>> round(dist_manhattan('Niall', 'Neil'), 12)\n 0.636363636364\n >>> round(dist_manhattan('Colin', 'Cuilen'), 12)\n 0.692307692308\n >>> dist_manhattan('ATCG', 'TAGC')\n 1.0"} {"_id": "q_8633", "text": "Return the normalized Manhattan similarity of two strings.\n\n This 
is a wrapper for :py:meth:`Manhattan.sim`.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n alphabet : collection or int\n The values or size of the alphabet\n\n Returns\n -------\n float\n The normalized Manhattan similarity\n\n Examples\n --------\n >>> sim_manhattan('cat', 'hat')\n 0.5\n >>> round(sim_manhattan('Niall', 'Neil'), 12)\n 0.363636363636\n >>> round(sim_manhattan('Colin', 'Cuilen'), 12)\n 0.307692307692\n >>> sim_manhattan('ATCG', 'TAGC')\n 0.0"} {"_id": "q_8634", "text": "Return the skeleton key.\n\n Parameters\n ----------\n word : str\n The word to transform into its skeleton key\n\n Returns\n -------\n str\n The skeleton key\n\n Examples\n --------\n >>> sk = SkeletonKey()\n >>> sk.fingerprint('The quick brown fox jumped over the lazy dog.')\n 'THQCKBRWNFXJMPDVLZYGEUIOA'\n >>> sk.fingerprint('Christopher')\n 'CHRSTPIOE'\n >>> sk.fingerprint('Niall')\n 'NLIA'"} {"_id": "q_8635", "text": "Calculate the pairwise similarity statistics of a collection of strings.\n\n Calculate pairwise similarities among members of two collections,\n returning the maximum, minimum, mean (according to a supplied function,\n arithmetic mean, by default), and (population) standard deviation\n of those similarities.\n\n Parameters\n ----------\n src_collection : list\n A collection of terms or a string that can be split\n tar_collection : list\n A collection of terms or a string that can be split\n metric : function\n A similarity metric function\n mean_func : function\n A mean function that takes a list of values and returns a float\n symmetric : bool\n Set to True if all pairwise similarities should be calculated in both\n directions\n\n Returns\n -------\n tuple\n The max, min, mean, and standard deviation of similarities\n\n Raises\n ------\n ValueError\n mean_func must be a 
function\n ValueError\n metric must be a function\n ValueError\n src_collection is neither a string nor iterable\n ValueError\n tar_collection is neither a string nor iterable\n\n Example\n -------\n >>> tuple(round(_, 12) for _ in pairwise_similarity_statistics(\n ... ['Christopher', 'Kristof', 'Christobal'], ['Niall', 'Neal', 'Neil']))\n (0.2, 0.0, 0.118614718615, 0.075070477184)"} {"_id": "q_8636", "text": "Return the R2 region, as defined in the Porter2 specification.\n\n Parameters\n ----------\n term : str\n The term to examine\n r1_prefixes : set\n Prefixes to consider\n\n Returns\n -------\n int\n Length of the R2 region"} {"_id": "q_8637", "text": "Return True iff term ends in a short syllable.\n\n (...according to the Porter2 specification.)\n\n NB: This is akin to the CVC test from the Porter stemmer. The\n description is unfortunately poor/ambiguous.\n\n Parameters\n ----------\n term : str\n The term to examine\n\n Returns\n -------\n bool\n True iff term ends in a short syllable"} {"_id": "q_8638", "text": "Return True iff term is a short word.\n\n (...according to the Porter2 specification.)\n\n Parameters\n ----------\n term : str\n The term to examine\n r1_prefixes : set\n Prefixes to consider\n\n Returns\n -------\n bool\n True iff term is a short word"} {"_id": "q_8639", "text": "Return the eudex phonetic hash of a word.\n\n Parameters\n ----------\n word : str\n The word to transform\n max_length : int\n The length in bits of the code returned (default 8)\n\n Returns\n -------\n int\n The eudex hash\n\n Examples\n --------\n >>> pe = Eudex()\n >>> pe.encode('Colin')\n 432345564238053650\n >>> pe.encode('Christopher')\n 433648490138894409\n >>> pe.encode('Niall')\n 648518346341351840\n >>> pe.encode('Smith')\n 720575940412906756\n >>> pe.encode('Schmidt')\n 720589151732307997"} {"_id": "q_8640", "text": "Return the Q-Grams in src & tar.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : 
str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n skip : int\n The number of characters to skip (only works when src and tar are\n strings)\n\n Returns\n -------\n tuple of Counters\n Q-Grams\n\n Examples\n --------\n >>> pe = _TokenDistance()\n >>> pe._get_qgrams('AT', 'TT', qval=2)\n (QGrams({'$A': 1, 'AT': 1, 'T#': 1}),\n QGrams({'$T': 1, 'TT': 1, 'T#': 1}))"} {"_id": "q_8641", "text": "Return the Levenshtein similarity of two strings.\n\n This is a wrapper of :py:meth:`Levenshtein.sim`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n mode : str\n Specifies a mode for computing the Levenshtein distance:\n\n - ``lev`` (default) computes the ordinary Levenshtein distance, in\n which edits may include inserts, deletes, and substitutions\n - ``osa`` computes the Optimal String Alignment distance, in which\n edits may include inserts, deletes, substitutions, and\n transpositions but substrings may only be edited once\n\n cost : tuple\n A 4-tuple representing the cost of the four possible edits: inserts,\n deletes, substitutions, and transpositions, respectively (by default:\n (1, 1, 1, 1))\n\n Returns\n -------\n float\n The Levenshtein similarity between src & tar\n\n Examples\n --------\n >>> round(sim_levenshtein('cat', 'hat'), 12)\n 0.666666666667\n >>> round(sim_levenshtein('Niall', 'Neil'), 12)\n 0.4\n >>> sim_levenshtein('aluminum', 'Catalan')\n 0.125\n >>> sim_levenshtein('ATCG', 'TAGC')\n 0.25"} {"_id": "q_8642", "text": "Return the omission key.\n\n Parameters\n ----------\n word : str\n The word to transform into its omission key\n\n Returns\n -------\n str\n The omission key\n\n Examples\n --------\n >>> ok = OmissionKey()\n >>> ok.fingerprint('The quick brown fox jumped over the lazy dog.')\n 'JKQXZVWYBFMGPDHCLNTREUIOA'\n >>> ok.fingerprint('Christopher')\n 'PHCTSRIOE'\n >>> ok.fingerprint('Niall')\n 
'LNIA'"} {"_id": "q_8643", "text": "Return the Monge-Elkan distance between two strings.\n\n This is a wrapper for :py:meth:`MongeElkan.dist`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n sim_func : function\n The internal similarity metric to employ\n symmetric : bool\n Return a symmetric similarity measure\n\n Returns\n -------\n float\n Monge-Elkan distance\n\n Examples\n --------\n >>> dist_monge_elkan('cat', 'hat')\n 0.25\n >>> round(dist_monge_elkan('Niall', 'Neil'), 12)\n 0.333333333333\n >>> round(dist_monge_elkan('aluminum', 'Catalan'), 12)\n 0.611111111111\n >>> dist_monge_elkan('ATCG', 'TAGC')\n 0.5"} {"_id": "q_8644", "text": "Return the Phonem code for a word.\n\n Parameters\n ----------\n word : str\n The word to transform\n\n Returns\n -------\n str\n The Phonem value\n\n Examples\n --------\n >>> pe = Phonem()\n >>> pe.encode('Christopher')\n 'CRYSDOVR'\n >>> pe.encode('Niall')\n 'NYAL'\n >>> pe.encode('Smith')\n 'SMYD'\n >>> pe.encode('Schmidt')\n 'CMYD'"} {"_id": "q_8645", "text": "Return CLEF Swedish stem.\n\n Parameters\n ----------\n word : str\n The word to stem\n\n Returns\n -------\n str\n Word stem\n\n Examples\n --------\n >>> clef_swedish('undervisa')\n 'undervis'\n >>> clef_swedish('suspension')\n 'suspensio'\n >>> clef_swedish('visshet')\n 'viss'"} {"_id": "q_8646", "text": "Undouble endings -kk, -dd, and -tt.\n\n Parameters\n ----------\n word : str\n The word to stem\n\n Returns\n -------\n str\n The word with doubled endings undoubled"} {"_id": "q_8647", "text": "Convert IPA to features.\n\n This translates an IPA string of one or more phones to a list of ints\n representing the features of the string.\n\n Parameters\n ----------\n ipa : str\n The IPA representation of a phone or series of phones\n\n Returns\n -------\n list of ints\n A representation of the features of the input string\n\n Examples\n --------\n >>> ipa_to_features('mut')\n [2709662981243185770, 
1825831513894594986, 2783230754502126250]\n >>> ipa_to_features('fon')\n [2781702983095331242, 1825831531074464170, 2711173160463936106]\n >>> ipa_to_features('telz')\n [2783230754502126250, 1826957430176000426, 2693158761954453926,\n 2783230754501863834]"} {"_id": "q_8648", "text": "Get a feature vector.\n\n This returns a list of ints, equal in length to the vector input,\n representing presence/absence/neutrality with respect to a particular\n phonetic feature.\n\n Parameters\n ----------\n vector : list\n A tuple or list of ints representing the phonetic features of a phone\n or series of phones (such as is returned by the ipa_to_features\n function)\n feature : str\n A feature name from the set:\n\n - ``consonantal``\n - ``sonorant``\n - ``syllabic``\n - ``labial``\n - ``round``\n - ``coronal``\n - ``anterior``\n - ``distributed``\n - ``dorsal``\n - ``high``\n - ``low``\n - ``back``\n - ``tense``\n - ``pharyngeal``\n - ``ATR``\n - ``voice``\n - ``spread_glottis``\n - ``constricted_glottis``\n - ``continuant``\n - ``strident``\n - ``lateral``\n - ``delayed_release``\n - ``nasal``\n\n Returns\n -------\n list of ints\n A list indicating presence/absence/neutrality with respect to the\n feature\n\n Raises\n ------\n AttributeError\n feature must be one of ...\n\n Examples\n --------\n >>> tails = ipa_to_features('telz')\n >>> get_feature(tails, 'consonantal')\n [1, -1, 1, 1]\n >>> get_feature(tails, 'sonorant')\n [-1, 1, 1, -1]\n >>> get_feature(tails, 'nasal')\n [-1, -1, -1, -1]\n >>> get_feature(tails, 'coronal')\n [1, -1, 1, 1]"} {"_id": "q_8649", "text": "Compare features.\n\n This returns a number in the range [0, 1] representing a comparison of two\n feature bundles.\n\n If one of the bundles is negative, -1 is returned (for unknown values)\n\n If the bundles are identical, 1 is returned.\n\n If they are inverses of one another, 0 is returned.\n\n Otherwise, a float representing their similarity is returned.\n\n Parameters\n ----------\n feat1 : int\n A 
feature bundle\n feat2 : int\n A feature bundle\n\n Returns\n -------\n float\n A comparison of the feature bundles\n\n Examples\n --------\n >>> cmp_features(ipa_to_features('l')[0], ipa_to_features('l')[0])\n 1.0\n >>> cmp_features(ipa_to_features('l')[0], ipa_to_features('n')[0])\n 0.8709677419354839\n >>> cmp_features(ipa_to_features('l')[0], ipa_to_features('z')[0])\n 0.8709677419354839\n >>> cmp_features(ipa_to_features('l')[0], ipa_to_features('i')[0])\n 0.564516129032258"} {"_id": "q_8650", "text": "Return the length similarity of two strings.\n\n Length similarity is the ratio of the length of the shorter string to\n the longer.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Length similarity\n\n Examples\n --------\n >>> cmp = Length()\n >>> cmp.sim('cat', 'hat')\n 1.0\n >>> cmp.sim('Niall', 'Neil')\n 0.8\n >>> cmp.sim('aluminum', 'Catalan')\n 0.875\n >>> cmp.sim('ATCG', 'TAGC')\n 1.0"} {"_id": "q_8651", "text": "r\"\"\"Return harmonic mean.\n\n The harmonic mean is defined as:\n :math:`\\frac{|nums|}{\\sum\\limits_{i}\\frac{1}{nums_i}}`\n\n Following the behavior of Wolfram|Alpha:\n - If one of the values in nums is 0, return 0.\n - If more than one value in nums is 0, return NaN.\n\n Cf. 
https://en.wikipedia.org/wiki/Harmonic_mean\n\n Parameters\n ----------\n nums : list\n A series of numbers\n\n Returns\n -------\n float\n The harmonic mean of nums\n\n Raises\n ------\n AttributeError\n hmean requires at least one value\n\n Examples\n --------\n >>> hmean([1, 2, 3, 4])\n 1.9200000000000004\n >>> hmean([1, 2])\n 1.3333333333333333\n >>> hmean([0, 5, 1000])\n 0"} {"_id": "q_8652", "text": "r\"\"\"Return Seiffert's mean.\n\n Seiffert's mean of two numbers x and y is:\n :math:`\\frac{x - y}{4 \\cdot arctan \\sqrt{\\frac{x}{y}} - \\pi}`\n\n It is defined in :cite:`Seiffert:1993`.\n\n Parameters\n ----------\n nums : list\n A series of numbers\n\n Returns\n -------\n float\n Sieffert's mean of nums\n\n Raises\n ------\n AttributeError\n seiffert_mean supports no more than two values\n\n Examples\n --------\n >>> seiffert_mean([1, 2])\n 1.4712939827611637\n >>> seiffert_mean([1, 0])\n 0.3183098861837907\n >>> seiffert_mean([2, 4])\n 2.9425879655223275\n >>> seiffert_mean([2, 1000])\n 336.84053300118825"} {"_id": "q_8653", "text": "r\"\"\"Return Lehmer mean.\n\n The Lehmer mean is:\n :math:`\\frac{\\sum\\limits_i{x_i^p}}{\\sum\\limits_i{x_i^(p-1)}}`\n\n Cf. https://en.wikipedia.org/wiki/Lehmer_mean\n\n Parameters\n ----------\n nums : list\n A series of numbers\n exp : numeric\n The exponent of the Lehmer mean\n\n Returns\n -------\n float\n The Lehmer mean of nums for the given exponent\n\n Examples\n --------\n >>> lehmer_mean([1, 2, 3, 4])\n 3.0\n >>> lehmer_mean([1, 2])\n 1.6666666666666667\n >>> lehmer_mean([0, 5, 1000])\n 995.0497512437811"} {"_id": "q_8654", "text": "Return geometric-harmonic mean.\n\n Iterates between geometric & harmonic means until they converge to\n a single value (rounded to 12 digits).\n\n Cf. 
https://en.wikipedia.org/wiki/Geometric-harmonic_mean\n\n Parameters\n ----------\n nums : list\n A series of numbers\n\n Returns\n -------\n float\n The geometric-harmonic mean of nums\n\n Examples\n --------\n >>> ghmean([1, 2, 3, 4])\n 2.058868154613003\n >>> ghmean([1, 2])\n 1.3728805006183502\n >>> ghmean([0, 5, 1000])\n 0.0\n\n >>> ghmean([0, 0])\n 0.0\n >>> ghmean([0, 0, 5])\n nan"} {"_id": "q_8655", "text": "Return arithmetic-geometric-harmonic mean.\n\n Iterates over arithmetic, geometric, & harmonic means until they\n converge to a single value (rounded to 12 digits), following the\n method described in :cite:`Raissouli:2009`.\n\n Parameters\n ----------\n nums : list\n A series of numbers\n\n Returns\n -------\n float\n The arithmetic-geometric-harmonic mean of nums\n\n Examples\n --------\n >>> aghmean([1, 2, 3, 4])\n 2.198327159900212\n >>> aghmean([1, 2])\n 1.4142135623731884\n >>> aghmean([0, 5, 1000])\n 335.0"} {"_id": "q_8656", "text": "Return a word with punctuation stripped out.\n\n Parameters\n ----------\n word : str\n A word to strip punctuation from\n\n Returns\n -------\n str\n The word stripped of punctuation\n\n Examples\n --------\n >>> pe = Synoname()\n >>> pe._synoname_strip_punct('AB;CD EF-GH$IJ')\n 'ABCD EFGHIJ'"} {"_id": "q_8657", "text": "Return the normalized Synoname distance between two words.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n word_approx_min : float\n The minimum word approximation value to signal a 'word_approx'\n match\n char_approx_min : float\n The minimum character approximation value to signal a 'char_approx'\n match\n tests : int or Iterable\n Either an integer indicating tests to perform or a list of test\n names to perform (defaults to performing all tests)\n\n Returns\n -------\n float\n Normalized Synoname distance"} {"_id": "q_8658", "text": "Return the NCD between two strings using bzip2 compression.\n\n Parameters\n ----------\n src : 
str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Compression distance\n\n Examples\n --------\n >>> cmp = NCDbz2()\n >>> cmp.dist('cat', 'hat')\n 0.06666666666666667\n >>> cmp.dist('Niall', 'Neil')\n 0.03125\n >>> cmp.dist('aluminum', 'Catalan')\n 0.17647058823529413\n >>> cmp.dist('ATCG', 'TAGC')\n 0.03125"} {"_id": "q_8659", "text": "Return the MetaSoundex code for a word.\n\n Parameters\n ----------\n word : str\n The word to transform\n lang : str\n Either ``en`` for English or ``es`` for Spanish\n\n Returns\n -------\n str\n The MetaSoundex code\n\n Examples\n --------\n >>> pe = MetaSoundex()\n >>> pe.encode('Smith')\n '4500'\n >>> pe.encode('Waters')\n '7362'\n >>> pe.encode('James')\n '1520'\n >>> pe.encode('Schmidt')\n '4530'\n >>> pe.encode('Ashcroft')\n '0261'\n >>> pe.encode('Perez', lang='es')\n '094'\n >>> pe.encode('Martinez', lang='es')\n '69364'\n >>> pe.encode('Gutierrez', lang='es')\n '83994'\n >>> pe.encode('Santiago', lang='es')\n '4638'\n >>> pe.encode('Nicol\u00e1s', lang='es')\n '6754'"} {"_id": "q_8660", "text": "Return the Ratcliff-Obershelp similarity of two strings.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Ratcliff-Obershelp similarity\n\n Examples\n --------\n >>> cmp = RatcliffObershelp()\n >>> round(cmp.sim('cat', 'hat'), 12)\n 0.666666666667\n >>> round(cmp.sim('Niall', 'Neil'), 12)\n 0.666666666667\n >>> round(cmp.sim('aluminum', 'Catalan'), 12)\n 0.4\n >>> cmp.sim('ATCG', 'TAGC')\n 0.5"} {"_id": "q_8661", "text": "Return the Parmar-Kumbharana encoding of a word.\n\n Parameters\n ----------\n word : str\n The word to transform\n\n Returns\n -------\n str\n The Parmar-Kumbharana encoding\n\n Examples\n --------\n >>> pe = ParmarKumbharana()\n >>> pe.encode('Gough')\n 'GF'\n >>> pe.encode('pneuma')\n 'NM'\n >>> pe.encode('knight')\n 'NT'\n >>> pe.encode('trice')\n 'TRS'\n 
>>> pe.encode('judge')\n 'JJ'"} {"_id": "q_8662", "text": "Calculate the Hamming distance between the Eudex hashes of two terms.\n\n This is a wrapper for :py:meth:`Eudex.eudex_hamming`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n weights : str, iterable, or generator function\n The weights or weights generator function\n max_length : int\n The number of characters to encode as a eudex hash\n normalized : bool\n Normalizes to [0, 1] if True\n\n Returns\n -------\n int\n The Eudex Hamming distance\n\n Examples\n --------\n >>> eudex_hamming('cat', 'hat')\n 128\n >>> eudex_hamming('Niall', 'Neil')\n 2\n >>> eudex_hamming('Colin', 'Cuilen')\n 10\n >>> eudex_hamming('ATCG', 'TAGC')\n 403\n\n >>> eudex_hamming('cat', 'hat', weights='fibonacci')\n 34\n >>> eudex_hamming('Niall', 'Neil', weights='fibonacci')\n 2\n >>> eudex_hamming('Colin', 'Cuilen', weights='fibonacci')\n 7\n >>> eudex_hamming('ATCG', 'TAGC', weights='fibonacci')\n 117\n\n >>> eudex_hamming('cat', 'hat', weights=None)\n 1\n >>> eudex_hamming('Niall', 'Neil', weights=None)\n 1\n >>> eudex_hamming('Colin', 'Cuilen', weights=None)\n 2\n >>> eudex_hamming('ATCG', 'TAGC', weights=None)\n 9\n\n >>> # Using the OEIS A000142:\n >>> eudex_hamming('cat', 'hat', [1, 1, 2, 6, 24, 120, 720, 5040])\n 1\n >>> eudex_hamming('Niall', 'Neil', [1, 1, 2, 6, 24, 120, 720, 5040])\n 720\n >>> eudex_hamming('Colin', 'Cuilen', [1, 1, 2, 6, 24, 120, 720, 5040])\n 744\n >>> eudex_hamming('ATCG', 'TAGC', [1, 1, 2, 6, 24, 120, 720, 5040])\n 6243"} {"_id": "q_8663", "text": "Return normalized Hamming distance between Eudex hashes of two terms.\n\n This is a wrapper for :py:meth:`Eudex.dist`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n weights : str, iterable, or generator function\n The weights or weights generator function\n max_length : int\n The number of characters to encode as a eudex 
hash\n\n Returns\n -------\n int\n The normalized Eudex Hamming distance\n\n Examples\n --------\n >>> round(dist_eudex('cat', 'hat'), 12)\n 0.062745098039\n >>> round(dist_eudex('Niall', 'Neil'), 12)\n 0.000980392157\n >>> round(dist_eudex('Colin', 'Cuilen'), 12)\n 0.004901960784\n >>> round(dist_eudex('ATCG', 'TAGC'), 12)\n 0.197549019608"} {"_id": "q_8664", "text": "Yield the next Fibonacci number.\n\n Based on https://www.python-course.eu/generators.php\n Starts at Fibonacci number 3 (the second 1)\n\n Yields\n ------\n int\n The next Fibonacci number"} {"_id": "q_8665", "text": "Return the Euclidean distance between two strings.\n\n This is a wrapper for :py:meth:`Euclidean.dist_abs`.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n normalized : bool\n Normalizes to [0, 1] if True\n alphabet : collection or int\n The values or size of the alphabet\n\n Returns\n -------\n float: The Euclidean distance\n\n Examples\n --------\n >>> euclidean('cat', 'hat')\n 2.0\n >>> round(euclidean('Niall', 'Neil'), 12)\n 2.645751311065\n >>> euclidean('Colin', 'Cuilen')\n 3.0\n >>> round(euclidean('ATCG', 'TAGC'), 12)\n 3.162277660168"} {"_id": "q_8666", "text": "Return the normalized Euclidean distance between two strings.\n\n This is a wrapper for :py:meth:`Euclidean.dist`.\n\n Parameters\n ----------\n src : str\n Source string (or QGrams/Counter objects) for comparison\n tar : str\n Target string (or QGrams/Counter objects) for comparison\n qval : int\n The length of each q-gram; 0 for non-q-gram version\n alphabet : collection or int\n The values or size of the alphabet\n\n Returns\n -------\n float\n The normalized Euclidean distance\n\n Examples\n --------\n >>> round(dist_euclidean('cat', 'hat'), 12)\n 0.57735026919\n >>> round(dist_euclidean('Niall', 'Neil'), 12)\n 0.683130051064\n 
>>> round(dist_euclidean('Colin', 'Cuilen'), 12)\n 0.727606875109\n >>> dist_euclidean('ATCG', 'TAGC')\n 1.0"} {"_id": "q_8667", "text": "Return Lovins' condition N.\n\n Parameters\n ----------\n word : str\n Word to check\n suffix_len : int\n Suffix length\n\n Returns\n -------\n bool\n True if condition is met"} {"_id": "q_8668", "text": "Return Lovins' condition S.\n\n Parameters\n ----------\n word : str\n Word to check\n suffix_len : int\n Suffix length\n\n Returns\n -------\n bool\n True if condition is met"} {"_id": "q_8669", "text": "Return Lovins' condition X.\n\n Parameters\n ----------\n word : str\n Word to check\n suffix_len : int\n Suffix length\n\n Returns\n -------\n bool\n True if condition is met"} {"_id": "q_8670", "text": "Return Lovins' condition BB.\n\n Parameters\n ----------\n word : str\n Word to check\n suffix_len : int\n Suffix length\n\n Returns\n -------\n bool\n True if condition is met"} {"_id": "q_8671", "text": "Return Lovins stem.\n\n Parameters\n ----------\n word : str\n The word to stem\n\n Returns\n -------\n str\n Word stem\n\n Examples\n --------\n >>> stmr = Lovins()\n >>> stmr.stem('reading')\n 'read'\n >>> stmr.stem('suspension')\n 'suspens'\n >>> stmr.stem('elusiveness')\n 'elus'"} {"_id": "q_8672", "text": "Return the NCD between two strings using zlib compression.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Compression distance\n\n Examples\n --------\n >>> cmp = NCDzlib()\n >>> cmp.dist('cat', 'hat')\n 0.3333333333333333\n >>> cmp.dist('Niall', 'Neil')\n 0.45454545454545453\n >>> cmp.dist('aluminum', 'Catalan')\n 0.5714285714285714\n >>> cmp.dist('ATCG', 'TAGC')\n 0.4"} {"_id": "q_8673", "text": "Return Pylint badge color.\n\n Parameters\n ----------\n score : float\n A Pylint score\n\n Returns\n -------\n str\n Badge color"} {"_id": "q_8674", "text": "Return pydocstyle badge color.\n\n Parameters\n ----------\n score 
: float\n A pydocstyle score\n\n Returns\n -------\n str\n Badge color"} {"_id": "q_8675", "text": "Return the bag distance between two strings.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n int\n Bag distance\n\n Examples\n --------\n >>> cmp = Bag()\n >>> cmp.dist_abs('cat', 'hat')\n 1\n >>> cmp.dist_abs('Niall', 'Neil')\n 2\n >>> cmp.dist_abs('aluminum', 'Catalan')\n 5\n >>> cmp.dist_abs('ATCG', 'TAGC')\n 0\n >>> cmp.dist_abs('abcdefg', 'hijklm')\n 7\n >>> cmp.dist_abs('abcdefg', 'hijklmno')\n 8"} {"_id": "q_8676", "text": "Return the normalized bag distance between two strings.\n\n Bag distance is normalized by dividing by :math:`max( |src|, |tar| )`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n\n Returns\n -------\n float\n Normalized bag distance\n\n Examples\n --------\n >>> cmp = Bag()\n >>> cmp.dist('cat', 'hat')\n 0.3333333333333333\n >>> cmp.dist('Niall', 'Neil')\n 0.4\n >>> cmp.dist('aluminum', 'Catalan')\n 0.625\n >>> cmp.dist('ATCG', 'TAGC')\n 0.0"} {"_id": "q_8677", "text": "Return the MLIPNS distance between two strings.\n\n This is a wrapper for :py:meth:`MLIPNS.dist`.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n threshold : float\n A number [0, 1] indicating the maximum similarity score, below which\n the strings are considered 'similar' (0.25 by default)\n max_mismatches : int\n A number indicating the allowable number of mismatches to remove before\n declaring two strings not similar (2 by default)\n\n Returns\n -------\n float\n MLIPNS distance\n\n Examples\n --------\n >>> dist_mlipns('cat', 'hat')\n 0.0\n >>> dist_mlipns('Niall', 'Neil')\n 1.0\n >>> dist_mlipns('aluminum', 'Catalan')\n 1.0\n >>> dist_mlipns('ATCG', 'TAGC')\n 1.0"} {"_id": "q_8678", "text": "Return a similarity of two strings.\n\n This is a 
generalized function for calling other similarity functions.\n\n Parameters\n ----------\n src : str\n Source string for comparison\n tar : str\n Target string for comparison\n method : function\n Specifies the similarity metric (:py:func:`sim_levenshtein` by default)\n\n Returns\n -------\n float\n Similarity according to the specified function\n\n Raises\n ------\n AttributeError\n Unknown distance function\n\n Examples\n --------\n >>> round(sim('cat', 'hat'), 12)\n 0.666666666667\n >>> round(sim('Niall', 'Neil'), 12)\n 0.4\n >>> sim('aluminum', 'Catalan')\n 0.125\n >>> sim('ATCG', 'TAGC')\n 0.25"} {"_id": "q_8679", "text": "Return Porter helper function _m_degree value.\n\n m-degree is equal to the number of V to C transitions\n\n Parameters\n ----------\n term : str\n The word for which to calculate the m-degree\n\n Returns\n -------\n int\n The m-degree as defined in the Porter stemmer definition"} {"_id": "q_8680", "text": "Return Porter helper function _has_vowel value.\n\n Parameters\n ----------\n term : str\n The word to scan for vowels\n\n Returns\n -------\n bool\n True iff a vowel exists in the term (as defined in the Porter\n stemmer definition)"} {"_id": "q_8681", "text": "Return Porter helper function _ends_in_doubled_cons value.\n\n Parameters\n ----------\n term : str\n The word to check for a final doubled consonant\n\n Returns\n -------\n bool\n True iff the stem ends in a doubled consonant (as defined in the\n Porter stemmer definition)"} {"_id": "q_8682", "text": "Return Porter helper function _ends_in_cvc value.\n\n Parameters\n ----------\n term : str\n The word to scan for cvc\n\n Returns\n -------\n bool\n True iff the stem ends in cvc (as defined in the Porter stemmer\n definition)"} {"_id": "q_8683", "text": "Symmetrical logarithmic scale.\n\n Optional arguments:\n\n *base*:\n The base of the logarithm."} {"_id": "q_8684", "text": "Show usage and available curve functions."} {"_id": "q_8685", "text": "Get the current terminal size."} 
{"_id": "q_8686", "text": "Return the escape sequence for the selected Control Sequence."} {"_id": "q_8687", "text": "Return a value wrapped in the selected CSI and does a reset."} {"_id": "q_8688", "text": "Read points from istream and output to ostream."} {"_id": "q_8689", "text": "Consume data from a line."} {"_id": "q_8690", "text": "Add a set of data points."} {"_id": "q_8691", "text": "Generate a color ramp for the current screen height."} {"_id": "q_8692", "text": "Run the filter function on the provided points."} {"_id": "q_8693", "text": "Resolve the points to make a line between two points."} {"_id": "q_8694", "text": "Set a text value in the screen canvas."} {"_id": "q_8695", "text": "Normalised data points using numpy."} {"_id": "q_8696", "text": "Loads the content of the text file"} {"_id": "q_8697", "text": "translate the incoming symbol into locally-used"} {"_id": "q_8698", "text": "Loads all symbol maps from db"} {"_id": "q_8699", "text": "Add individual price"} {"_id": "q_8700", "text": "Import prices from CSV file"} {"_id": "q_8701", "text": "displays last price, for symbol if provided"} {"_id": "q_8702", "text": "Display all prices"} {"_id": "q_8703", "text": "Download the latest prices"} {"_id": "q_8704", "text": "Return the default session. 
The path is read from the default config."} {"_id": "q_8705", "text": "Creates a symbol mapping"} {"_id": "q_8706", "text": "Displays all symbol maps"} {"_id": "q_8707", "text": "Finds the map by in-symbol"} {"_id": "q_8708", "text": "Read text lines from a file"} {"_id": "q_8709", "text": "Parse into the Price entity, ready for saving"} {"_id": "q_8710", "text": "Read the config file"} {"_id": "q_8711", "text": "gets the default config path from resources"} {"_id": "q_8712", "text": "Copy the config template into user's directory"} {"_id": "q_8713", "text": "Returns the path where the active config file is expected.\n This is the user's profile folder."} {"_id": "q_8714", "text": "Sets a value in config"} {"_id": "q_8715", "text": "Retrieves a config value"} {"_id": "q_8716", "text": "Splits the symbol into namespace, symbol tuple"} {"_id": "q_8717", "text": "Returns the current db session"} {"_id": "q_8718", "text": "Fetches all the prices for the given arguments"} {"_id": "q_8719", "text": "Returns the latest price on the date"} {"_id": "q_8720", "text": "Prune historical prices for all symbols, leaving only the latest.\n Returns the number of items removed."} {"_id": "q_8721", "text": "Delete all but the latest available price for the given symbol.\n Returns the number of items removed."} {"_id": "q_8722", "text": "Downloads and parses the price"} {"_id": "q_8723", "text": "Fetches the securities that match the given filters"} {"_id": "q_8724", "text": "Return partial of original function call"} {"_id": "q_8725", "text": "Verify that a part that is zoomed in on has equal length.\n\n Typically used in the context of ``check_function_def()``\n\n Arguments:\n name (str): name of the part for which to check the length to the corresponding part in the solution.\n unequal_msg (str): Message in case the lengths do not match.\n state (State): state as passed by the SCT chain. 
Don't specify this explicitly.\n\n :Examples:\n\n Student and solution code::\n\n def shout(word):\n return word + '!!!'\n\n SCT that checks number of arguments::\n\n Ex().check_function_def('shout').has_equal_part_len('args', 'not enough args!')"} {"_id": "q_8726", "text": "Checks whether student imported a package or function correctly.\n\n Python features many ways to import packages.\n All of these different methods revolve around the ``import``, ``from`` and ``as`` keywords.\n ``has_import()`` provides a robust way to check whether a student correctly imported a certain package.\n\n By default, ``has_import()`` allows for different ways of aliasing the imported package or function.\n If you want to make sure the correct alias was used to refer to the package or function that was imported,\n set ``same_as=True``.\n\n Args:\n name (str): the name of the package that has to be checked.\n same_as (bool): if True, the alias of the package or function has to be the same. Defaults to False.\n not_imported_msg (str): feedback message when the package is not imported.\n incorrect_as_msg (str): feedback message if the alias is wrong.\n\n :Example:\n\n Example 1, where aliases don't matter (defaut): ::\n\n # solution\n import matplotlib.pyplot as plt\n\n # sct\n Ex().has_import(\"matplotlib.pyplot\")\n\n # passing submissions\n import matplotlib.pyplot as plt\n from matplotlib import pyplot as plt\n import matplotlib.pyplot as pltttt\n\n # failing submissions\n import matplotlib as mpl\n\n Example 2, where the SCT is coded so aliases do matter: ::\n\n # solution\n import matplotlib.pyplot as plt\n\n # sct\n Ex().has_import(\"matplotlib.pyplot\", same_as=True)\n\n # passing submissions\n import matplotlib.pyplot as plt\n from matplotlib import pyplot as plt\n\n # failing submissions\n import matplotlib.pyplot as pltttt"} {"_id": "q_8727", "text": "Search student output for a pattern.\n\n Among the student and solution process, the student submission and solution code as a 
string,\n the ``Ex()`` state also contains the output that a student generated with his or her submission.\n\n With ``has_output()``, you can access this output and match it against a regular or fixed expression.\n\n Args:\n text (str): the text that is searched for\n pattern (bool): if True (default), the text is treated as a pattern. If False, it is treated as plain text.\n no_output_msg (str): feedback message to be displayed if the output is not found.\n\n :Example:\n\n As an example, suppose we want a student to print out a sentence: ::\n\n # Print the \"This is some ... stuff\"\n print(\"This is some weird stuff\")\n\n The following SCT tests whether the student prints out ``This is some weird stuff``: ::\n\n # Using exact string matching\n Ex().has_output(\"This is some weird stuff\", pattern = False)\n\n # Using a regular expression (more robust)\n # pattern = True is the default\n msg = \"Print out ``This is some ... stuff`` to the output, \" + \\\\\n \"fill in ``...`` with a word you like.\"\n Ex().has_output(r\"This is some \\w* stuff\", no_output_msg = msg)"} {"_id": "q_8728", "text": "Check if the right printouts happened.\n\n ``has_printout()`` will look for the printout in the solution code that you specified with ``index`` (0 in this case), rerun the ``print()`` call in\n the solution process, capture its output, and verify whether the output is present in the output of the student.\n\n This is more robust as ``Ex().check_function('print')`` initiated chains as students can use as many\n printouts as they want, as long as they do the correct one somewhere.\n\n Args:\n index (int): index of the ``print()`` call in the solution whose output you want to search for in the student output.\n not_printed_msg (str): if specified, this overrides the default message that is generated when the output\n is not found in the student output.\n pre_code (str): Python code as a string that is executed before running the targeted student call.\n This is the ideal 
place to set a random seed, for example.\n copy (bool): whether to try to deep copy objects in the environment, such as lists, that could\n accidentally be mutated. Disabled by default, which speeds up SCTs.\n state (State): state as passed by the SCT chain. Don't specify this explicitly.\n\n :Example:\n\n Suppose you want somebody to print out 4: ::\n\n print(1, 2, 3, 4)\n\n The following SCT would check that: ::\n\n Ex().has_printout(0)\n\n All of the following SCTs would pass: ::\n\n print(1, 2, 3, 4)\n print('1 2 3 4')\n print(1, 2, '3 4')\n print(\"random\"); print(1, 2, 3, 4)\n\n :Example:\n\n Watch out: ``has_printout()`` will effectively **rerun** the ``print()`` call in the solution process after the entire solution script was executed.\n If your solution script updates the value of `x` after executing it, ``has_printout()`` will not work.\n\n Suppose you have the following solution: ::\n\n x = 4\n print(x)\n x = 6\n\n The following SCT will not work: ::\n\n Ex().has_printout(0)\n\n Why? When the ``print(x)`` call is executed, the value of ``x`` will be 6, and pythonwhat will look for the output `'6`' in the output the student generated.\n In cases like these, ``has_printout()`` cannot be used.\n\n :Example:\n\n Inside a for loop ``has_printout()``\n\n Suppose you have the following solution: ::\n\n for i in range(5):\n print(i)\n\n The following SCT will not work: ::\n\n Ex().check_for_loop().check_body().has_printout(0)\n\n The reason is that ``has_printout()`` can only be called from the root state. ``Ex()``.\n If you want to check printouts done in e.g. a for loop, you have to use a `check_function('print')` chain instead: ::\n\n Ex().check_for_loop().check_body().\\\\\n set_context(0).check_function('print').\\\\\n check_args(0).has_equal_value()"} {"_id": "q_8729", "text": "Test multiple choice exercise.\n\n Test for a MultipleChoiceExercise. 
The correct answer (as an integer) and feedback messages\n are passed to this function.\n\n Args:\n correct (int): the index of the correct answer (should be an instruction). Starts at 1.\n msgs (list(str)): a list containing all feedback messages belonging to each choice of the\n student. The list should have the same length as the number of options."} {"_id": "q_8730", "text": "Get a value from process, return tuple of value, res if succesful"} {"_id": "q_8731", "text": "Override the solution code with something arbitrary.\n\n There might be cases in which you want to temporarily override the solution code\n so you can allow for alternative ways of solving an exercise.\n When you use ``override()`` in an SCT chain, the remainder of that SCT chain will\n run as if the solution code you specified is the only code that was in the solution.\n\n Check the glossary for an example (pandas plotting)\n\n Args:\n solution: solution code as a string that overrides the original solution code.\n state: State instance describing student and solution code. 
Can be omitted if used with Ex()."} {"_id": "q_8732", "text": "Check whether an object is an instance of a certain class.\n\n ``is_instance()`` can currently only be used when chained from ``check_object()``, the function that is\n used to 'zoom in' on the object of interest.\n\n Args:\n inst (class): The class that the object should have.\n not_instance_msg (str): When specified, this overrides the automatically generated message in case\n the object does not have the expected class.\n state (State): The state that is passed in through the SCT chain (don't specify this).\n\n :Example:\n\n Student code and solution code::\n\n import numpy as np\n arr = np.array([1, 2, 3, 4, 5])\n\n SCT::\n\n # Verify the class of arr\n import numpy\n Ex().check_object('arr').is_instance(numpy.ndarray)"} {"_id": "q_8733", "text": "Return copy of instance, omitting entries that are EMPTY"} {"_id": "q_8734", "text": "getter for Parser outputs"} {"_id": "q_8735", "text": "When dispatched on loops, has_context the target vars are the attribute _target_vars.\n\n Note: This is to allow people to call has_context on a node (e.g. for_loop) rather than\n one of its attributes (e.g. body). 
Purely for convenience."} {"_id": "q_8736", "text": "Return child state with name part as its ast tree"} {"_id": "q_8737", "text": "Return child state with indexed name part as its ast tree.\n\n ``index`` can be:\n\n - an integer, in which case the student/solution_parts are indexed by position.\n - a string, in which case the student/solution_parts are expected to be a dictionary.\n - a list of indices (which can be integer or string), in which case the student parts are indexed step by step."} {"_id": "q_8738", "text": "Return the true anomaly at each time"} {"_id": "q_8739", "text": "Loads the class from the class_path string"} {"_id": "q_8740", "text": "process pagination requests from request parameter"} {"_id": "q_8741", "text": "Create separate dictionary of supported filter values provided"} {"_id": "q_8742", "text": "Search for courses\n\n Args:\n request (required) - django request object\n\n Returns:\n http json response with the following fields\n \"took\" - how many seconds the operation took\n \"total\" - how many results were found\n \"max_score\" - maximum score from these resutls\n \"results\" - json array of result documents\n\n or\n\n \"error\" - displayable information about an error that occured on the server\n\n POST Params:\n \"search_string\" (optional) - text with which to search for courses\n \"page_size\" (optional)- how many results to return per page (defaults to 20, with maximum cutoff at 100)\n \"page_index\" (optional) - for which page (zero-indexed) to include results (defaults to 0)"} {"_id": "q_8743", "text": "Return field to apply into filter, if an array then use a range, otherwise look for a term match"} {"_id": "q_8744", "text": "We have a field_dictionary - we want to match the values for an elasticsearch \"match\" query\n This is only potentially useful when trying to tune certain search operations"} {"_id": "q_8745", "text": "We have a filter_dictionary - this means that if the field is included\n and matches, then we can 
include, OR if the field is undefined, then we\n assume it is safe to include"} {"_id": "q_8746", "text": "We have a list of terms with which we return facets"} {"_id": "q_8747", "text": "fetch mapped-items structure from cache"} {"_id": "q_8748", "text": "Logs indexing errors and raises a general ElasticSearch Exception"} {"_id": "q_8749", "text": "Interfaces with the elasticsearch mappings for the index\n prevents multiple loading of the same mappings from ES when called more than once\n\n Mappings format in elasticsearch is as follows:\n {\n \"doc_type\": {\n \"properties\": {\n \"nested_property\": {\n \"properties\": {\n \"an_analysed_property\": {\n \"type\": \"string\"\n },\n \"another_analysed_property\": {\n \"type\": \"string\"\n }\n }\n },\n \"a_not_analysed_property\": {\n \"type\": \"string\",\n \"index\": \"not_analyzed\"\n },\n \"a_date_property\": {\n \"type\": \"date\"\n }\n }\n }\n }\n\n We cache the properties of each doc_type, if they are not available, we'll load them again from Elasticsearch"} {"_id": "q_8750", "text": "Implements call to add documents to the ES index\n Note the call to _check_mappings which will setup fields with the desired mappings"} {"_id": "q_8751", "text": "Implements call to search the index for the desired content.\n\n Args:\n query_string (str): the string of values upon which to search within the\n content of the objects within the index\n\n field_dictionary (dict): dictionary of values which _must_ exist and\n _must_ match in order for the documents to be included in the results\n\n filter_dictionary (dict): dictionary of values which _must_ match if the\n field exists in order for the documents to be included in the results;\n documents for which the field does not exist may be included in the\n results if they are not otherwise filtered out\n\n exclude_dictionary(dict): dictionary of values all of which which must\n not match in order for the documents to be included in the results;\n documents which have any of 
these fields and for which the value matches\n one of the specified values shall be filtered out of the result set\n\n facet_terms (dict): dictionary of terms to include within search\n facets list - key is the term desired to facet upon, and the value is a\n dictionary of extended information to include. Supported right now is a\n size specification for a cap upon how many facet results to return (can\n be an empty dictionary to use default size for underlying engine):\n\n e.g.\n {\n \"org\": {\"size\": 10}, # only show top 10 organizations\n \"modes\": {}\n }\n\n use_field_match (bool): flag to indicate whether to use elastic\n filtering or elastic matching for field matches - this is nothing but a\n potential performance tune for certain queries\n\n (deprecated) exclude_ids (list): list of id values to exclude from the results -\n useful for finding maches that aren't \"one of these\"\n\n Returns:\n dict object with results in the desired format\n {\n \"took\": 3,\n \"total\": 4,\n \"max_score\": 2.0123,\n \"results\": [\n {\n \"score\": 2.0123,\n \"data\": {\n ...\n }\n },\n {\n \"score\": 0.0983,\n \"data\": {\n ...\n }\n }\n ],\n \"facets\": {\n \"org\": {\n \"total\": total_count,\n \"other\": 1,\n \"terms\": {\n \"MITx\": 25,\n \"HarvardX\": 18\n }\n },\n \"modes\": {\n \"total\": modes_count,\n \"other\": 15,\n \"terms\": {\n \"honor\": 58,\n \"verified\": 44,\n }\n }\n }\n }\n\n Raises:\n ElasticsearchException when there is a problem with the response from elasticsearch\n\n Example usage:\n .search(\n \"find the words within this string\",\n {\n \"must_have_field\": \"mast_have_value for must_have_field\"\n },\n {\n\n }\n )"} {"_id": "q_8752", "text": "Call the search engine with the appropriate parameters"} {"_id": "q_8753", "text": "Course Discovery activities against the search engine index of course details"} {"_id": "q_8754", "text": "Used by default implementation for finding excerpt"} {"_id": "q_8755", "text": "Used by default property excerpt"} 
{"_id": "q_8756", "text": "decorate the matches within the excerpt"} {"_id": "q_8757", "text": "Called during post processing of result\n Any properties defined in your subclass will get exposed as members of the result json from the search"} {"_id": "q_8758", "text": "Called from within search handler. Finds desired subclass and decides if the\n result should be removed and adds properties derived from the result information"} {"_id": "q_8759", "text": "Property to display a useful excerpt representing the matches within the results"} {"_id": "q_8760", "text": "Called from within search handler\n Finds desired subclass and adds filter information based upon user information"} {"_id": "q_8761", "text": "Called from within search handler\n Finds desired subclass and calls initialize method"} {"_id": "q_8762", "text": "Opens data file and for each line, calls _eat_name_line"} {"_id": "q_8763", "text": "Parses one line of data file"} {"_id": "q_8764", "text": "Finds the most popular gender for the given name counting by given counter"} {"_id": "q_8765", "text": "Returns best gender for the given name and country pair"} {"_id": "q_8766", "text": "Executes the suite of TidyPy tools upon the project and returns the\n issues that are found.\n\n :param config: the TidyPy configuration to use\n :type config: dict\n :param path: that path to the project to analyze\n :type path: str\n :param progress:\n the progress reporter object that will receive callbacks during the\n execution of the tool suite. 
If not specified, not progress\n notifications will occur.\n :type progress: tidypy.Progress\n :rtype: tidypy.Collector"} {"_id": "q_8767", "text": "Executes the configured suite of issue reports.\n\n :param config: the TidyPy configuration to use\n :type config: dict\n :param path: that path to the project that was analyzed\n :type path: str\n :param collector: the issues to report\n :type collector: tidypy.Collector"} {"_id": "q_8768", "text": "Determines whether or not the specified file is excluded by the\n project's configuration.\n\n :param path: the path to check\n :type path: pathlib.Path\n :rtype: bool"} {"_id": "q_8769", "text": "Determines whether or not the specified directory is excluded by the\n project's configuration.\n\n :param path: the path to check\n :type path: pathlib.Path\n :rtype: bool"} {"_id": "q_8770", "text": "A generator that produces a sequence of paths to files in the project\n that matches the specified filters.\n\n :param filters:\n the regular expressions to use when finding files in the project.\n If not specified, all files are returned.\n :type filters: list(str)"} {"_id": "q_8771", "text": "A generator that produces a sequence of paths to directories in the\n project that matches the specified filters.\n\n :param filters:\n the regular expressions to use when finding directories in the\n project. 
If not specified, all directories are returned.\n :type filters: list(str)\n :param containing:\n if a directory passes through the specified filters, it is checked\n for the presence of a file that matches one of the regular\n expressions in this parameter.\n :type containing: list(str)"} {"_id": "q_8772", "text": "Adds an issue to the collection.\n\n :param issues: the issue(s) to add\n :type issues: tidypy.Issue or list(tidypy.Issue)"} {"_id": "q_8773", "text": "Returns the number of issues in the collection.\n\n :param include_unclean:\n whether or not to include issues that are being ignored due to\n being a duplicate, excluded, etc.\n :type include_unclean: bool\n :rtype: int"} {"_id": "q_8774", "text": "Retrieves the issues in the collection.\n\n :param sortby: the properties to sort the issues by\n :type sortby: list(str)\n :rtype: list(tidypy.Issue)"} {"_id": "q_8775", "text": "A convenience method for parsing a TOML-serialized configuration.\n\n :param content: a TOML string containing a TidyPy configuration\n :type content: str\n :param is_pyproject:\n whether or not the content is (or resembles) a ``pyproject.toml``\n file, where the TidyPy configuration is located within a key named\n ``tool``.\n :type is_pyproject: bool\n :rtype: dict"} {"_id": "q_8776", "text": "Retrieves the TidyPy tools that are available in the current Python\n environment.\n\n The returned dictionary has keys that are the tool names and values are the\n tool classes.\n\n :rtype: dict"} {"_id": "q_8777", "text": "Retrieves the TidyPy configuration extenders that are available in the\n current Python environment.\n\n The returned dictionary has keys are the extender names and values are the\n extender classes.\n\n :rtype: dict"} {"_id": "q_8778", "text": "Clears out the cache of TidyPy configurations that were retrieved from\n outside the normal locations."} {"_id": "q_8779", "text": "Prints the specified string to ``stderr``.\n\n :param msg: the message to print\n :type msg: str"} 
{"_id": "q_8780", "text": "A context manager that will append the specified paths to Python's\n ``sys.path`` during the execution of the block.\n\n :param paths: the paths to append\n :type paths: list(str)"} {"_id": "q_8781", "text": "Compiles a list of regular expressions.\n\n :param masks: the regular expressions to compile\n :type masks: list(str) or str\n :returns: list(regular expression object)"} {"_id": "q_8782", "text": "Retrieves the AST of the specified file.\n\n This function performs simple caching so that the same file isn't read or\n parsed more than once per process.\n\n :param filepath: the file to parse\n :type filepath: str\n :returns: ast.AST"} {"_id": "q_8783", "text": "Called when an individual tool completes execution.\n\n :param tool: the name of the tool that completed\n :type tool: str"} {"_id": "q_8784", "text": "Execute an x3270 command\n\n `cmdstr` gets sent directly to the x3270 subprocess on its stdin."} {"_id": "q_8785", "text": "Connect to a host"} {"_id": "q_8786", "text": "Wait until the screen is ready, the cursor has been positioned\n on a modifiable field, and the keyboard is unlocked.\n\n Sometimes the server will \"unlock\" the keyboard but the screen will\n not yet be ready. In that case, an attempt to read or write to the\n screen will result in an 'E' keyboard status because we tried to\n read from a screen that is not yet ready.\n\n Using this method tells the client to wait until a field is\n detected and the cursor has been positioned on it."} {"_id": "q_8787", "text": "move the cursor to the given co-ordinates. 
Co-ordinates are 1\n based, as listed in the status area of the terminal."} {"_id": "q_8788", "text": "clears the field at the position given and inserts the string\n `tosend`\n\n tosend: the string to insert\n length: the length of the field\n\n Co-ordinates are 1 based, as listed in the status area of the\n terminal.\n\n raises: FieldTruncateError if `tosend` is longer than\n `length`."} {"_id": "q_8789", "text": "Configures this extension with a given configuration dictionary.\n This allows use of this extension without a flask app.\n\n Args:\n config (dict): A dictionary with configuration keys"} {"_id": "q_8790", "text": "Cleanup after a request. Close any open connections."} {"_id": "q_8791", "text": "An abstracted authentication method. Decides whether to perform a\n direct bind or a search bind based upon the login attribute configured\n in the config.\n\n Args:\n username (str): Username of the user to bind\n password (str): User's password to bind with.\n\n Returns:\n AuthenticationResponse"} {"_id": "q_8792", "text": "Performs a search bind to authenticate a user. This is\n required when the login attribute is not the same\n as the RDN, since we cannot string together their DN on\n the fly; instead we have to find it in the LDAP, then attempt\n to bind with their credentials.\n\n Args:\n username (str): Username of the user to bind (the field specified\n as LDAP_BIND_LOGIN_ATTR)\n password (str): User's password to bind with when we find their dn.\n\n Returns:\n AuthenticationResponse"} {"_id": "q_8793", "text": "Gets a list of groups a user at dn is a member of\n\n Args:\n dn (str): The dn of the user to find memberships for.\n _connection (ldap3.Connection): A connection object to use when\n searching. If not given, a temporary connection will be\n created, and destroyed after use.\n group_search_dn (str): The search dn for groups. 
Defaults to\n ``'{LDAP_GROUP_DN},{LDAP_BASE_DN}'``.\n\n Returns:\n list: A list of LDAP groups the user is a member of."} {"_id": "q_8794", "text": "Gets info about a user specified at dn.\n\n Args:\n dn (str): The dn of the user to find\n _connection (ldap3.Connection): A connection object to use when\n searching. If not given, a temporary connection will be\n created, and destroyed after use.\n\n Returns:\n dict: A dictionary of the user info from LDAP"} {"_id": "q_8795", "text": "Gets info about a user at a specified username by searching the\n Users DN. Username attribute is the same as specified as\n LDAP_USER_LOGIN_ATTR.\n\n\n Args:\n username (str): Username of the user to search for.\n _connection (ldap3.Connection): A connection object to use when\n searching. If not given, a temporary connection will be\n created, and destroyed after use.\n Returns:\n dict: A dictionary of the user info from LDAP"} {"_id": "q_8796", "text": "Gets an object at the specified dn and returns it.\n\n Args:\n dn (str): The dn of the object to find.\n filter (str): The LDAP syntax search filter.\n attributes (list): A list of LDAP attributes to get when searching.\n _connection (ldap3.Connection): A connection object to use when\n searching. If not given, a temporary connection will be created,\n and destroyed after use.\n\n Returns:\n dict: A dictionary of the object info from LDAP"} {"_id": "q_8797", "text": "Convenience property for externally accessing an authenticated\n connection to the server. This connection is automatically\n handled by the appcontext, so you do not have to perform an unbind.\n\n Returns:\n ldap3.Connection: A bound ldap3.Connection\n Raises:\n ldap3.core.exceptions.LDAPException: Since this method is performing\n a bind on behalf of the caller. You should handle this case\n occurring, such as invalid service credentials."} {"_id": "q_8798", "text": "Make a connection.\n\n Args:\n bind_user (str): User to bind with. 
If `None`, AUTH_ANONYMOUS is\n used, otherwise authentication specified with\n config['LDAP_BIND_AUTHENTICATION_TYPE'] is used.\n bind_password (str): Password to bind to the directory with\n contextualise (bool): If true (default), will add this connection to the\n appcontext so it can be unbound upon app_teardown.\n\n Returns:\n ldap3.Connection: An unbound ldap3.Connection. You should handle exceptions\n upon bind if you use this internal method."} {"_id": "q_8799", "text": "Destroys a connection. Removes the connection from the appcontext, and\n unbinds it.\n\n Args:\n connection (ldap3.Connection): The connection to destroy"} {"_id": "q_8800", "text": "query an s3 endpoint for an image based on a string\n\n EXAMPLE QUERIES:\n\n [empty] list all container collections\n vsoch/dinosaur look for containers with name vsoch/dinosaur"} {"_id": "q_8801", "text": "search across labels"} {"_id": "q_8802", "text": "query a GitLab artifacts folder for a list of images. \n If query is None, collections are listed."} {"_id": "q_8803", "text": "a \"show all\" search that doesn't require a query\n the user is shown URLs to"} {"_id": "q_8804", "text": "update headers with a token & other fields"} {"_id": "q_8805", "text": "require secrets ensures that the client has the secrets file, and\n specifically has one or more parameters defined. If params is None,\n only a check is done for the file.\n\n Parameters\n ==========\n params: a list of keys to lookup in the client secrets, eg:\n \n secrets[client_name][params1] should not be in [None,''] or not set"} {"_id": "q_8806", "text": "stream to a temporary file, rename on successful completion\n\n Parameters\n ==========\n file_name: the file name to stream to\n url: the url to stream from\n headers: additional headers to add"} {"_id": "q_8807", "text": "update_token uses HTTP basic authentication to attempt to authenticate\n given a 401 response. 
We take as input previous headers, and update \n them.\n\n Parameters\n ==========\n response: the http request response to parse for the challenge."} {"_id": "q_8808", "text": "attempt to read the detail provided by the response. If none, \n default to using the reason"} {"_id": "q_8809", "text": "given a bucket name and a client that is initialized, get or\n create the bucket."} {"_id": "q_8810", "text": "init_clients will obtain the transfer and access tokens, and then\n use them to create a transfer client."} {"_id": "q_8811", "text": "return logs for a particular container. The logs file is equivalent to\n the name, but with extension .log. If there is no name, the most recent\n log is returned.\n\n Parameters\n ==========\n name: the container name to print logs for."} {"_id": "q_8812", "text": "return a list of logs. We return any file that ends in .log"} {"_id": "q_8813", "text": "create an endpoint folder, catching the error if it exists.\n\n Parameters\n ==========\n endpoint_id: the endpoint id parameters\n folder: the relative path of the folder to create"} {"_id": "q_8814", "text": "return a transfer client for the user"} {"_id": "q_8815", "text": "print the status for all or one of the backends."} {"_id": "q_8816", "text": "add the variable to the config"} {"_id": "q_8817", "text": "remove a variable from the config, if found."} {"_id": "q_8818", "text": "generate a base64 encoded header to ask for a token. 
This means\n base64 encoding a username and password and adding to the\n Authorization header to identify the client.\n\n Parameters\n ==========\n username: the username\n password: the password"} {"_id": "q_8819", "text": "Authorize a client based on encrypting the payload with the client\n secret, timestamp, and other metadata"} {"_id": "q_8820", "text": "head request, typically used for status code retrieval, etc."} {"_id": "q_8821", "text": "paginate_call is a wrapper for get to paginate results"} {"_id": "q_8822", "text": "verify will return a True or False to determine whether to verify the\n requests call or not. If False, we show the user a warning message,\n as this should not be done in production!"} {"_id": "q_8823", "text": "delete an image from Singularity Registry"} {"_id": "q_8824", "text": "get version by way of sregistry.version, returns a \n lookup dictionary with several global variables without\n needing to import singularity"} {"_id": "q_8825", "text": "get requirements, meaning reading in requirements and versions from\n the lookup obtained with get_lookup"} {"_id": "q_8826", "text": "get_singularity_version will determine the singularity version for a\n build: first, an environment variable is looked at, followed by \n using the system version.\n\n Parameters\n ==========\n singularity_version: if not defined, look for in environment. If still\n not found, try finding via executing --version to Singularity. 
Only return\n None if not set in environment or installed."} {"_id": "q_8827", "text": "get_installdir returns the installation directory of the application"} {"_id": "q_8828", "text": "return the robot.png thumbnail from the database folder.\n if the user has exported a different image, use that instead."} {"_id": "q_8829", "text": "run_command uses subprocess to send a command to the terminal.\n\n Parameters\n ==========\n cmd: the command to send, should be a list for subprocess\n error_message: the error message to give to user if fails,\n if none specified, will alert that command failed."} {"_id": "q_8830", "text": "this is a wrapper around the main client.get_metadata to first parse\n a Dropbox FileMetadata into a dictionary, then pass it on to the \n primary get_metadata function.\n\n Parameters\n ==========\n image_file: the full path to the image file that had metadata\n extracted\n metadata: the Dropbox FileMetadata to parse."} {"_id": "q_8831", "text": "print the output to the console for the user. If the user wants the content\n also printed to an output file, do that.\n\n Parameters\n ==========\n response: the response from the builder, with metadata added\n output_file: if defined, write output also to file"} {"_id": "q_8832", "text": "list a specific log for a builder, or the latest log if none provided\n\n Parameters\n ==========\n args: the argparse object to look for a container name\n container_name: a default container name set to be None (show latest log)"} {"_id": "q_8833", "text": "get a listing of collections that the user has access to."} {"_id": "q_8834", "text": "update secrets will look for a user and token in the environment.\n If we find the values, cache and continue. Otherwise, exit with error"} {"_id": "q_8835", "text": "The user is required to have an application secrets file in his\n or her environment. 
The information isn't saved to the secrets\n file, but the client exits with error if the variable isn't found."} {"_id": "q_8836", "text": "get the correct client depending on the driver of interest. The\n selected client can be chosen based on the environment variable\n SREGISTRY_CLIENT, and later changed based on the image uri parsed.\n If there is no preference, the default is to load the singularity \n hub client.\n\n Parameters\n ==========\n image: if provided, we derive the correct client based on the uri\n of an image. If not provided, we default to environment, then hub.\n quiet: if True, suppress most output about the client (e.g. speak)"} {"_id": "q_8837", "text": "get_manifests calls get_manifest for each of the schema versions,\n including v2 and v1. Version 1 includes image layers and metadata,\n and version 2 must be parsed for a specific manifest, and the 2nd\n call includes the layers. If a digest is not provided\n latest is used.\n\n Parameters\n ==========\n repo_name: reference to the /: to obtain\n digest: a tag or shasum version"} {"_id": "q_8838", "text": "get_manifest should return an image manifest\n for a particular repo and tag. The image details\n are extracted when the client is generated.\n\n Parameters\n ==========\n repo_name: reference to the /: to obtain\n digest: a tag or shasum version\n version: one of v1, v2, and config (for image config)"} {"_id": "q_8839", "text": "determine the user preference for atomic download of layers. If\n the user has set a singularity cache directory, honor it. 
Otherwise,\n use the Singularity default."} {"_id": "q_8840", "text": "extract the environment from the manifest, or return None.\n Used by functions env_extract_image, and env_extract_tar"} {"_id": "q_8841", "text": "get all settings, either for a particular client if a name is provided,\n or across clients.\n\n Parameters\n ==========\n client_name: the client name to return settings for (optional)"} {"_id": "q_8842", "text": "a wrapper to get_and_update, but if not successful, will print an\n error and exit."} {"_id": "q_8843", "text": "Just update a setting, doesn't need to be returned."} {"_id": "q_8844", "text": "Authorize a client based on encrypting the payload with the client\n token, which should be matched on the receiving server"} {"_id": "q_8845", "text": "load a particular template based on a name. We look for a name IN data,\n so the query name can be a partial string of the full name.\n\n Parameters\n ==========\n name: the name of a template to look up"} {"_id": "q_8846", "text": "run a build, meaning inserting an instance. Retry if there is failure\n\n Parameters\n ==========\n config: the configuration dictionary generated by setup_build"} {"_id": "q_8847", "text": "return a list of containers, determined by finding the metadata field\n \"type\" with value \"container.\" We alert the user to no containers \n if results is empty, and exit\n\n {'metadata': {'items': \n [\n {'key': 'type', 'value': 'container'}, ... \n ]\n }\n }"} {"_id": "q_8848", "text": "a \"list all\" search that doesn't require a query. Here we return to\n the user all objects that have custom metadata value of \"container\"\n\n IMPORTANT: the upload function adds this metadata. 
For a container to\n be found by the client, it must have the type as container in metadata."} {"_id": "q_8849", "text": "sharing an image means sending a remote share from an image you\n control to a contact, usually an email."} {"_id": "q_8850", "text": "initialize the database, with the default database path or custom of\n\n the format sqlite:////scif/data/expfactory.db\n\n The custom path can be set with the environment variable SREGISTRY_DATABASE\n when a user creates the client, we must initialize this db\n the database should use the .singularity cache folder to cache\n layers and images, and .singularity/sregistry.db as a database"} {"_id": "q_8851", "text": "get default build template."} {"_id": "q_8852", "text": "query will show images determined by the extension of img\n or simg.\n\n Parameters\n ==========\n query: the container name (path) or uri to search for\n args.endpoint: can be an endpoint id and optional path, e.g.:\n\n --endpoint 6881ae2e-db26-11e5-9772-22000b9da45e:.singularity'\n --endpoint 6881ae2e-db26-11e5-9772-22000b9da45e'\n\n if not defined, we show the user endpoints to choose from\n\n Usage\n =====\n If endpoint is defined with a query, then we search the given endpoint\n for a container of interested (designated by ending in .img or .simg\n\n If no endpoint is provided but instead just a query, we use the query\n to search endpoints."} {"_id": "q_8853", "text": "list all endpoints, providing a list of endpoints to the user to\n better filter the search. This function takes no arguments,\n as the user has not provided an endpoint id or query."} {"_id": "q_8854", "text": "An endpoint is required here to list files within. 
Optionally, we can\n take a path relative to the endpoint root.\n\n Parameters\n ==========\n endpoint: a single endpoint ID or an endpoint id and relative path.\n If no path is provided, we use '', which defaults to scratch.\n\n query: if defined, limit files to those that have query match"} {"_id": "q_8855", "text": "share will use the client to get a shareable link for an image of choice.\n the function returns a url to send to a recipient."} {"_id": "q_8856", "text": "for private or protected registries, a client secrets file is required\n to be located at .sregistry. If no secrets are found, we use the default\n of Singularity Hub, and return a dummy secrets."} {"_id": "q_8857", "text": "delete object will delete a file from a bucket\n\n Parameters\n ==========\n storage_service: the service obtained with get_storage_service\n bucket_name: the name of the bucket\n object_name: the \"name\" parameter of the object."} {"_id": "q_8858", "text": "delete an image from Google Storage.\n\n Parameters\n ==========\n name: the name of the file (or image) to delete"} {"_id": "q_8859", "text": "get_subparser will get a dictionary of subparsers, to help with printing help"} {"_id": "q_8860", "text": "Generate a robot name. Inspiration from Haikunator, but much more\n poorly implemented ;)\n\n Parameters\n ==========\n delim: Delimiter\n length: TokenLength\n chars: TokenChars"} {"_id": "q_8861", "text": "get a temporary directory for an operation. If SREGISTRY_TMPDIR\n is set, return that. 
Otherwise, return the output of tempfile.mkdtemp\n\n Parameters\n ==========\n requested_tmpdir: an optional requested temporary directory, first\n priority as is coming from calling function.\n prefix: Given a need for a sandbox (or similar), we will need to \n create a subfolder *within* the SREGISTRY_TMPDIR.\n create: boolean to determine if we should create folder (True)"} {"_id": "q_8862", "text": "extract a tar archive to a specified output folder\n\n Parameters\n ==========\n archive: the archive file to extract\n output_folder: the output folder to extract to\n handle_whiteout: use docker2oci variation to handle whiteout files"} {"_id": "q_8863", "text": "use blob2oci to handle whiteout files for extraction. Credit for this\n script goes to docker2oci by Olivier Freyermouth, and see script\n folder for license.\n\n Parameters\n ==========\n archive: the archive to extract\n output_folder the output folder (sandbox) to extract to"} {"_id": "q_8864", "text": "read_json reads in a json file and returns\n the data structure as dict."} {"_id": "q_8865", "text": "clean up will delete a list of files, only if they exist"} {"_id": "q_8866", "text": "push an image to an S3 endpoint"} {"_id": "q_8867", "text": "get a collection if it exists. 
If it doesn't exist, create it first.\n\n Parameters\n ==========\n name: the collection name, usually parsed from get_image_names()['name']"} {"_id": "q_8868", "text": "get a container, otherwise return None."} {"_id": "q_8869", "text": "List local images in the database, optionally with a query.\n\n Parameters\n ==========\n query: a string to search for in the container or collection name|tag|uri"} {"_id": "q_8870", "text": "Inspect a local image in the database, which typically includes the\n basic fields in the model."} {"_id": "q_8871", "text": "rename performs a move, but ensures the path is maintained in storage\n\n Parameters\n ==========\n image_name: the image name (uri) to rename to.\n path: the name to rename (basename is taken)"} {"_id": "q_8872", "text": "Move an image from its current location to a new path.\n Removing the image from organized storage is not the recommended approach;\n however, it is still a function wanted by some.\n\n Parameters\n ==========\n image_name: the parsed image name.\n path: the location to move the image to"} {"_id": "q_8873", "text": "Remove an image from the database and filesystem."} {"_id": "q_8874", "text": "get or create a container, including the collection to add it to.\n This function can be used from a file on the local system, or via a URL\n that has been downloaded. Either way, if one of url, version, or image_file\n is not provided, the model is created without it. 
If a version is not\n provided but a file path is, then the file hash is used.\n\n Parameters\n ==========\n image_path: full path to image file\n image_name: if defined, the user wants a custom name (and not based on uri)\n metadata: any extra metadata to keep for the image (dict)\n save: if True, move the image to the cache if it's not there\n copy: If True, copy the image instead of moving it.\n\n image_name: a uri that gets parsed into a names object that looks like:\n\n {'collection': 'vsoch',\n 'image': 'hello-world',\n 'storage': 'vsoch/hello-world-latest.img',\n 'tag': 'latest',\n 'version': '12345',\n 'uri': 'vsoch/hello-world:latest@12345'}\n\n After running add, the user will take some image in a working\n directory, add it to the database, and have it available for search\n and use under SREGISTRY_STORAGE//\n\n If the container was retrieved from a webby place, it should have a version.\n If no version is found, the file hash is used."} {"_id": "q_8875", "text": "push an image to Singularity Registry"} {"_id": "q_8876", "text": "take a recipe, and return the complete header line. 
If\n remove_header is True, only return the value.\n\n Parameters\n ==========\n recipe: the recipe file\n headers: the header key to find and parse\n remove_header: if true, remove the key"} {"_id": "q_8877", "text": "find_single_recipe will parse a single file, and if valid,\n return an updated manifest\n\n Parameters\n ==========\n filename: the filename to assess for a recipe\n pattern: a default pattern to search for\n manifest: an already started manifest"} {"_id": "q_8878", "text": "given a list of files, copy them to a temporary folder,\n compress into a .tar.gz, and rename based on the file hash.\n Return the full path to the .tar.gz in the temporary folder.\n\n Parameters\n ==========\n package_files: a list of files to include in the tar.gz"} {"_id": "q_8879", "text": "format_container_name will take a name supplied by the user,\n remove all special characters (except for those defined by \"special-characters\")\n and return the new image name."} {"_id": "q_8880", "text": "useColor will determine if color should be added\n to a print. Will check if being run in a terminal, and\n if it has support for ansi"} {"_id": "q_8881", "text": "determine if a level should print to\n stderr, includes all levels but INFO and QUIET"} {"_id": "q_8882", "text": "write will write a message to a stream,\n first checking the encoding"} {"_id": "q_8883", "text": "table will print a table of entries. If rows is \n a dictionary, the keys are interpreted as column names. 
if\n not, a numbered list is used."} {"_id": "q_8884", "text": "return a default template for some function in sregistry.\n If there is no template, None is returned.\n\n Parameters\n ==========\n name: the name of the template to retrieve"} {"_id": "q_8885", "text": "return the image manifest via the aws client, saved in self.manifest"} {"_id": "q_8886", "text": "update secrets will take a secrets credential file\n either located at .sregistry or the environment variable\n SREGISTRY_CLIENT_SECRETS and update the current client \n secrets as well as the associated API base. This is where you\n should do any customization of the secrets file, or use\n it to update your client, if needed."} {"_id": "q_8887", "text": "Translate S3 errors to FSErrors."} {"_id": "q_8888", "text": "Create an S3File backed with a temporary file."} {"_id": "q_8889", "text": "Builds a url to a gravatar from an email address.\n\n :param email: The email to fetch the gravatar for\n :param size: The size (in pixels) of the gravatar to fetch\n :param default: What type of default image to use if the gravatar does not exist\n :param rating: Used to filter the allowed gravatar ratings\n :param secure: If True use https, otherwise plain http"} {"_id": "q_8890", "text": "Returns True if the user has a gravatar, False if otherwise"} {"_id": "q_8891", "text": "Generator for blocks for a chimera block quotient"} {"_id": "q_8892", "text": "Extract the blocks from a graph, and return a\n block-quotient graph according to the acceptability\n functions block_good and eblock_good\n\n Inputs:\n G: a networkx graph\n blocks: a tuple of tuples"} {"_id": "q_8893", "text": "Return a set of resonance forms as SMILES strings, given a SMILES string.\n\n :param smiles: A SMILES string.\n :returns: A set containing SMILES strings for every possible resonance form.\n :rtype: set of strings."} {"_id": "q_8894", "text": "Repeatedly apply normalization transform to molecule until no changes occur.\n\n It is possible 
for multiple products to be produced when a rule is applied. The rule is applied repeatedly to\n each of the products, until no further changes occur or after 20 attempts. If there are multiple unique products\n after the final application, the first product (sorted alphabetically by SMILES) is chosen."} {"_id": "q_8895", "text": "Return a canonical tautomer by enumerating and scoring all possible tautomers.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The canonical tautomer.\n :rtype: rdkit.Chem.rdchem.Mol"} {"_id": "q_8896", "text": "Break covalent bonds between metals and organic atoms under certain conditions.\n\n The algorithm works as follows:\n\n - Disconnect N, O, F from any metal.\n - Disconnect other non-metals from transition metals + Al (but not Hg, Ga, Ge, In, Sn, As, Tl, Pb, Bi, Po).\n - For every bond broken, adjust the charges of the begin and end atoms accordingly.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The molecule with metals disconnected.\n :rtype: rdkit.Chem.rdchem.Mol"} {"_id": "q_8897", "text": "Return a standardized canonical SMILES string given a SMILES string.\n\n Note: This is a convenience function for quickly standardizing a single SMILES string. 
It is more efficient to use\n the :class:`~molvs.standardize.Standardizer` class directly when working with many molecules or when custom options\n are needed.\n\n :param string smiles: The SMILES for the molecule.\n :returns: The SMILES for the standardized molecule.\n :rtype: string."} {"_id": "q_8898", "text": "Return a set of tautomers as SMILES strings, given a SMILES string.\n\n :param smiles: A SMILES string.\n :returns: A set containing SMILES strings for every possible tautomer.\n :rtype: set of strings."} {"_id": "q_8899", "text": "Return a standardized version the given molecule.\n\n The standardization process consists of the following stages: RDKit\n :py:func:`~rdkit.Chem.rdmolops.RemoveHs`, RDKit :py:func:`~rdkit.Chem.rdmolops.SanitizeMol`,\n :class:`~molvs.metal.MetalDisconnector`, :class:`~molvs.normalize.Normalizer`,\n :class:`~molvs.charge.Reionizer`, RDKit :py:func:`~rdkit.Chem.rdmolops.AssignStereochemistry`.\n\n :param mol: The molecule to standardize.\n :type mol: rdkit.Chem.rdchem.Mol\n :returns: The standardized molecule.\n :rtype: rdkit.Chem.rdchem.Mol"} {"_id": "q_8900", "text": "Return the tautomer parent of a given molecule.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The tautomer parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol"} {"_id": "q_8901", "text": "Return the fragment parent of a given molecule.\n\n The fragment parent is the largest organic covalent unit in the molecule.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The fragment parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol"} {"_id": "q_8902", "text": "Return the stereo parent of a given molecule.\n\n The stereo parent has all stereochemistry information removed from tetrahedral centers and double bonds.\n\n :param mol: The input 
molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The stereo parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol"} {"_id": "q_8903", "text": "Return the isotope parent of a given molecule.\n\n The isotope parent has all atoms replaced with the most abundant isotope for that element.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The isotope parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol"} {"_id": "q_8904", "text": "Return the super parent of a given molecule.\n\n The super parent is fragment, charge, isotope, stereochemistry and tautomer insensitive. From the input\n molecule, the largest fragment is taken. This is uncharged and then isotope and stereochemistry information is\n discarded. Finally, the canonical tautomer is determined and returned.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The super parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol"} {"_id": "q_8905", "text": "Main function for molvs command line interface."} {"_id": "q_8906", "text": "Return the molecule with specified fragments removed.\n\n :param mol: The molecule to remove fragments from.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The molecule with fragments removed.\n :rtype: rdkit.Chem.rdchem.Mol"} {"_id": "q_8907", "text": "Return the largest covalent unit.\n\n The largest fragment is determined by number of atoms (including hydrogens). 
Ties are broken by taking the\n fragment with the higher molecular weight, and then by taking the first alphabetically by SMILES if needed.\n\n :param mol: The molecule to choose the largest fragment from.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The largest fragment.\n :rtype: rdkit.Chem.rdchem.Mol"} {"_id": "q_8908", "text": "Construct a constraint from a validation function.\n\n Args:\n func (function):\n Function that evaluates True when the variables satisfy the constraint.\n\n variables (iterable):\n Iterable of variable labels.\n\n vartype (:class:`~dimod.Vartype`/str/set):\n Variable type for the constraint. Accepted input values:\n\n * :attr:`~dimod.Vartype.SPIN`, ``'SPIN'``, ``{-1, 1}``\n * :attr:`~dimod.Vartype.BINARY`, ``'BINARY'``, ``{0, 1}``\n\n name (string, optional, default='Constraint'):\n Name for the constraint.\n\n Examples:\n This example creates a constraint that binary variables `a` and `b`\n are not equal.\n\n >>> import dwavebinarycsp\n >>> import operator\n >>> const = dwavebinarycsp.Constraint.from_func(operator.ne, ['a', 'b'], 'BINARY')\n >>> print(const.name)\n Constraint\n >>> (0, 1) in const.configurations\n True\n\n This example creates a constraint that :math:`out = NOT(x)`\n for spin variables.\n\n >>> import dwavebinarycsp\n >>> def not_(y, x): # y=NOT(x) for spin variables\n ... return (y == -x)\n ...\n >>> const = dwavebinarycsp.Constraint.from_func(\n ... not_,\n ... ['out', 'in'],\n ... {1, -1},\n ... name='not_spin')\n >>> print(const.name)\n not_spin\n >>> (1, -1) in const.configurations\n True"} {"_id": "q_8909", "text": "Construct a constraint from valid configurations.\n\n Args:\n configurations (iterable[tuple]):\n Valid configurations of the variables. Each configuration is a tuple of variable\n assignments ordered by :attr:`~Constraint.variables`.\n\n variables (iterable):\n Iterable of variable labels.\n\n vartype (:class:`~dimod.Vartype`/str/set):\n Variable type for the constraint. 
Accepted input values:\n\n * :attr:`~dimod.Vartype.SPIN`, ``'SPIN'``, ``{-1, 1}``\n * :attr:`~dimod.Vartype.BINARY`, ``'BINARY'``, ``{0, 1}``\n\n name (string, optional, default='Constraint'):\n Name for the constraint.\n\n Examples:\n\n This example creates a constraint that variables `a` and `b` are not equal.\n\n >>> import dwavebinarycsp\n >>> const = dwavebinarycsp.Constraint.from_configurations([(0, 1), (1, 0)],\n ... ['a', 'b'], dwavebinarycsp.BINARY)\n >>> print(const.name)\n Constraint\n >>> (0, 0) in const.configurations # Order matches variables: a,b\n False\n\n This example creates a constraint based on specified valid configurations\n that represents an OR gate for spin variables.\n\n >>> import dwavebinarycsp\n >>> const = dwavebinarycsp.Constraint.from_configurations(\n ... [(-1, -1, -1), (1, -1, 1), (1, 1, -1), (1, 1, 1)],\n ... ['y', 'x1', 'x2'],\n ... dwavebinarycsp.SPIN, name='or_spin')\n >>> print(const.name)\n or_spin\n >>> (1, 1, -1) in const.configurations # Order matches variables: y,x1,x2\n True"} {"_id": "q_8910", "text": "Check that a solution satisfies the constraint.\n\n Args:\n solution (container):\n An assignment for the variables in the constraint.\n\n Returns:\n bool: True if the solution satisfies the constraint; otherwise False.\n\n Examples:\n This example creates a constraint that :math:`a \\\\ne b` on binary variables\n and tests it for two candidate solutions, with additional unconstrained\n variable c.\n\n >>> import dwavebinarycsp\n >>> const = dwavebinarycsp.Constraint.from_configurations([(0, 1), (1, 0)],\n ... 
['a', 'b'], dwavebinarycsp.BINARY)\n >>> solution = {'a': 1, 'b': 1, 'c': 0}\n >>> const.check(solution)\n False\n >>> solution = {'a': 1, 'b': 0, 'c': 0}\n >>> const.check(solution)\n True"} {"_id": "q_8911", "text": "Flip a variable in the constraint.\n\n Args:\n v (variable):\n Variable in the constraint to take the complementary value of its\n construction value.\n\n Examples:\n This example creates a constraint that :math:`a = b` on binary variables\n and flips variable a.\n\n >>> import dwavebinarycsp\n >>> import operator\n >>> const = dwavebinarycsp.Constraint.from_func(operator.eq,\n ... ['a', 'b'], dwavebinarycsp.BINARY)\n >>> const.check({'a': 0, 'b': 0})\n True\n >>> const.flip_variable('a')\n >>> const.check({'a': 1, 'b': 0})\n True\n >>> const.check({'a': 0, 'b': 0})\n False"} {"_id": "q_8912", "text": "Add a constraint.\n\n Args:\n constraint (function/iterable/:obj:`.Constraint`):\n Constraint definition in one of the supported formats:\n\n 1. Function, with input arguments matching the order and\n :attr:`~.ConstraintSatisfactionProblem.vartype` type of the `variables`\n argument, that evaluates True when the constraint is satisfied.\n 2. List explicitly specifying each allowed configuration as a tuple.\n 3. :obj:`.Constraint` object built either explicitly or by :mod:`dwavebinarycsp.factories`.\n\n variables (iterable):\n Variables associated with the constraint. Not required when `constraint` is\n a :obj:`.Constraint` object.\n\n Examples:\n This example defines a function that evaluates True when the constraint is satisfied.\n The function's input arguments match the order and type of the `variables` argument.\n\n >>> import dwavebinarycsp\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> def all_equal(a, b, c): # works for both dwavebinarycsp.BINARY and dwavebinarycsp.SPIN\n ... 
return (a == b) and (b == c)\n >>> csp.add_constraint(all_equal, ['a', 'b', 'c'])\n >>> csp.check({'a': 0, 'b': 0, 'c': 0})\n True\n >>> csp.check({'a': 0, 'b': 0, 'c': 1})\n False\n\n This example explicitly lists allowed configurations.\n\n >>> import dwavebinarycsp\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.SPIN)\n >>> eq_configurations = {(-1, -1), (1, 1)}\n >>> csp.add_constraint(eq_configurations, ['v0', 'v1'])\n >>> csp.check({'v0': -1, 'v1': +1})\n False\n >>> csp.check({'v0': -1, 'v1': -1})\n True\n\n This example uses a :obj:`.Constraint` object built by :mod:`dwavebinarycsp.factories`.\n\n >>> import dwavebinarycsp\n >>> import dwavebinarycsp.factories.constraint.gates as gates\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(gates.and_gate(['a', 'b', 'c'])) # add an AND gate\n >>> csp.add_constraint(gates.xor_gate(['a', 'c', 'd'])) # add an XOR gate\n >>> csp.check({'a': 1, 'b': 0, 'c': 0, 'd': 1})\n True"} {"_id": "q_8913", "text": "Build a binary quadratic model with minimal energy levels at solutions to the specified constraint satisfaction\n problem.\n\n Args:\n csp (:obj:`.ConstraintSatisfactionProblem`):\n Constraint satisfaction problem.\n\n min_classical_gap (float, optional, default=2.0):\n Minimum energy gap from ground. Each constraint violated by the solution increases\n the energy level of the binary quadratic model by at least this much relative\n to ground energy.\n\n max_graph_size (int, optional, default=8):\n Maximum number of variables in the binary quadratic model that can be used to\n represent a single constraint.\n\n Returns:\n :class:`~dimod.BinaryQuadraticModel`\n\n Notes:\n For a `min_classical_gap` > 2 or constraints with more than two variables, requires\n access to factories from the penaltymodel_ ecosystem to construct the binary quadratic\n model.\n\n .. 
_penaltymodel: https://github.com/dwavesystems/penaltymodel\n\n Examples:\n This example creates a binary-valued constraint satisfaction problem\n with two constraints, :math:`a = b` and :math:`b \\\\ne c`, and builds\n a binary quadratic model with a minimum energy level of -2 such that\n each constraint violation by a solution adds the default minimum energy gap.\n\n >>> import dwavebinarycsp\n >>> import operator\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(operator.eq, ['a', 'b']) # a == b\n >>> csp.add_constraint(operator.ne, ['b', 'c']) # b != c\n >>> bqm = dwavebinarycsp.stitch(csp)\n >>> bqm.energy({'a': 0, 'b': 0, 'c': 1}) # satisfies csp\n -2.0\n >>> bqm.energy({'a': 0, 'b': 0, 'c': 0}) # violates one constraint\n 0.0\n >>> bqm.energy({'a': 1, 'b': 0, 'c': 0}) # violates two constraints\n 2.0\n\n This example creates a binary-valued constraint satisfaction problem\n with two constraints, :math:`a = b` and :math:`b \\\\ne c`, and builds\n a binary quadratic model with a minimum energy gap of 4.\n Note that in this case the conversion to binary quadratic model adds two\n ancillary variables that must be minimized over when solving.\n\n >>> import dwavebinarycsp\n >>> import operator\n >>> import itertools\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(operator.eq, ['a', 'b']) # a == b\n >>> csp.add_constraint(operator.ne, ['b', 'c']) # b != c\n >>> bqm = dwavebinarycsp.stitch(csp, min_classical_gap=4.0)\n >>> list(bqm) # # doctest: +SKIP\n ['a', 'aux1', 'aux0', 'b', 'c']\n >>> min([bqm.energy({'a': 0, 'b': 0, 'c': 1, 'aux0': aux0, 'aux1': aux1}) for\n ... aux0, aux1 in list(itertools.product([0, 1], repeat=2))]) # satisfies csp\n -6.0\n >>> min([bqm.energy({'a': 0, 'b': 0, 'c': 0, 'aux0': aux0, 'aux1': aux1}) for\n ... 
aux0, aux1 in list(itertools.product([0, 1], repeat=2))]) # violates one constraint\n -2.0\n >>> min([bqm.energy({'a': 1, 'b': 0, 'c': 0, 'aux0': aux0, 'aux1': aux1}) for\n ... aux0, aux1 in list(itertools.product([0, 1], repeat=2))]) # violates two constraints\n 2.0\n\n This example finds the minimum graph size for the previous example.\n\n >>> import dwavebinarycsp\n >>> import operator\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(operator.eq, ['a', 'b']) # a == b\n >>> csp.add_constraint(operator.ne, ['b', 'c']) # b != c\n >>> for n in range(8, 1, -1):\n ... try:\n ... bqm = dwavebinarycsp.stitch(csp, min_classical_gap=4.0, max_graph_size=n)\n ... except dwavebinarycsp.exceptions.ImpossibleBQM:\n ... print(n+1)\n ...\n 3"} {"_id": "q_8914", "text": "create a bqm for a constraint with two variables.\n\n The bqm will have a classical gap of exactly 2."} {"_id": "q_8915", "text": "AND gate.\n\n Args:\n variables (list): Variable labels for the AND gate as `[in1, in2, out]`,\n where `in1, in2` are inputs and `out` the gate's output.\n vartype (Vartype, optional, default='BINARY'): Variable type. 
Accepted\n input values:\n\n * Vartype.SPIN, 'SPIN', {-1, 1}\n * Vartype.BINARY, 'BINARY', {0, 1}\n name (str, optional, default='AND'): Name for the constraint.\n\n Returns:\n Constraint(:obj:`.Constraint`): Constraint that is satisfied when its variables are\n assigned values that match the valid states of an AND gate.\n\n Examples:\n >>> import dwavebinarycsp\n >>> import dwavebinarycsp.factories.constraint.gates as gates\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(gates.and_gate(['a', 'b', 'c'], name='AND1'))\n >>> csp.check({'a': 1, 'b': 0, 'c': 0})\n True"} {"_id": "q_8916", "text": "XOR gate.\n\n Args:\n variables (list): Variable labels for the XOR gate as `[in1, in2, out]`,\n where `in1, in2` are inputs and `out` the gate's output.\n vartype (Vartype, optional, default='BINARY'): Variable type. Accepted\n input values:\n\n * Vartype.SPIN, 'SPIN', {-1, 1}\n * Vartype.BINARY, 'BINARY', {0, 1}\n name (str, optional, default='XOR'): Name for the constraint.\n\n Returns:\n Constraint(:obj:`.Constraint`): Constraint that is satisfied when its variables are\n assigned values that match the valid states of an XOR gate.\n\n Examples:\n >>> import dwavebinarycsp\n >>> import dwavebinarycsp.factories.constraint.gates as gates\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(gates.xor_gate(['x', 'y', 'z'], name='XOR1'))\n >>> csp.check({'x': 1, 'y': 1, 'z': 1})\n False"} {"_id": "q_8917", "text": "Half adder.\n\n Args:\n variables (list): Variable labels for the half adder as `[in1, in2, sum, carry]`,\n where `in1, in2` are inputs to be added and `sum` and `carry` the resultant\n outputs.\n vartype (Vartype, optional, default='BINARY'): Variable type. 
Accepted\n input values:\n\n * Vartype.SPIN, 'SPIN', {-1, 1}\n * Vartype.BINARY, 'BINARY', {0, 1}\n name (str, optional, default='HALF_ADDER'): Name for the constraint.\n\n Returns:\n Constraint(:obj:`.Constraint`): Constraint that is satisfied when its variables are\n assigned values that match the valid states of a Boolean half adder.\n\n Examples:\n >>> import dwavebinarycsp\n >>> import dwavebinarycsp.factories.constraint.gates as gates\n >>> csp = dwavebinarycsp.ConstraintSatisfactionProblem(dwavebinarycsp.BINARY)\n >>> csp.add_constraint(gates.halfadder_gate(['a', 'b', 'total', 'carry'], name='HA1'))\n >>> csp.check({'a': 1, 'b': 1, 'total': 0, 'carry': 1})\n True"} {"_id": "q_8918", "text": "Random XOR constraint satisfaction problem.\n\n Args:\n num_variables (integer): Number of variables (at least three).\n num_clauses (integer): Number of constraints that together constitute the\n constraint satisfaction problem.\n vartype (Vartype, optional, default='BINARY'): Variable type. 
Accepted\n input values:\n\n * Vartype.SPIN, 'SPIN', {-1, 1}\n * Vartype.BINARY, 'BINARY', {0, 1}\n satisfiable (bool, optional, default=True): True if the CSP can be satisfied.\n\n Returns:\n CSP (:obj:`.ConstraintSatisfactionProblem`): CSP that is satisfied when its variables\n are assigned values that satisfy an XOR satisfiability problem.\n\n Examples:\n This example creates a CSP with 5 variables and two random constraints and checks\n whether a particular assignment of variables satisfies it.\n\n >>> import dwavebinarycsp\n >>> import dwavebinarycsp.factories as sat\n >>> csp = sat.random_xorsat(5, 2)\n >>> csp.constraints # doctest: +SKIP\n [Constraint.from_configurations(frozenset({(1, 0, 0), (1, 1, 1), (0, 1, 0), (0, 0, 1)}), (4, 3, 0),\n Vartype.BINARY, name='XOR (0 flipped)'),\n Constraint.from_configurations(frozenset({(1, 1, 0), (0, 1, 1), (0, 0, 0), (1, 0, 1)}), (2, 0, 4),\n Vartype.BINARY, name='XOR (2 flipped) (0 flipped)')]\n >>> csp.check({0: 1, 1: 0, 2: 0, 3: 1, 4: 1}) # doctest: +SKIP\n True"} {"_id": "q_8919", "text": "Generates a model chooser definition from a model, and adds it to the\n registry."} {"_id": "q_8920", "text": "Parse additional url fields and map them to inputs\n\n Attempt to create a dictionary with keys being user input, and\n response being the returned URL"} {"_id": "q_8921", "text": "Convert a list of JSON values to a list of models"} {"_id": "q_8922", "text": "Populate all fields of a model with data\n\n Given a model with a PandoraModel superclass will enumerate all\n declared fields on that model and populate the values of their Field\n and SyntheticField classes. 
All declared fields will have a value after\n this function runs even if they are missing from the incoming JSON."} {"_id": "q_8923", "text": "Convert one JSON value to a model object"} {"_id": "q_8924", "text": "Common repr logic for subclasses to hook"} {"_id": "q_8925", "text": "Write command to remote process"} {"_id": "q_8926", "text": "Ensure player backing process is started"} {"_id": "q_8927", "text": "Play a new song from a Pandora model\n\n Returns once the stream starts but does not shut down the remote audio\n output backend process. Calls the input callback when the user has\n input."} {"_id": "q_8928", "text": "Play the station until something ends it\n\n This function will run forever until terminated by calling\n end_station."} {"_id": "q_8929", "text": "Set stdout to non-blocking\n\n VLC does not always return a newline when reading status, so in order to\n be lazy and still use the read API without caring about how much output\n there is, we switch stdout to nonblocking mode and just read a large\n chunk of data."} {"_id": "q_8930", "text": "Format a station menu and make the user select a station"} {"_id": "q_8931", "text": "Input callback, handles key presses"} {"_id": "q_8932", "text": "Function decorator implementing retrying logic.\n\n exceptions: A tuple of exception classes; default (Exception,)\n\n The decorator will call the function up to max_tries times if it raises\n an exception.\n\n By default it catches instances of the Exception class and subclasses.\n This will recover after all but the most fatal errors. 
You may specify a\n custom tuple of exception classes with the 'exceptions' argument; the\n function will only be retried if it raises one of the specified\n exceptions."} {"_id": "q_8933", "text": "Iterate over a finite iterator forever\n\n When the iterator is exhausted, this will call the function again to generate a\n new iterator and keep iterating."} {"_id": "q_8934", "text": "Gather user input and convert it to an integer\n\n Will keep trying until the user enters an integer or until they ^C the\n program."} {"_id": "q_8935", "text": "Example program integrating an IVP problem of van der Pol oscillator"} {"_id": "q_8936", "text": "open the drop box\n\n You need to call this method before starting to put packages.\n\n Returns\n -------\n None"} {"_id": "q_8937", "text": "put a task\n\n This method places a task in the working area and has the\n dispatcher execute it.\n\n If you need to put multiple tasks, it can be much faster to\n use `put_multiple()` than to use this method multiple times\n depending on the dispatcher.\n\n Parameters\n ----------\n package : callable\n A task\n\n Returns\n -------\n int\n A package index assigned by the working area"} {"_id": "q_8938", "text": "return pairs of package indices and results of finished tasks\n\n This method does not wait for tasks to finish.\n\n Returns\n -------\n list\n A list of pairs of package indices and results"} {"_id": "q_8939", "text": "return a pair of a package index and result of a task\n\n This method waits until a task finishes. It returns `None` if\n no task is running.\n\n Returns\n -------\n tuple or None\n A pair of a package index and result. 
`None` if no task\n is running."} {"_id": "q_8940", "text": "run the event loops in the background.\n\n Args:\n eventLoops (list): a list of event loops to run"} {"_id": "q_8941", "text": "Return a pair of a run id and a result.\n\n This method waits until an event loop finishes.\n This method returns None if no loop is running."} {"_id": "q_8942", "text": "wait until all event loops end and return the results."} {"_id": "q_8943", "text": "Convert ``key_vals_dict`` to ``tuple_list``.\n\n Args:\n key_vals_dict (dict): The first parameter.\n fill: a value to fill missing data\n\n Returns:\n A list of tuples"} {"_id": "q_8944", "text": "Open the working area\n\n Returns\n -------\n None"} {"_id": "q_8945", "text": "Collect the result of a task\n\n Parameters\n ----------\n package_index :\n a package index\n\n Returns\n -------\n obj\n The result of the task"} {"_id": "q_8946", "text": "Returns the relative path of the result\n\n This method returns the path to the result relative to the\n top dir of the working area. 
This method simply constructs the\n path based on the convention and doesn't check if the result\n actually exists.\n\n Parameters\n ----------\n package_index :\n a package index\n\n Returns\n -------\n str\n the relative path to the result"} {"_id": "q_8947", "text": "Submit multiple jobs\n\n Parameters\n ----------\n workingArea :\n A workingArea\n package_indices : list(int)\n A list of package indices\n\n Returns\n -------\n list(str)\n The list of the run IDs of the jobs"} {"_id": "q_8948", "text": "Return the run IDs of the finished jobs\n\n Returns\n -------\n list(str)\n The list of the run IDs of the finished jobs"} {"_id": "q_8949", "text": "Wait until all jobs finish and return the run IDs of the finished jobs\n\n Returns\n -------\n list(str)\n The list of the run IDs of the finished jobs"} {"_id": "q_8950", "text": "put a task and its arguments\n\n If you need to put multiple tasks, it can be faster to put\n multiple tasks with `put_multiple()` than to use this method\n multiple times.\n\n Parameters\n ----------\n task : a function\n A function to be executed\n args : list\n A list of positional arguments to the `task`\n kwargs : dict\n A dict with keyword arguments to the `task`\n\n Returns\n -------\n int, str, or any hashable and sortable\n A task ID. IDs are sortable in the order in which the\n corresponding tasks are put."} {"_id": "q_8951", "text": "return a list of pairs of IDs and results of finished tasks.\n\n This method doesn't wait for tasks to finish. It returns IDs\n and results which have already finished.\n\n Returns\n -------\n list\n A list of pairs of IDs and results"} {"_id": "q_8952", "text": "return a pair of an ID and a result of a task.\n\n This method waits for a task to finish.\n\n Returns\n -------\n An ID and a result of a task. 
`None` if no task is running."} {"_id": "q_8953", "text": "return a list of pairs of IDs and results of all tasks.\n\n This method waits for all tasks to finish.\n\n Returns\n -------\n list\n A list of pairs of IDs and results"} {"_id": "q_8954", "text": "return a list results of all tasks.\n\n This method waits for all tasks to finish.\n\n Returns\n -------\n list\n A list of results of the tasks. The results are sorted in\n the order in which the tasks are put."} {"_id": "q_8955", "text": "expand a path config\n\n Args:\n path_cfg (str, tuple, dict): a config for path\n alias_dict (dict): a dict for aliases\n overriding_kargs (dict): to be used for recursive call"} {"_id": "q_8956", "text": "check if the jobs are running and return a list of pids for\n finished jobs"} {"_id": "q_8957", "text": "wait until all jobs finish and return a list of pids"} {"_id": "q_8958", "text": "return the ROOT.vector object for the branch."} {"_id": "q_8959", "text": "Ensure all config-time files have been generated. Return a\n dictionary of generated items."} {"_id": "q_8960", "text": "unpack the specified tarball url into the specified directory"} {"_id": "q_8961", "text": "return a list of Version objects, each with a tarball URL set"} {"_id": "q_8962", "text": "return a list of GithubComponentVersion objects for the tip of each branch"} {"_id": "q_8963", "text": "Try to unpublish a recently published version. Return any errors that\n occur."} {"_id": "q_8964", "text": "Read a list of files. Their configuration values are merged, with\n preference to values from files earlier in the list."} {"_id": "q_8965", "text": "return a configuration value\n\n usage:\n get('section.property')\n\n Note that currently array indexes are not supported. You must\n get the whole array.\n\n returns None if any path element or the property is missing"} {"_id": "q_8966", "text": "Set a configuration value. If no filename is specified, the\n property is set in the first configuration file. 
Note that if a\n filename is specified and the property path is present in an\n earlier filename then the newly set property will be hidden.\n\n usage:\n set('section.property', value='somevalue')\n\n Note that currently array indexes are not supported. You must\n set the whole array."} {"_id": "q_8967", "text": "indicate whether the current item is the last one in a generator"} {"_id": "q_8968", "text": "Publish to the appropriate registry, return a description of any\n errors that occurred, or None if successful.\n No VCS tagging is performed."} {"_id": "q_8969", "text": "Try to un-publish the current version. Return a description of any\n errors that occurred, or None if successful."} {"_id": "q_8970", "text": "Return the specified script command. If the first part of the\n command is a .py file, then the current python interpreter is\n prepended.\n\n If the script is a single string, rather than an array, it is\n shlex-split."} {"_id": "q_8971", "text": "Check if this module has any dependencies with the specified name\n in its dependencies list, or in target dependencies for the\n specified target"} {"_id": "q_8972", "text": "Check if this module, or any of its dependencies, has a\n dependency with the specified name in their dependencies, or in\n their targetDependencies corresponding to the specified target.\n\n Note that if recursive dependencies are not installed, this test\n may return a false-negative."} {"_id": "q_8973", "text": "Retrieve and install all the dependencies of this component and its\n dependencies, recursively, or satisfy them from a collection of\n available_components or from disk.\n\n Returns\n =======\n (components, errors)\n\n components: dictionary of name:Component\n errors: sequence of errors\n\n Parameters\n ==========\n\n available_components:\n None (default) or a dictionary of name:component. 
This is\n searched before searching directories or fetching remote\n components\n\n search_dirs:\n None (default), or sequence of directories to search for\n already installed (but not yet loaded) components. Used so\n that manually installed or linked components higher up the\n dependency tree are found by their users lower down.\n\n These directories are searched in order, and finally the\n current directory is checked.\n\n update_installed:\n False (default), True, or set(): whether to check the\n available versions of installed components, and update if a\n newer version is available. If this is a set(), only update\n things in the specified set.\n\n traverse_links:\n False (default) or True: whether to recurse into linked\n dependencies when updating/installing.\n\n target:\n None (default), or a Target object. If specified, the target\n name and its similarTo list will be used in resolving\n dependencies. If None, then only target-independent\n dependencies will be installed.\n\n test:\n True, False, or 'toplevel': should test-only dependencies be\n installed? (yes, no, or only for this module, not its\n dependencies)."} {"_id": "q_8974", "text": "Some components must export whole directories full of headers into\n the search path. This is really really bad, and they shouldn't do\n it, but support is provided as a concession to compatibility."} {"_id": "q_8975", "text": "merge dictionaries of dictionaries recursively, with elements from\n dictionaries earlier in the argument sequence taking precedence"} {"_id": "q_8976", "text": "create a new nested dictionary object with the same structure as\n 'dictionary', but with all scalar values replaced with 'value'"} {"_id": "q_8977", "text": "returns pack.DependencySpec for the base target of this target (or\n None if this target does not inherit from another target)."} {"_id": "q_8978", "text": "Return true if this target inherits from the named target (directly\n or indirectly). 
Also returns true if this target is the named\n target. Otherwise return false."} {"_id": "q_8979", "text": "Execute the given command, returning an error message if an error occurred\n or None if the command was successful."} {"_id": "q_8980", "text": "Execute the commands necessary to build this component, and all of\n its dependencies."} {"_id": "q_8981", "text": "return decorator to prune cache after calling fn with a probability of p"} {"_id": "q_8982", "text": "Calibrate noisy variance estimates with empirical Bayes.\n\n Parameters\n ----------\n vars: ndarray\n List of variance estimates.\n sigma2: int\n Estimate of the Monte Carlo noise in vars.\n\n Returns\n -------\n An array of the calibrated variance estimates"} {"_id": "q_8983", "text": "Derive samples used to create trees in scikit-learn RandomForest objects.\n\n Recovers the samples in each tree from the random state of that tree using\n :func:`forest._generate_sample_indices`.\n\n Parameters\n ----------\n n_samples : int\n The number of samples used to fit the scikit-learn RandomForest object.\n\n forest : RandomForest\n Regressor or Classifier object that is already fit by scikit-learn.\n\n Returns\n -------\n Array that records how many times a data point was placed in a tree.\n Columns are individual trees. Rows are the number of times a sample was\n used in a tree."} {"_id": "q_8984", "text": "Retrieves the number of contributors to a repo in the organization.\n Also adds to unique contributor list."} {"_id": "q_8985", "text": "Retrieves the number of closed issues."} {"_id": "q_8986", "text": "Checks to see if the given repo has a top level LICENSE file."} {"_id": "q_8987", "text": "Writes stats from the organization to JSON."} {"_id": "q_8988", "text": "Updates the total.csv file with current data."} {"_id": "q_8989", "text": "Updates languages.csv file with current data."} {"_id": "q_8990", "text": "Checks if a directory exists. 
If not, it creates one with the specified\n file_path."} {"_id": "q_8991", "text": "Removes all rows of the associated date from the given csv file.\n Defaults to today."} {"_id": "q_8992", "text": "Create a github3.py session for a GitHub Enterprise instance\n\n If token is not provided, will attempt to use the GITHUB_API_TOKEN\n environment variable if present."} {"_id": "q_8993", "text": "Simplified check for API limits\n\n If necessary, spin in place waiting for API to reset before returning.\n\n See: https://developer.github.com/v3/#rate-limiting"} {"_id": "q_8994", "text": "Create a GitHub session for making requests"} {"_id": "q_8995", "text": "Yields GitHub3.py repo objects for provided orgs and repo names\n\n If orgs and repos are BOTH empty, execute special mode of getting ALL\n repositories from the GitHub Server.\n\n If public_only is True, will return only those repos that are marked as\n public. Set this to false to return all organizations that the session has\n permissions to access."} {"_id": "q_8996", "text": "Retrieves an organization via given org name. If given\n empty string, prompts user for an org name."} {"_id": "q_8997", "text": "Create CodeGovProject object from GitLab Repository"} {"_id": "q_8998", "text": "Create CodeGovProject object from DOE CODE record\n\n Handles crafting Code.gov Project"} {"_id": "q_8999", "text": "Retrieves the traffic for the repositories of the given organization."} {"_id": "q_9000", "text": "Retrieves the total referrers and unique referrers of all repos in json\n and then stores it in a dict."} {"_id": "q_9001", "text": "Retrieves data from json and stores it in the supplied dict. Accepts\n 'clones' or 'views' as type."} {"_id": "q_9002", "text": "Writes all traffic data to file."} {"_id": "q_9003", "text": "Checks the given csv file against the json data scraped for the given\n dict. It will remove all data retrieved that has already been recorded\n so we don't write redundant data to file. 
Returns count of rows from\n file."} {"_id": "q_9004", "text": "Writes given dict to file."} {"_id": "q_9005", "text": "Converts a DOE CODE .json file into DOE CODE projects\n Yields DOE CODE records from a DOE CODE .json file"} {"_id": "q_9006", "text": "Yeilds DOE CODE records based on provided input sources\n\n param:\n filename (str): Path to a DOE CODE .json file\n url (str): URL for a DOE CODE server json file\n key (str): API Key for connecting to DOE CODE server"} {"_id": "q_9007", "text": "Performs a login and sets the Github object via given credentials. If\n credentials are empty or incorrect then prompts user for credentials.\n Stores the authentication token in a CREDENTIALS_FILE used for future\n logins. Handles Two Factor Authentication."} {"_id": "q_9008", "text": "Retrieves the emails of the members of the organization. Note this Only\n gets public emails. Private emails would need authentication for each\n user."} {"_id": "q_9009", "text": "Writes the user emails to file."} {"_id": "q_9010", "text": "Return a connected Bitbucket session"} {"_id": "q_9011", "text": "Return a connected GitLab session\n\n ``token`` should be a ``private_token`` from Gitlab"} {"_id": "q_9012", "text": "Yields Gitlab project objects for all projects in Bitbucket"} {"_id": "q_9013", "text": "Given a Git repository URL, returns number of lines of code based on cloc\n\n Reference:\n - cloc: https://github.com/AlDanial/cloc\n - https://www.omg.org/spec/AFP/\n - Another potential way to calculation effort\n\n Sample cloc output:\n {\n \"header\": {\n \"cloc_url\": \"github.com/AlDanial/cloc\",\n \"cloc_version\": \"1.74\",\n \"elapsed_seconds\": 0.195950984954834,\n \"n_files\": 27,\n \"n_lines\": 2435,\n \"files_per_second\": 137.78956000769,\n \"lines_per_second\": 12426.5769858787\n },\n \"C++\": {\n \"nFiles\": 7,\n \"blank\": 121,\n \"comment\": 314,\n \"code\": 371\n },\n \"C/C++ Header\": {\n \"nFiles\": 8,\n \"blank\": 107,\n \"comment\": 604,\n \"code\": 191\n },\n 
\"CMake\": {\n \"nFiles\": 11,\n \"blank\": 49,\n \"comment\": 465,\n \"code\": 165\n },\n \"Markdown\": {\n \"nFiles\": 1,\n \"blank\": 18,\n \"comment\": 0,\n \"code\": 30\n },\n \"SUM\": {\n \"blank\": 295,\n \"comment\": 1383,\n \"code\": 757,\n \"nFiles\": 27\n }\n }"} {"_id": "q_9014", "text": "Read a 'pretty' formatted GraphQL query file into a one-line string.\n\n Removes line breaks and comments. Condenses white space.\n\n Args:\n filePath (str): A relative or absolute path to a file containing\n a GraphQL query.\n File may use comments and multi-line formatting.\n .. _GitHub GraphQL Explorer:\n https://developer.github.com/v4/explorer/\n verbose (Optional[bool]): If False, prints will be suppressed.\n Defaults to False.\n\n Returns:\n str: A single line GraphQL query."} {"_id": "q_9015", "text": "Send a curl request to GitHub.\n\n Args:\n gitquery (str): The query or endpoint itself.\n Examples:\n query: 'query { viewer { login } }'\n endpoint: '/user'\n gitvars (Optional[Dict]): All query variables.\n Defaults to empty.\n verbose (Optional[bool]): If False, stderr prints will be\n suppressed. Defaults to False.\n rest (Optional[bool]): If True, uses the REST API instead\n of GraphQL. Defaults to False.\n\n Returns:\n {\n 'statusNum' (int): The HTTP status code.\n 'headDict' (Dict[str]): The response headers.\n 'linkDict' (Dict[int]): Link based pagination data.\n 'result' (str): The body of the response.\n }"} {"_id": "q_9016", "text": "Wait until the given UTC timestamp.\n\n Args:\n utcTimeStamp (int): A UTC format timestamp.\n verbose (Optional[bool]): If False, all extra printouts will be\n suppressed. 
Defaults to True."} {"_id": "q_9017", "text": "Makes a pretty countdown.\n\n Args:\n gitquery (str): The query or endpoint itself.\n Examples:\n query: 'query { viewer { login } }'\n endpoint: '/user'\n printString (Optional[str]): A counter message to display.\n Defaults to 'Waiting %*d seconds...'\n verbose (Optional[bool]): If False, all extra printouts will be\n suppressed. Defaults to True."} {"_id": "q_9018", "text": "Creates the TFS Connection Context"} {"_id": "q_9019", "text": "Create a core_client.py client for a Team Foundation Server Enterprise connection instance\n\n If token is not provided, will attempt to use the TFS_API_TOKEN\n environment variable if present."} {"_id": "q_9020", "text": "Creates a TFS Git Client to pull Git repo info"} {"_id": "q_9021", "text": "Uses the weekly commits and traverses back through the last\n year, each week subtracting the weekly commits and storing them. It\n needs an initial starting commit count, which should be taken from\n the most up-to-date number in the github_stats.py output."} {"_id": "q_9022", "text": "Generates a self-signed certificate for use in an internal development\n environment for testing SSL pages.\n\n http://almostalldigital.wordpress.com/2013/03/07/self-signed-ssl-certificate-for-ec2-load-balancer/"} {"_id": "q_9023", "text": "Creates a certificate signing request to be submitted to a formal\n certificate authority to generate a certificate.\n\n Note, the provider may say the CSR must be created on the target server,\n but this is not necessary."} {"_id": "q_9024", "text": "Reads the expiration date of a local crt file."} {"_id": "q_9025", "text": "Scans through all local .crt files and displays the expiration dates."} {"_id": "q_9026", "text": "Confirms the key, CSR, and certificate files all match."} {"_id": "q_9027", "text": "Recursively merges two dictionaries.\n\n Uses fabric's AttributeDict so you can reference values via dot-notation.\n e.g. 
env.value1.value2.value3...\n\n http://stackoverflow.com/questions/3232943/update-value-of-a-nested-dictionary-of-varying-depth"} {"_id": "q_9028", "text": "Compares the local version against the latest official version on PyPI and displays a warning message if a newer release is available.\n\n This check can be disabled by setting the environment variable BURLAP_CHECK_VERSION=0."} {"_id": "q_9029", "text": "Decorator for registering a satchel method as a Fabric task.\n\n Can be used like:\n\n @task\n def my_method(self):\n ...\n\n @task(precursors=['other_satchel'])\n def my_method(self):\n ..."} {"_id": "q_9030", "text": "Check if a path exists, and is a file."} {"_id": "q_9031", "text": "Check if a path exists, and is a directory."} {"_id": "q_9032", "text": "Check if a path exists, and is a symbolic link."} {"_id": "q_9033", "text": "Upload a template file.\n\n This is a wrapper around :func:`fabric.contrib.files.upload_template`\n that adds some extra parameters.\n\n If ``mkdir`` is True, then the remote directory will be created, as\n the current user or as ``user`` if specified.\n\n If ``chown`` is True, then it will ensure that the current user (or\n ``user`` if specified) is the owner of the remote file."} {"_id": "q_9034", "text": "Compute the MD5 sum of a file."} {"_id": "q_9035", "text": "Get the lines of a remote file, ignoring empty or commented ones"} {"_id": "q_9036", "text": "Return the time of last modification of path.\n The return value is a number giving the number of seconds since the epoch\n\n Same as :py:func:`os.path.getmtime()`"} {"_id": "q_9037", "text": "Copy a file or directory"} {"_id": "q_9038", "text": "Move a file or directory"} {"_id": "q_9039", "text": "Remove a file or directory"} {"_id": "q_9040", "text": "Require a file to exist and have specific contents and properties.\n\n You can provide either:\n\n - *contents*: the required contents of the file::\n\n from fabtools import require\n\n require.file('/tmp/hello.txt', 
contents='Hello, world')\n\n - *source*: the local path of a file to upload::\n\n from fabtools import require\n\n require.file('/tmp/hello.txt', source='files/hello.txt')\n\n - *url*: the URL of a file to download (*path* is then optional)::\n\n from fabric.api import cd\n from fabtools import require\n\n with cd('tmp'):\n require.file(url='http://example.com/files/hello.txt')\n\n If *verify_remote* is ``True`` (the default), then an MD5 comparison\n will be used to check whether the remote file is the same as the\n source. If this is ``False``, the file will be assumed to be the\n same if it is present. This is useful for very large files, where\n generating an MD5 sum may take a while.\n\n When providing either the *contents* or the *source* parameter, Fabric's\n ``put`` function will be used to upload the file to the remote host.\n When ``use_sudo`` is ``True``, the file will first be uploaded to a temporary\n directory, then moved to its final location. The default temporary\n directory is ``/tmp``, but can be overridden with the *temp_dir* parameter.\n If *temp_dir* is an empty string, then the user's home directory will\n be used.\n\n If `use_sudo` is `True`, then the remote file will be owned by root,\n and its mode will reflect root's default *umask*. The optional *owner*,\n *group* and *mode* parameters can be used to override these properties.\n\n .. 
note:: This function can be accessed directly from the\n ``fabtools.require`` module for convenience."} {"_id": "q_9041", "text": "Determines if a new release has been made."} {"_id": "q_9042", "text": "Upgrade all packages, skip obsoletes if ``obsoletes=0`` in ``yum.conf``.\n\n Exclude *kernel* upgrades by default."} {"_id": "q_9043", "text": "Check if an RPM package is installed."} {"_id": "q_9044", "text": "Install one or more RPM packages.\n\n Extra *repos* may be passed to ``yum`` to enable extra repositories at install time.\n\n Extra *yes* may be passed to ``yum`` to validate license if necessary.\n\n Extra *options* may be passed to ``yum`` if necessary\n (e.g. ``['--nogpgcheck', '--exclude=package']``).\n\n ::\n\n import burlap\n\n # Install a single package, in an alternative install root\n burlap.rpm.install('emacs', options='--installroot=/my/new/location')\n\n # Install multiple packages silently\n burlap.rpm.install([\n 'unzip',\n 'nano'\n ], '--quiet')"} {"_id": "q_9045", "text": "Install a group of packages.\n\n You can use ``yum grouplist`` to get the list of groups.\n\n Extra *options* may be passed to ``yum`` if necessary\n (e.g. ``['--nogpgcheck', '--exclude=package']``).\n\n ::\n\n import burlap\n\n # Install development packages\n burlap.rpm.groupinstall('Development tools')"} {"_id": "q_9046", "text": "Remove an existing software group.\n\n Extra *options* may be passed to ``yum`` if necessary."} {"_id": "q_9047", "text": "Get the list of ``yum`` repositories.\n\n Returns enabled repositories by default. Extra *status* may be passed\n to list disabled repositories if necessary.\n\n Media and debug repositories are kept disabled, except if you pass *media*.\n\n ::\n\n import burlap\n\n # Install a package that may be included in disabled repositories\n burlap.rpm.install('vim', burlap.rpm.repolist('disabled'))"} {"_id": "q_9048", "text": "Uploads media to an Amazon S3 bucket using s3sync.\n\n Requires s3cmd. 
Install with:\n\n pip install s3cmd"} {"_id": "q_9049", "text": "Issues invalidation requests to a Cloudfront distribution\n for the current static media bucket, triggering it to reload the specified\n paths from the origin.\n\n Note, only 1000 paths can be issued in a request at any one time."} {"_id": "q_9050", "text": "Gets an S3 bucket of the given name, creating one if it doesn't already exist.\n\n Should be called with a role, if AWS credentials are stored in role settings. e.g.\n\n fab local s3.get_or_create_bucket:mybucket"} {"_id": "q_9051", "text": "Configures the server to use a static IP."} {"_id": "q_9052", "text": "Upgrade all packages."} {"_id": "q_9053", "text": "Check if a package is installed."} {"_id": "q_9054", "text": "Install one or more packages.\n\n If *update* is ``True``, the package definitions will be updated\n first, using :py:func:`~burlap.deb.update_index`.\n\n Extra *options* may be passed to ``apt-get`` if necessary.\n\n Example::\n\n import burlap\n\n # Update index, then install a single package\n burlap.deb.install('build-essential', update=True)\n\n # Install multiple packages\n burlap.deb.install([\n 'python-dev',\n 'libxml2-dev',\n ])\n\n # Install a specific version\n burlap.deb.install('emacs', version='23.3+1-1ubuntu9')"} {"_id": "q_9055", "text": "Enable unattended package installation by preseeding ``debconf``\n parameters.\n\n Example::\n\n import burlap\n\n # Unattended install of Postfix mail server\n burlap.deb.preseed_package('postfix', {\n 'postfix/main_mailer_type': ('select', 'Internet Site'),\n 'postfix/mailname': ('string', 'example.com'),\n 'postfix/destinations': ('string', 'example.com, localhost.localdomain, localhost'),\n })\n burlap.deb.install('postfix')"} {"_id": "q_9056", "text": "Check if the given key id exists in apt keyring."} {"_id": "q_9057", "text": "Check if a group exists."} {"_id": "q_9058", "text": "Responds to a forced password change via `passwd` prompts due to password expiration."} 
{"_id": "q_9059", "text": "Adds the user to the given list of groups."} {"_id": "q_9060", "text": "Creates a user with the given username."} {"_id": "q_9061", "text": "Forces the user to change their password the next time they login."} {"_id": "q_9062", "text": "Run a remote command as the root user.\n\n When connecting as root to the remote system, this will use Fabric's\n ``run`` function. In other cases, it will use ``sudo``."} {"_id": "q_9063", "text": "Iteratively builds a file hash without loading the entire file into memory.\n Designed to process an arbitrary binary file."} {"_id": "q_9064", "text": "Install the latest version of `setuptools`_.\n\n ::\n\n import burlap\n\n burlap.python_setuptools.install_setuptools()"} {"_id": "q_9065", "text": "Install Python packages with ``easy_install``.\n\n Examples::\n\n import burlap\n\n # Install a single package\n burlap.python_setuptools.install('package', use_sudo=True)\n\n # Install a list of packages\n burlap.python_setuptools.install(['pkg1', 'pkg2'], use_sudo=True)\n\n .. note:: most of the time, you'll want to use\n :py:func:`burlap.python.install()` instead,\n which uses ``pip`` to install packages."} {"_id": "q_9066", "text": "Installs all the packages necessary for managing virtual\n environments with pip."} {"_id": "q_9067", "text": "Returns true if the virtualenv tool is installed."} {"_id": "q_9068", "text": "Returns true if the virtual environment has been created."} {"_id": "q_9069", "text": "Lists the packages that require the given package."} {"_id": "q_9070", "text": "Returns all requirements files combined into one string."} {"_id": "q_9071", "text": "Creates and saves an EC2 key pair to a local PEM file."} {"_id": "q_9072", "text": "Deletes and recreates one or more VM instances."} {"_id": "q_9073", "text": "Utility to take the methods of the instance of a class, instance,\n and add them as functions to a module, module_name, so that Fabric\n can find and call them. 
Call this at the bottom of a module after\n the class definition."} {"_id": "q_9074", "text": "Given the function name, looks up the method for dynamically retrieving host data."} {"_id": "q_9075", "text": "Returns a subset of the env dictionary containing\n only those keys with the name prefix."} {"_id": "q_9076", "text": "Renders a template to a local file.\n If no filename given, a temporary filename will be generated and returned."} {"_id": "q_9077", "text": "Renders a template to a remote file.\n If no filename given, a temporary filename will be generated and returned."} {"_id": "q_9078", "text": "Iterates over sites, safely setting environment variables for each site."} {"_id": "q_9079", "text": "Perform topological sort on elements.\n\n :arg source: list of ``(name, [list of dependencies])`` pairs\n :returns: list of names, with dependencies listed first"} {"_id": "q_9080", "text": "Returns a list of hosts that have been configured to support the given site."} {"_id": "q_9081", "text": "Returns a copy of the global environment with all the local variables copied back into it."} {"_id": "q_9082", "text": "Context manager that hides the command prefix and activates dryrun to capture all following task commands to their equivalent Bash outputs."} {"_id": "q_9083", "text": "Adds this satchel to the global registries for fast lookup from other satchels."} {"_id": "q_9084", "text": "Removes this satchel from global registries."} {"_id": "q_9085", "text": "Returns a version of env filtered to only include the variables in our namespace."} {"_id": "q_9086", "text": "Loads settings for the target site."} {"_id": "q_9087", "text": "Returns a list of all required packages."} {"_id": "q_9088", "text": "Returns true if at least one tracker detects a change."} {"_id": "q_9089", "text": "Delete a PostgreSQL database.\n\n Example::\n\n import burlap\n\n # Remove DB if it exists\n if burlap.postgres.database_exists('myapp'):\n burlap.postgres.drop_database('myapp')"} {"_id": 
"q_9090", "text": "Directly transfers a table between two databases."} {"_id": "q_9091", "text": "Get the IPv4 address assigned to an interface.\n\n Example::\n\n import burlap\n\n # Print all configured IP addresses\n for interface in burlap.network.interfaces():\n print(burlap.network.address(interface))"} {"_id": "q_9092", "text": "Installs system packages listed in yum-requirements.txt."} {"_id": "q_9093", "text": "Displays all packages required by the current role\n based on the documented services provided."} {"_id": "q_9094", "text": "Writes entire crontab to the host."} {"_id": "q_9095", "text": "Forcibly kills Rabbit and purges all its queues.\n\n For emergency use when the server becomes unresponsive, even to service stop calls.\n\n If this also fails to correct the performance issues, the server may have to be completely\n reinstalled."} {"_id": "q_9096", "text": "Returns a generator yielding all the keys that have values that differ between each dictionary."} {"_id": "q_9097", "text": "Given a list of components, re-orders them according to inter-component dependencies so the most depended upon are first."} {"_id": "q_9098", "text": "Returns a generator yielding the named functions needed for a deployment."} {"_id": "q_9099", "text": "Returns the path to the manifest file."} {"_id": "q_9100", "text": "Returns a dictionary representing the current configuration state.\n\n Thumbprint is of the form:\n\n {\n component_name1: {key: value},\n component_name2: {key: value},\n ...\n }"} {"_id": "q_9101", "text": "Returns a dictionary representing the previous configuration state.\n\n Thumbprint is of the form:\n\n {\n component_name1: {key: value},\n component_name2: {key: value},\n ...\n }"} {"_id": "q_9102", "text": "Marks the remote server as currently being deployed to."} {"_id": "q_9103", "text": "Update the thumbprint on the remote server but execute no satchel configurators.\n\n components = A comma-delimited list of satchel names to limit the fake 
deployment to.\n set_satchels = A semi-colon delimited list of key-value pairs to set in satchels before recording a fake deployment."} {"_id": "q_9104", "text": "Inspects differences between the last deployment and the current code state."} {"_id": "q_9105", "text": "Retrieves the Django settings dictionary."} {"_id": "q_9106", "text": "Runs the Django createsuperuser management command."} {"_id": "q_9107", "text": "Runs the Django loaddata management command.\n\n By default, runs on only the current site.\n\n Pass site=all to run on all sites."} {"_id": "q_9108", "text": "A generic wrapper around Django's manage command."} {"_id": "q_9109", "text": "Runs the standard Django syncdb command for one or more sites."} {"_id": "q_9110", "text": "Starts a Django management command in a screen.\n\n Parameters:\n\n command :- all arguments passed to `./manage` as a single string\n\n site :- the site to run the command for (default is all)\n\n Designed to be run like:\n\n fab dj.manage_async:\"some_management_command --force\""} {"_id": "q_9111", "text": "Looks up the root login for the given database on the given host and sets\n it to environment variables.\n\n Populates these standard variables:\n\n db_root_password\n db_root_username"} {"_id": "q_9112", "text": "Renders local settings for a specific database."} {"_id": "q_9113", "text": "Return free space in bytes."} {"_id": "q_9114", "text": "Loads database parameters from a specific named set."} {"_id": "q_9115", "text": "Determines if there's enough space to load the target database."} {"_id": "q_9116", "text": "Sets connection parameters to localhost, if not set already."} {"_id": "q_9117", "text": "Configures HDMI to support hot-plugging, so it'll work even if it wasn't\n plugged in when the Pi was originally powered up.\n\n Note, this does cause slightly higher power consumption, so if you don't need HDMI,\n don't bother with this.\n\n http://raspberrypi.stackexchange.com/a/2171/29103"} {"_id": "q_9118", "text": 
"Enables access to the camera.\n\n http://raspberrypi.stackexchange.com/questions/14229/how-can-i-enable-the-camera-without-using-raspi-config\n https://mike632t.wordpress.com/2014/06/26/raspberry-pi-camera-setup/\n\n Afterwards, test with:\n\n /opt/vc/bin/raspistill --nopreview --output image.jpg\n\n Check for compatibility with:\n\n vcgencmd get_camera\n\n which should show:\n\n supported=1 detected=1"} {"_id": "q_9119", "text": "Some images purporting to support both the Pi2 and Pi3 use the wrong kernel modules."} {"_id": "q_9120", "text": "Runs methods services have requested be run before each deployment."} {"_id": "q_9121", "text": "Applies routine, typically application-level changes to the service."} {"_id": "q_9122", "text": "Runs methods services have requested be run after each deployment."} {"_id": "q_9123", "text": "Applies one-time settings changes to the host, usually to initialize the service."} {"_id": "q_9124", "text": "Enables all modules in the current module list.\n Does not disable any currently enabled modules not in the list."} {"_id": "q_9125", "text": "Based on the number of sites per server and the number of resources on the server,\n calculates the optimal number of processes that should be allocated for each WSGI site."} {"_id": "q_9126", "text": "Instantiates a new local renderer.\n Override this to do any additional initialization."} {"_id": "q_9127", "text": "Uploads select media to an Apache accessible directory."} {"_id": "q_9128", "text": "Installs the mod-evasive Apache module for combating DDOS attacks.\n\n https://www.linode.com/docs/websites/apache-tips-and-tricks/modevasive-on-apache"} {"_id": "q_9129", "text": "Installs the mod-rpaf Apache module.\n\n https://github.com/gnif/mod_rpaf"} {"_id": "q_9130", "text": "Forwards all traffic to a page saying the server is down for maintenance."} {"_id": "q_9131", "text": "Supervisor can take a very long time to start and stop,\n so wait for it."} {"_id": "q_9132", "text": "Collects 
the configurations for all registered services and writes\n the appropriate supervisord.conf file."} {"_id": "q_9133", "text": "Clone a remote Git repository into a new directory.\n\n :param remote_url: URL of the remote repository to clone.\n :type remote_url: str\n\n :param path: Path of the working copy directory. Must not exist yet.\n :type path: str\n\n :param use_sudo: If ``True`` execute ``git`` with\n :func:`fabric.operations.sudo`, else with\n :func:`fabric.operations.run`.\n :type use_sudo: bool\n\n :param user: If ``use_sudo is True``, run :func:`fabric.operations.sudo`\n with the given user. If ``use_sudo is False`` this parameter\n has no effect.\n :type user: str"} {"_id": "q_9134", "text": "Add a remote Git repository into a directory.\n\n :param path: Path of the working copy directory. This directory must exist\n and be a Git working copy with a default remote to fetch from.\n :type path: str\n\n :param use_sudo: If ``True`` execute ``git`` with\n :func:`fabric.operations.sudo`, else with\n :func:`fabric.operations.run`.\n :type use_sudo: bool\n\n :param user: If ``use_sudo is True``, run :func:`fabric.operations.sudo`\n with the given user. If ``use_sudo is False`` this parameter\n has no effect.\n :type user: str\n\n :param name: name for the remote repository\n :type name: str\n\n :param remote_url: URL of the remote repository\n :type remote_url: str\n\n :param fetch: If ``True`` execute ``git remote add -f``\n :type fetch: bool"} {"_id": "q_9135", "text": "Fetch changes from the default remote repository and merge them.\n\n :param path: Path of the working copy directory. This directory must exist\n and be a Git working copy with a default remote to pull from.\n :type path: str\n\n :param use_sudo: If ``True`` execute ``git`` with\n :func:`fabric.operations.sudo`, else with\n :func:`fabric.operations.run`.\n :type use_sudo: bool\n\n :param user: If ``use_sudo is True``, run :func:`fabric.operations.sudo`\n with the given user. 
If ``use_sudo is False`` this parameter\n has no effect.\n :type user: str\n :param force: If ``True``, append the ``--force`` option to the command.\n :type force: bool"} {"_id": "q_9136", "text": "Retrieves all commit messages for all commits between the given commit numbers\n on the current branch."} {"_id": "q_9137", "text": "Retrieves the git commit number of the current head branch."} {"_id": "q_9138", "text": "Get the Vagrant version."} {"_id": "q_9139", "text": "Run the following tasks on a vagrant box.\n\n First, you need to import this task in your ``fabfile.py``::\n\n from fabric.api import *\n from burlap.vagrant import vagrant\n\n @task\n def some_task():\n run('echo hello')\n\n Then you can easily run tasks on your current Vagrant box::\n\n $ fab vagrant some_task"} {"_id": "q_9140", "text": "Context manager that sets a vagrant VM\n as the remote host.\n\n Use this context manager inside a task to run commands\n on your current Vagrant box::\n\n from burlap.vagrant import vagrant_settings\n\n with vagrant_settings():\n run('hostname')"} {"_id": "q_9141", "text": "Get the list of vagrant base boxes"} {"_id": "q_9142", "text": "Get the distribution family.\n\n Returns one of ``debian``, ``redhat``, ``arch``, ``gentoo``,\n ``sun``, ``other``."} {"_id": "q_9143", "text": "Gets the list of supported locales.\n\n Each locale is returned as a ``(locale, charset)`` tuple."} {"_id": "q_9144", "text": "Sets ownership and permissions for Celery-related files."} {"_id": "q_9145", "text": "This is called for each site to render a Celery config file."} {"_id": "q_9146", "text": "Ensures all tests have passed for this branch.\n\n This should be called before deployment, to prevent accidental deployment of code\n that hasn't passed automated testing."} {"_id": "q_9147", "text": "Returns true if the given host exists on the network.\n Returns false otherwise."} {"_id": "q_9148", "text": "Deletes all SSH keys on the localhost associated with the current remote host."} 
{"_id": "q_9149", "text": "Returns true if the host does not exist at the expected location and may need\n to have its initial configuration set.\n Returns false if the host exists at the expected location."} {"_id": "q_9150", "text": "Called to set default password login for systems that do not yet have passwordless\n login setup."} {"_id": "q_9151", "text": "Assigns a name to the server accessible from user space.\n\n Note, we add the name to /etc/hosts since not all programs use\n /etc/hostname to reliably identify the server hostname."} {"_id": "q_9152", "text": "Get a partition list for all disk or for selected device only\n\n Example::\n\n from burlap.disk import partitions\n\n spart = {'Linux': 0x83, 'Swap': 0x82}\n parts = partitions()\n # parts = {'/dev/sda1': 131, '/dev/sda2': 130, '/dev/sda3': 131}\n r = parts['/dev/sda1'] == spart['Linux']\n r = r and parts['/dev/sda2'] == spart['Swap']\n if r:\n print(\"You can format these partitions\")"} {"_id": "q_9153", "text": "Get a HDD device by uuid\n\n Example::\n\n from burlap.disk import getdevice_by_uuid\n\n device = getdevice_by_uuid(\"356fafdc-21d5-408e-a3e9-2b3f32cb2a8c\")\n if device:\n mount(device,'/mountpoint')"} {"_id": "q_9154", "text": "Run a MySQL query."} {"_id": "q_9155", "text": "Create a MySQL user.\n\n Example::\n\n import burlap\n\n # Create DB user if it does not exist\n if not burlap.mysql.user_exists('dbuser'):\n burlap.mysql.create_user('dbuser', password='somerandomstring')"} {"_id": "q_9156", "text": "Check if a MySQL database exists."} {"_id": "q_9157", "text": "Retrieves the path to the MySQL configuration file."} {"_id": "q_9158", "text": "This does a cross-match against the TIC catalog on MAST.\n\n Speed tests: about 10 crossmatches per second. 
(-> 3 hours for 10^5 objects\n to crossmatch).\n\n Parameters\n ----------\n\n ra,dec : np.array\n The coordinates to cross match against, all in decimal degrees.\n\n radius : float\n The cross-match radius to use, in decimal degrees.\n\n Returns\n -------\n\n dict\n Returns the match results JSON from MAST loaded into a dict."} {"_id": "q_9159", "text": "This converts the normalized fluxes in the TESS lcdicts to TESS mags.\n\n Uses the object's TESS mag stored in lcdict['objectinfo']['tessmag']::\n\n mag - object_tess_mag = -2.5 log (flux/median_flux)\n\n Parameters\n ----------\n\n lcdict : lcdict\n An `lcdict` produced by `read_tess_fitslc` or\n `consolidate_tess_fitslc`. This must have normalized fluxes in its\n measurement columns (use the `normalize` kwarg for these functions).\n\n columns : sequence of str\n The column keys of the normalized flux and background measurements in\n the `lcdict` to operate on and convert to magnitudes in TESS band (T).\n\n Returns\n -------\n\n lcdict\n The returned `lcdict` will contain extra columns corresponding to\n magnitudes for each input normalized flux/background column."} {"_id": "q_9160", "text": "This returns the periodogram plot PNG as base64, plus info as a dict.\n\n Parameters\n ----------\n\n lspinfo : dict\n This is an lspinfo dict containing results from a period-finding\n function. If it's from an astrobase period-finding function in\n periodbase, this will already be in the correct format. To use external\n period-finder results with this function, the `lspinfo` dict must be of\n the following form, with at least the keys listed below::\n\n {'periods': np.array of all periods searched by the period-finder,\n 'lspvals': np.array of periodogram power value for each period,\n 'bestperiod': a float value that is the period with the highest\n peak in the periodogram, i.e. 
the most-likely actual\n period,\n 'method': a three-letter code naming the period-finder used; must\n be one of the keys in the\n `astrobase.periodbase.METHODLABELS` dict,\n 'nbestperiods': a list of the periods corresponding to periodogram\n peaks (`nbestlspvals` below) to annotate on the\n periodogram plot so they can be called out\n visually,\n 'nbestlspvals': a list of the power values associated with\n periodogram peaks to annotate on the periodogram\n plot so they can be called out visually; should be\n the same length as `nbestperiods` above}\n\n `nbestperiods` and `nbestlspvals` must have at least 5 elements each,\n e.g. describing the five 'best' (highest power) peaks in the\n periodogram.\n\n plotdpi : int\n The resolution in DPI of the output periodogram plot to make.\n\n override_pfmethod : str or None\n This is used to set a custom label for this periodogram\n method. Normally, this is taken from the 'method' key in the input\n `lspinfo` dict, but if you want to override the output method name,\n provide this as a string here. This can be useful if you have multiple\n results you want to incorporate into a checkplotdict from a single\n period-finder (e.g. 
if you ran BLS over several period ranges\n separately).\n\n Returns\n -------\n\n dict\n Returns a dict that contains the following items::\n\n {methodname: {'periods':the period array from lspinfo,\n 'lspval': the periodogram power array from lspinfo,\n 'bestperiod': the best period from lspinfo,\n 'nbestperiods': the 'nbestperiods' list from lspinfo,\n 'nbestlspvals': the 'nbestlspvals' list from lspinfo,\n 'periodogram': base64 encoded string representation of\n the periodogram plot}}\n\n The dict is returned in this format so it can be directly incorporated\n under the period-finder's label `methodname` in a checkplotdict, using\n Python's dict `update()` method."} {"_id": "q_9161", "text": "This normalizes the magnitude time-series to a specified value.\n\n This is used to normalize time series measurements that may have large time\n gaps and vertical offsets in mag/flux measurement between these\n 'timegroups', either due to instrument changes or different filters.\n\n NOTE: this works in-place! The mags array will be replaced with normalized\n mags when this function finishes.\n\n Parameters\n ----------\n\n times,mags : array-like\n The times (assumed to be some form of JD) and mags (or flux)\n measurements to be normalized.\n\n mingap : float\n This defines how much the difference between consecutive measurements is\n allowed to be to consider them as parts of different timegroups. By\n default it is set to 4.0 days.\n\n normto : {'globalmedian', 'zero'} or a float\n Specifies the normalization type::\n\n 'globalmedian' -> norms each mag to the global median of the LC column\n 'zero' -> norms each mag to zero\n a float -> norms each mag to this specified float value.\n\n magsarefluxes : bool\n Indicates if the input `mags` array is actually an array of flux\n measurements instead of magnitude measurements. 
If this is set to True,\n then:\n\n - if `normto` is 'zero', then the median flux is divided from each\n observation's flux value to yield normalized fluxes with 1.0 as the\n global median.\n\n - if `normto` is 'globalmedian', then the global median flux value\n across the entire time series is multiplied with each measurement.\n\n - if `norm` is set to a `float`, then this number is multiplied with the\n flux value for each measurement.\n\n debugmode : bool\n If this is True, will print out verbose info on each timegroup found.\n\n Returns\n -------\n\n times,normalized_mags : np.arrays\n Normalized magnitude values after normalization. If normalization fails\n for some reason, `times` and `normalized_mags` will both be None."} {"_id": "q_9162", "text": "Calculate the total SNR of a transit assuming gaussian uncertainties.\n\n `modelmags` gets interpolated onto the cadence of `mags`. The noise is\n calculated as the 1-sigma std deviation of the residual (see below).\n\n Following Carter et al. 2009::\n\n Q = sqrt( \u0393 T ) * \u03b4 / \u03c3\n\n for Q the total SNR of the transit in the r->0 limit, where::\n\n r = Rp/Rstar,\n T = transit duration,\n \u03b4 = transit depth,\n \u03c3 = RMS of the lightcurve in transit.\n \u0393 = sampling rate\n\n Thus \u0393 * T is roughly the number of points obtained during transit.\n (This doesn't correctly account for the SNR during ingress/egress, but this\n is a second-order correction).\n\n Note this is the same total SNR as described by e.g., Kovacs et al. 
2002,\n their Equation 11.\n\n NOTE: this only works with fluxes at the moment.\n\n Parameters\n ----------\n\n times,mags : np.array\n The input flux time-series to process.\n\n modeltimes,modelmags : np.array\n A transiting planet model, either from BLS, a trapezoid model, or a\n Mandel-Agol model.\n\n atol_normalization : float\n The absolute tolerance to which the median of the passed model fluxes\n must be equal to 1.\n\n indsforrms : np.array\n An array of bools of `len(mags)` used to select points for the RMS\n measurement. If not passed, the RMS of the entire passed timeseries is\n used as an approximation. Generally, it's best to use out of transit\n points, so the RMS measurement is not model-dependent.\n\n magsarefluxes : bool\n Currently forced to be True because this function only works with\n fluxes.\n\n verbose : bool\n If True, indicates progress and warns about problems.\n\n transitdepth : float or None\n If the transit depth is known, pass it in here. Otherwise, it is\n calculated assuming OOT flux is 1.\n\n npoints_in_transits : int or None\n If the number of points in transit is known, pass it in here. Otherwise,\n the function will guess at this value.\n\n Returns\n -------\n\n (snr, transit_depth, noise) : tuple\n The returned tuple contains the calculated SNR, transit depth, and noise\n of the residual lightcurve calculated using the relation described\n above."} {"_id": "q_9163", "text": "Using Carter et al. 2009's estimate, calculate the theoretical optimal\n precision on mid-transit time measurement possible given a transit of a\n particular SNR.\n\n The relation used is::\n\n sigma_tc = Q^{-1} * T * sqrt(\u03b8/2)\n\n Q = SNR of the transit.\n T = transit duration, which is 2.14 hours from discovery paper.\n \u03b8 = \u03c4/T = ratio of ingress to total duration\n ~= (few minutes [guess]) / 2.14 hours\n\n Parameters\n ----------\n\n snr : float\n The measured signal-to-noise of the transit, e.g. 
from\n :py:func:`astrobase.periodbase.kbls.bls_stats_singleperiod` or from\n running the `.compute_stats()` method on an Astropy BoxLeastSquares\n object.\n\n t_ingress_min : float\n The ingress duration in minutes. This is t_I to t_II in Winn (2010)\n nomenclature.\n\n t_duration_hr : float\n The transit duration in hours. This is t_I to t_IV in Winn (2010)\n nomenclature.\n\n Returns\n -------\n\n float\n Returns the precision achievable for transit-center time as calculated\n from the relation above. This is in days."} {"_id": "q_9164", "text": "This gets the out-of-transit light curve points.\n\n Relevant during iterative masking of transits for multiple planet system\n search.\n\n Parameters\n ----------\n\n time,flux,err_flux : np.array\n The input flux time-series measurements and their associated measurement\n errors\n\n blsfit_savpath : str or None\n If provided as a str, indicates the path of the fit plot to make for a\n simple BLS model fit to the transit using the obtained period and epoch.\n\n trapfit_savpath : str or None\n If provided as a str, indicates the path of the fit plot to make for a\n trapezoidal transit model fit to the transit using the obtained period\n and epoch.\n\n in_out_transit_savpath : str or None\n If provided as a str, indicates the path of the plot file that will be\n made for a plot showing the in-transit points and out-of-transit points\n tagged separately.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. 
For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n magsarefluxes : bool\n This is by default True for this function, since it works on fluxes only\n at the moment.\n\n nworkers : int\n The number of parallel BLS period-finder workers to use.\n\n extra_maskfrac : float\n This is the separation (N) from in-transit points you desire, in units\n of the transit duration. `extra_maskfrac = 0` if you just want points\n inside transit, otherwise::\n\n t_starts = t_Is - N*tdur, t_ends = t_IVs + N*tdur\n\n Thus setting N=0.03 masks slightly more than the guessed transit\n duration.\n\n Returns\n -------\n\n (times_oot, fluxes_oot, errs_oot) : tuple of np.array\n The `times`, `flux`, `err_flux` values from the input at the time values\n out-of-transit are returned."} {"_id": "q_9165", "text": "This just compresses the sqlitecurve. 
Should be independent of OS."} {"_id": "q_9166", "text": "This just compresses the sqlitecurve in gzip format.\n\n FIXME: this doesn't work with gzip < 1.6 or non-GNU gzip (probably)."} {"_id": "q_9167", "text": "This just uncompresses the sqlitecurve in gzip format.\n\n FIXME: this doesn't work with gzip < 1.6 or non-GNU gzip (probably)."} {"_id": "q_9168", "text": "This just tries to apply the caster function to castee.\n\n Returns None on failure."} {"_id": "q_9169", "text": "This parses the CSV header from the CSV HAT sqlitecurve.\n\n Returns a dict that can be used to update an existing lcdict with the\n relevant metadata info needed to form a full LC."} {"_id": "q_9170", "text": "This parses the header of the LCC CSV V1 LC format."} {"_id": "q_9171", "text": "This describes the LCC CSV format light curve file.\n\n Parameters\n ----------\n\n lcdict : dict\n The input lcdict to parse for column and metadata info.\n\n returndesc : bool\n If True, returns the description string as an str instead of just\n printing it to stdout.\n\n Returns\n -------\n\n str or None\n If returndesc is True, returns the description lines as a str, otherwise\n returns nothing."} {"_id": "q_9172", "text": "This reads a HAT data server or LCC-Server produced CSV light curve\n into an lcdict.\n\n This will automatically figure out the format of the file\n provided. Currently, it can read:\n\n - legacy HAT data server CSV LCs (e.g. from\n https://hatsouth.org/planets/lightcurves.html) with an extension of the\n form: `.hatlc.csv.gz`.\n - all LCC-Server produced LCC-CSV-V1 LCs (e.g. 
from\n https://data.hatsurveys.org) with an extension of the form: `-csvlc.gz`.\n\n\n Parameters\n ----------\n\n lcfile : str\n The light curve file to read.\n\n Returns\n -------\n\n dict\n Returns an lcdict that can be read and used by many astrobase processing\n functions."} {"_id": "q_9173", "text": "This finds the time gaps in the light curve, so we can figure out which\n times are for consecutive observations and which represent gaps\n between seasons.\n\n Parameters\n ----------\n\n lctimes : np.array\n This is the input array of times, assumed to be in some form of JD.\n\n mingap : float\n This defines how much the difference between consecutive measurements is\n allowed to be to consider them as parts of different timegroups. By\n default it is set to 4.0 days.\n\n Returns\n -------\n\n tuple\n A tuple of the form below is returned, containing the number of time\n groups found and Python slice objects for each group::\n\n (ngroups, [slice(start_ind_1, end_ind_1), ...])"} {"_id": "q_9174", "text": "This is called when we're executed from the commandline.\n\n The current usage from the command-line is described below::\n\n usage: hatlc [-h] [--describe] hatlcfile\n\n read a HAT LC of any format and output to stdout\n\n positional arguments:\n hatlcfile path to the light curve you want to read and pipe to stdout\n\n optional arguments:\n -h, --help show this help message and exit\n --describe don't dump the columns, show only object info and LC metadata"} {"_id": "q_9175", "text": "This calculates the M-dwarf subtype given SDSS `r-i` and `i-z` colors.\n\n Parameters\n ----------\n\n ri_color : float\n The SDSS `r-i` color of the object.\n\n iz_color : float\n The SDSS `i-z` color of the object.\n\n Returns\n -------\n\n (subtype, index1, index2) : tuple\n `subtype`: if the star appears to be an M dwarf, will return an int\n between 0 and 9 indicating its subtype, e.g. will return 4 for an M4\n dwarf. 
If the object isn't an M dwarf, will return None\n\n `index1`, `index2`: the M-dwarf color locus value and spread of this\n object calculated from the `r-i` and `i-z` colors."} {"_id": "q_9176", "text": "This applies EPD in parallel to all LCs in the input list.\n\n Parameters\n ----------\n\n lclist : list of str\n This is the list of light curve files to run EPD on.\n\n externalparams : dict or None\n This is a dict that indicates which keys in the lcdict obtained from the\n lcfile correspond to the required external parameters. As with timecol,\n magcol, and errcol, these can be simple keys (e.g. 'rjd') or compound\n keys ('magaperture1.mags'). The dict should look something like::\n\n {'fsv':'<lcdict key>' array: S values for each observation,\n 'fdv':'<lcdict key>' array: D values for each observation,\n 'fkv':'<lcdict key>' array: K values for each observation,\n 'xcc':'<lcdict key>' array: x coords for each observation,\n 'ycc':'<lcdict key>' array: y coords for each observation,\n 'bgv':'<lcdict key>' array: sky background for each observation,\n 'bge':'<lcdict key>' array: sky background err for each observation,\n 'iha':'<lcdict key>' array: hour angle for each observation,\n 'izd':'<lcdict key>' array: zenith distance for each observation}\n\n Alternatively, if these exact keys are already present in the lcdict,\n indicate this by setting externalparams to None.\n\n timecols,magcols,errcols : lists of str\n The keys in the lcdict produced by your light curve reader function that\n correspond to the times, mags/fluxes, and associated measurement errors\n that will be used as inputs to the EPD process. If these are None, the\n default values for `timecols`, `magcols`, and `errcols` for your light\n curve format will be used here.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. 
This will be used to look up how to find and read the light\n curve files.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n epdsmooth_sigclip : float or int or sequence of two floats/ints or None\n This specifies how to sigma-clip the input LC before fitting the EPD\n function to it.\n\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n epdsmooth_windowsize : int\n This is the number of LC points to smooth over to generate a smoothed\n light curve that will be used to fit the EPD function.\n\n epdsmooth_func : Python function\n This sets the smoothing filter function to use. A Savitzky-Golay filter\n is used to smooth the light curve by default. The functions that can be\n used with this kwarg are listed in `varbase.trends`. 
If you want to use\n your own function, it MUST have the following signature::\n\n def smoothfunc(mags_array, window_size, **extraparams)\n\n and return a numpy array of the same size as `mags_array` with the\n smoothed time-series. Any extra params can be provided using the\n `extraparams` dict.\n\n epdsmooth_extraparams : dict\n This is a dict of any extra filter params to supply to the smoothing\n function.\n\n nworkers : int\n The number of parallel workers to launch when processing the LCs.\n\n maxworkertasks : int\n The maximum number of tasks a parallel worker will complete before it is\n replaced with a new one (sometimes helps with memory-leaks).\n\n Returns\n -------\n\n dict\n Returns a dict organized by all the keys in the input `magcols` list,\n containing lists of EPD pickle light curves for that `magcol`.\n\n Notes\n -----\n\n - S -> measure of PSF sharpness (~1/sigma^2, so smaller S = wider PSF)\n - D -> measure of PSF ellipticity in xy direction\n - K -> measure of PSF ellipticity in cross direction\n\n S, D, K are related to the PSF's variance and covariance, see eqn 30-33 in\n A. Pal's thesis: https://arxiv.org/abs/0906.3486"} {"_id": "q_9177", "text": "This applies EPD in parallel to all LCs in a directory.\n\n Parameters\n ----------\n\n lcdir : str\n The light curve directory to process.\n\n externalparams : dict or None\n This is a dict that indicates which keys in the lcdict obtained from the\n lcfile correspond to the required external parameters. As with timecol,\n magcol, and errcol, these can be simple keys (e.g. 'rjd') or compound\n keys ('magaperture1.mags'). 
The dict should look something like::\n\n {'fsv':'<lcdict key>' array: S values for each observation,\n 'fdv':'<lcdict key>' array: D values for each observation,\n 'fkv':'<lcdict key>' array: K values for each observation,\n 'xcc':'<lcdict key>' array: x coords for each observation,\n 'ycc':'<lcdict key>' array: y coords for each observation,\n 'bgv':'<lcdict key>' array: sky background for each observation,\n 'bge':'<lcdict key>' array: sky background err for each observation,\n 'iha':'<lcdict key>' array: hour angle for each observation,\n 'izd':'<lcdict key>' array: zenith distance for each observation}\n\n lcfileglob : str or None\n A UNIX fileglob to use to select light curve files in `lcdir`. If this\n is not None, the value provided will override the default fileglob for\n your light curve format.\n\n timecols,magcols,errcols : lists of str\n The keys in the lcdict produced by your light curve reader function that\n correspond to the times, mags/fluxes, and associated measurement errors\n that will be used as inputs to the EPD process. If these are None, the\n default values for `timecols`, `magcols`, and `errcols` for your light\n curve format will be used here.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. 
Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n epdsmooth_sigclip : float or int or sequence of two floats/ints or None\n This specifies how to sigma-clip the input LC before fitting the EPD\n function to it.\n\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n epdsmooth_windowsize : int\n This is the number of LC points to smooth over to generate a smoothed\n light curve that will be used to fit the EPD function.\n\n epdsmooth_func : Python function\n This sets the smoothing filter function to use. A Savitzky-Golay filter\n is used to smooth the light curve by default. The functions that can be\n used with this kwarg are listed in `varbase.trends`. If you want to use\n your own function, it MUST have the following signature::\n\n def smoothfunc(mags_array, window_size, **extraparams)\n\n and return a numpy array of the same size as `mags_array` with the\n smoothed time-series. 
Any extra params can be provided using the\n `extraparams` dict.\n\n epdsmooth_extraparams : dict\n This is a dict of any extra filter params to supply to the smoothing\n function.\n\n nworkers : int\n The number of parallel workers to launch when processing the LCs.\n\n maxworkertasks : int\n The maximum number of tasks a parallel worker will complete before it is\n replaced with a new one (sometimes helps with memory-leaks).\n\n Returns\n -------\n\n dict\n Returns a dict organized by all the keys in the input `magcols` list,\n containing lists of EPD pickle light curves for that `magcol`.\n\n Notes\n -----\n\n - S -> measure of PSF sharpness (~1/sigma^2, so smaller S = wider PSF)\n - D -> measure of PSF ellipticity in xy direction\n - K -> measure of PSF ellipticity in cross direction\n\n S, D, K are related to the PSF's variance and covariance, see eqn 30-33 in\n A. Pal's thesis: https://arxiv.org/abs/0906.3486"} {"_id": "q_9178", "text": "This wraps Astropy's BoxLeastSquares for use with bls_parallel_pfind below.\n\n `task` is a tuple::\n\n task[0] = times\n task[1] = mags\n task[2] = errs\n task[3] = magsarefluxes\n\n task[4] = minfreq\n task[5] = nfreq\n task[6] = stepsize\n\n task[7] = ndurations\n task[8] = mintransitduration\n task[9] = maxtransitduration\n\n task[10] = blsobjective\n task[11] = blsmethod\n task[12] = blsoversample"} {"_id": "q_9179", "text": "This wraps starfeatures."} {"_id": "q_9180", "text": "This drives the `get_starfeatures` function for a collection of LCs.\n\n Parameters\n ----------\n\n lclist : list of str\n The list of light curve file names to process.\n\n outdir : str\n The output directory where the results will be placed.\n\n lc_catalog_pickle : str\n The path to a catalog containing a dict with at least:\n\n - an object ID array accessible with `dict['objects']['objectid']`\n\n - an LC filename array accessible with `dict['objects']['lcfname']`\n\n - a `scipy.spatial.KDTree` or `cKDTree` object to use for finding\n neighbors 
for each object accessible with `dict['kdtree']`\n\n A catalog pickle of the form needed can be produced using\n :py:func:`astrobase.lcproc.catalogs.make_lclist` or\n :py:func:`astrobase.lcproc.catalogs.filter_lclist`.\n\n neighbor_radius_arcsec : float\n This indicates the radius in arcsec to search for neighbors for this\n object using the light curve catalog's `kdtree`, `objlist`, `lcflist`,\n and in GAIA.\n\n maxobjects : int\n The number of objects to process from `lclist`.\n\n deredden : bool\n This controls if the colors and any color classifications will be\n dereddened using 2MASS DUST.\n\n custom_bandpasses : dict or None\n This is a dict used to define any custom bandpasses in the\n `in_objectinfo` dict you want to make this function aware of and\n generate colors for. Use the format below for this dict::\n\n {\n '<bandpass_key_1>':{'dustkey':'<twomass_dust_key_1>',\n 'label':'<band_label_1>',\n 'colors':[['<bandkey1>-<bandkey2>',\n '<band1> - <band2>'],\n ['<bandkey3>-<bandkey4>',\n '<band3> - <band4>']]},\n .\n ...\n .\n '<bandpass_key_N>':{'dustkey':'<twomass_dust_key_N>',\n 'label':'<band_label_N>',\n 'colors':[['<bandkey1>-<bandkey2>',\n '<band1> - <band2>'],\n ['<bandkey3>-<bandkey4>',\n '<band3> - <band4>']]},\n }\n\n Where:\n\n `bandpass_key` is a key to use to refer to this bandpass in the\n `objectinfo` dict, e.g. 'sdssg' for SDSS g band\n\n `twomass_dust_key` is the key to use in the 2MASS DUST result table for\n reddening per band-pass. For example, given the following DUST result\n table (using http://irsa.ipac.caltech.edu/applications/DUST/)::\n\n |Filter_name|LamEff |A_over_E_B_V_SandF|A_SandF|A_over_E_B_V_SFD|A_SFD|\n |char |float |float |float |float |float|\n | |microns| |mags | |mags |\n CTIO U 0.3734 4.107 0.209 4.968 0.253\n CTIO B 0.4309 3.641 0.186 4.325 0.221\n CTIO V 0.5517 2.682 0.137 3.240 0.165\n .\n .\n ...\n\n The `twomass_dust_key` for 'vmag' would be 'CTIO V'. If you want to\n skip DUST lookup and want to pass in a specific reddening magnitude\n for your bandpass, use a float for the value of\n `twomass_dust_key`. 
If you want to skip DUST lookup entirely for\n this bandpass, use None for the value of `twomass_dust_key`.\n\n `band_label` is the label to use for this bandpass, e.g. 'W1' for\n WISE-1 band, 'u' for SDSS u, etc.\n\n The 'colors' list contains color definitions for all colors you want\n to generate using this bandpass. This list contains elements of the\n form::\n\n ['<bandkey1>-<bandkey2>','<band1> - <band2>']\n\n where the first item is the bandpass keys making up this color,\n and the second item is the label for this color to be used by the\n frontends. An example::\n\n ['sdssu-sdssg','u - g']\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. 
Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n Returns\n -------\n\n list of str\n A list of all star features pickles produced."} {"_id": "q_9181", "text": "This runs `get_starfeatures` in parallel for all light curves in `lclist`.\n\n Parameters\n ----------\n\n lclist : list of str\n The list of light curve file names to process.\n\n outdir : str\n The output directory where the results will be placed.\n\n lc_catalog_pickle : str\n The path to a catalog containing a dict with at least:\n\n - an object ID array accessible with `dict['objects']['objectid']`\n\n - an LC filename array accessible with `dict['objects']['lcfname']`\n\n - a `scipy.spatial.KDTree` or `cKDTree` object to use for finding\n neighbors for each object accessible with `dict['kdtree']`\n\n A catalog pickle of the form needed can be produced using\n :py:func:`astrobase.lcproc.catalogs.make_lclist` or\n :py:func:`astrobase.lcproc.catalogs.filter_lclist`.\n\n neighbor_radius_arcsec : float\n This indicates the radius in arcsec to search for neighbors for this\n object using the light curve catalog's `kdtree`, `objlist`, `lcflist`,\n and in GAIA.\n\n maxobjects : int\n The number of objects to process from `lclist`.\n\n deredden : bool\n This controls if the colors and any color classifications will be\n dereddened using 2MASS DUST.\n\n custom_bandpasses : dict or None\n This is a dict used to define any custom bandpasses in the\n `in_objectinfo` dict you want to make this function aware of and\n generate colors for. Use the format below for this dict::\n\n {\n '<bandpass_key_1>':{'dustkey':'<twomass_dust_key_1>',\n 'label':'<band_label_1>',\n 'colors':[['<bandkey1>-<bandkey2>',\n '<band1> - <band2>'],\n ['<bandkey3>-<bandkey4>',\n '<band3> - <band4>']]},\n .\n ...\n .\n '<bandpass_key_N>':{'dustkey':'<twomass_dust_key_N>',\n 'label':'<band_label_N>',\n 'colors':[['<bandkey1>-<bandkey2>',\n '<band1> - <band2>'],\n ['<bandkey3>-<bandkey4>',\n '<band3> - <band4>']]},\n }\n\n Where:\n\n `bandpass_key` is a key to use to refer to this bandpass in the\n `objectinfo` dict, e.g. 
'sdssg' for SDSS g band\n\n `twomass_dust_key` is the key to use in the 2MASS DUST result table for\n reddening per band-pass. For example, given the following DUST result\n table (using http://irsa.ipac.caltech.edu/applications/DUST/)::\n\n |Filter_name|LamEff |A_over_E_B_V_SandF|A_SandF|A_over_E_B_V_SFD|A_SFD|\n |char |float |float |float |float |float|\n | |microns| |mags | |mags |\n CTIO U 0.3734 4.107 0.209 4.968 0.253\n CTIO B 0.4309 3.641 0.186 4.325 0.221\n CTIO V 0.5517 2.682 0.137 3.240 0.165\n .\n .\n ...\n\n The `twomass_dust_key` for 'vmag' would be 'CTIO V'. If you want to\n skip DUST lookup and want to pass in a specific reddening magnitude\n for your bandpass, use a float for the value of\n `twomass_dust_key`. If you want to skip DUST lookup entirely for\n this bandpass, use None for the value of `twomass_dust_key`.\n\n `band_label` is the label to use for this bandpass, e.g. 'W1' for\n WISE-1 band, 'u' for SDSS u, etc.\n\n The 'colors' list contains color definitions for all colors you want\n to generate using this bandpass. This list contains elements of the\n form::\n\n ['<bandkey1>-<bandkey2>','<band1> - <band2>']\n\n where the first item is the bandpass keys making up this color,\n and the second item is the label for this color to be used by the\n frontends. An example::\n\n ['sdssu-sdssg','u - g']\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. 
Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n nworkers : int\n The number of parallel workers to launch.\n\n Returns\n -------\n\n dict\n A dict with key:val pairs of the input light curve filename and the\n output star features pickle for each LC processed."} {"_id": "q_9182", "text": "This runs parallel star feature extraction for a directory of LCs.\n\n Parameters\n ----------\n\n lcdir : str\n The directory to search for light curves.\n\n outdir : str\n The output directory where the results will be placed.\n\n lc_catalog_pickle : str\n The path to a catalog containing a dict with at least:\n\n - an object ID array accessible with `dict['objects']['objectid']`\n\n - an LC filename array accessible with `dict['objects']['lcfname']`\n\n - a `scipy.spatial.KDTree` or `cKDTree` object to use for finding\n neighbors for each object accessible with `dict['kdtree']`\n\n A catalog pickle of the form needed can be produced using\n :py:func:`astrobase.lcproc.catalogs.make_lclist` or\n :py:func:`astrobase.lcproc.catalogs.filter_lclist`.\n\n neighbor_radius_arcsec : float\n This indicates the radius in arcsec to search for neighbors for this\n object using the light curve catalog's `kdtree`, `objlist`, `lcflist`,\n and in GAIA.\n\n fileglob : str\n The UNIX file glob to use to search for the light curves in `lcdir`. If\n None, the default value for the light curve format specified will be\n used.\n\n maxobjects : int\n The number of objects to process from `lclist`.\n\n deredden : bool\n This controls if the colors and any color classifications will be\n dereddened using 2MASS DUST.\n\n custom_bandpasses : dict or None\n This is a dict used to define any custom bandpasses in the\n `in_objectinfo` dict you want to make this function aware of and\n generate colors for. 
Use the format below for this dict::\n\n {\n '<bandpass_key_1>':{'dustkey':'<twomass_dust_key_1>',\n 'label':'<band_label_1>',\n 'colors':[['<bandkey1>-<bandkey2>',\n '<band1> - <band2>'],\n ['<bandkey3>-<bandkey4>',\n '<band3> - <band4>']]},\n .\n ...\n .\n '<bandpass_key_N>':{'dustkey':'<twomass_dust_key_N>',\n 'label':'<band_label_N>',\n 'colors':[['<bandkey1>-<bandkey2>',\n '<band1> - <band2>'],\n ['<bandkey3>-<bandkey4>',\n '<band3> - <band4>']]},\n }\n\n Where:\n\n `bandpass_key` is a key to use to refer to this bandpass in the\n `objectinfo` dict, e.g. 'sdssg' for SDSS g band\n\n `twomass_dust_key` is the key to use in the 2MASS DUST result table for\n reddening per band-pass. For example, given the following DUST result\n table (using http://irsa.ipac.caltech.edu/applications/DUST/)::\n\n |Filter_name|LamEff |A_over_E_B_V_SandF|A_SandF|A_over_E_B_V_SFD|A_SFD|\n |char |float |float |float |float |float|\n | |microns| |mags | |mags |\n CTIO U 0.3734 4.107 0.209 4.968 0.253\n CTIO B 0.4309 3.641 0.186 4.325 0.221\n CTIO V 0.5517 2.682 0.137 3.240 0.165\n .\n .\n ...\n\n The `twomass_dust_key` for 'vmag' would be 'CTIO V'. If you want to\n skip DUST lookup and want to pass in a specific reddening magnitude\n for your bandpass, use a float for the value of\n `twomass_dust_key`. If you want to skip DUST lookup entirely for\n this bandpass, use None for the value of `twomass_dust_key`.\n\n `band_label` is the label to use for this bandpass, e.g. 'W1' for\n WISE-1 band, 'u' for SDSS u, etc.\n\n The 'colors' list contains color definitions for all colors you want\n to generate using this bandpass. This list contains elements of the\n form::\n\n ['<bandkey1>-<bandkey2>','<band1> - <band2>']\n\n where the first item is the bandpass keys making up this color,\n and the second item is the label for this color to be used by the\n frontends. An example::\n\n ['sdssu-sdssg','u - g']\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. 
This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n nworkers : int\n The number of parallel workers to launch.\n\n Returns\n -------\n\n dict\n A dict with key:val pairs of the input light curve filename and the\n output star features pickle for each LC processed."} {"_id": "q_9183", "text": "This is the parallel worker for the function below.\n\n task[0] = frequency for this worker\n task[1] = times array\n task[2] = mags array\n task[3] = fold_time\n task[4] = j_range\n task[5] = keep_threshold_1\n task[6] = keep_threshold_2\n task[7] = phasebinsize\n\n we don't need errs for the worker."} {"_id": "q_9184", "text": "This drives the periodicfeatures collection for a list of periodfinding\n pickles.\n\n Parameters\n ----------\n\n pfpkl_list : list of str\n The list of period-finding pickles to use.\n\n lcbasedir : str\n The base directory where the associated light curves are located.\n\n outdir : str\n The directory where the results will be written.\n\n starfeaturesdir : str or None\n The directory containing the `starfeatures-<objectid>.pkl` files for\n each object to use to calculate neighbor proximity light curve features.\n\n fourierorder : int\n The Fourier order to use to generate a sinusoidal function and fit that\n to the phased light curve.\n\n transitparams : list of floats\n The transit depth, duration, and ingress duration to use to generate a\n trapezoid planet transit model fit to the phased light curve. 
The period\n used is the one provided in `period`, while the epoch is automatically\n obtained from a spline fit to the phased light curve.\n\n ebparams : list of floats\n The primary eclipse depth, eclipse duration, the primary-secondary depth\n ratio, and the phase of the secondary eclipse to use to generate an\n eclipsing binary model fit to the phased light curve. The period used is\n the one provided in `period`, while the epoch is automatically obtained\n from a spline fit to the phased light curve.\n\n pdiff_threshold : float\n This is the max difference between periods to consider them the same.\n\n sidereal_threshold : float\n This is the max difference between any of the 'best' periods and the\n sidereal day periods to consider them the same.\n\n sampling_peak_multiplier : float\n This is the minimum multiplicative factor of a 'best' period's\n normalized periodogram peak over the sampling periodogram peak at the\n same period required to accept the 'best' period as possibly real.\n\n sampling_startp, sampling_endp : float\n If the `pgramlist` doesn't have a time-sampling Lomb-Scargle\n periodogram, it will be obtained automatically. Use these kwargs to\n control the minimum and maximum period interval to be searched when\n generating this periodogram.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in calculating the features.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in calculating the features.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in calculating the features.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. 
This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. 
Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n verbose : bool\n If True, will indicate progress while working.\n\n maxobjects : int\n The total number of objects to process from `pfpkl_list`.\n\n Returns\n -------\n\n Nothing."} {"_id": "q_9185", "text": "This runs periodic feature generation in parallel for all periodfinding\n pickles in the input list.\n\n Parameters\n ----------\n\n pfpkl_list : list of str\n The list of period-finding pickles to use.\n\n lcbasedir : str\n The base directory where the associated light curves are located.\n\n outdir : str\n The directory where the results will be written.\n\n starfeaturesdir : str or None\n The directory containing the `starfeatures-.pkl` files for\n each object to use to calculate neighbor proximity light curve features.\n\n fourierorder : int\n The Fourier order to use to generate a sinusoidal function and fit it to\n the phased light curve.\n\n transitparams : list of floats\n The transit depth, duration, and ingress duration to use to generate a\n trapezoid planet transit model fit to the phased light curve. The period\n used is the one provided in `period`, while the epoch is automatically\n obtained from a spline fit to the phased light curve.\n\n ebparams : list of floats\n The primary eclipse depth, eclipse duration, the primary-secondary depth\n ratio, and the phase of the secondary eclipse to use to generate an\n eclipsing binary model fit to the phased light curve. 
The period used is\n the one provided in `period`, while the epoch is automatically obtained\n from a spline fit to the phased light curve.\n\n pdiff_threshold : float\n This is the max difference between periods to consider them the same.\n\n sidereal_threshold : float\n This is the max difference between any of the 'best' periods and the\n sidereal day periods to consider them the same.\n\n sampling_peak_multiplier : float\n This is the minimum multiplicative factor of a 'best' period's\n normalized periodogram peak over the sampling periodogram peak at the\n same period required to accept the 'best' period as possibly real.\n\n sampling_startp, sampling_endp : float\n If the `pgramlist` doesn't have a time-sampling Lomb-Scargle\n periodogram, it will be obtained automatically. Use these kwargs to\n control the minimum and maximum period interval to be searched when\n generating this periodogram.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in calculating the features.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in calculating the features.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in calculating the features.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. 
Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]` will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n verbose : bool\n If True, will indicate progress while working.\n\n maxobjects : int\n The total number of objects to process from `pfpkl_list`.\n\n nworkers : int\n The number of parallel workers to launch to process the input.\n\n Returns\n -------\n\n dict\n A dict containing key: val pairs of the input period-finder result and\n the output periodic feature result pickles for each input pickle is\n returned."} {"_id": "q_9186", "text": "This runs parallel periodicfeature extraction for a directory of\n periodfinding result pickles.\n\n Parameters\n ----------\n\n pfpkl_dir : str\n The directory containing the pickles to process.\n\n lcbasedir : str\n The directory where all of the associated light curve files are located.\n\n outdir : str\n The directory where all the output will be written.\n\n pfpkl_glob : str\n The UNIX file glob to use to search for period-finder result pickles in\n `pfpkl_dir`.\n\n 
starfeaturesdir : str or None\n The directory containing the `starfeatures-.pkl` files for\n each object to use to calculate neighbor proximity light curve features.\n\n fourierorder : int\n The Fourier order to use to generate a sinusoidal function and fit it to\n the phased light curve.\n\n transitparams : list of floats\n The transit depth, duration, and ingress duration to use to generate a\n trapezoid planet transit model fit to the phased light curve. The period\n used is the one provided in `period`, while the epoch is automatically\n obtained from a spline fit to the phased light curve.\n\n ebparams : list of floats\n The primary eclipse depth, eclipse duration, the primary-secondary depth\n ratio, and the phase of the secondary eclipse to use to generate an\n eclipsing binary model fit to the phased light curve. The period used is\n the one provided in `period`, while the epoch is automatically obtained\n from a spline fit to the phased light curve.\n\n pdiff_threshold : float\n This is the max difference between periods to consider them the same.\n\n sidereal_threshold : float\n This is the max difference between any of the 'best' periods and the\n sidereal day periods to consider them the same.\n\n sampling_peak_multiplier : float\n This is the minimum multiplicative factor of a 'best' period's\n normalized periodogram peak over the sampling periodogram peak at the\n same period required to accept the 'best' period as possibly real.\n\n sampling_startp, sampling_endp : float\n If the `pgramlist` doesn't have a time-sampling Lomb-Scargle\n periodogram, it will be obtained automatically. 
Use these kwargs to\n control the minimum and maximum period interval to be searched when\n generating this periodogram.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in calculating the features.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in calculating the features.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in calculating the features.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]` will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. 
Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n verbose : bool\n If True, will indicate progress while working.\n\n maxobjects : int\n The total number of objects to process from `pfpkl_list`.\n\n nworkers : int\n The number of parallel workers to launch to process the input.\n\n Returns\n -------\n\n dict\n A dict containing key: val pairs of the input period-finder result and\n the output periodic feature result pickles for each input pickle is\n returned."} {"_id": "q_9187", "text": "This parses the header for a catalog file and returns it as a file object.\n\n Parameters\n ----------\n\n xc : str\n The file name of an xmatch catalog prepared previously.\n\n xk : list of str\n This is a list of column names to extract from the xmatch catalog.\n\n Returns\n -------\n\n tuple\n The tuple returned is of the form::\n\n (infd: the file object associated with the opened xmatch catalog,\n catdefdict: a dict describing the catalog column definitions,\n catcolinds: column number indices of the catalog,\n catcoldtypes: the numpy dtypes of the catalog columns,\n catcolnames: the names of each catalog column,\n catcolunits: the units associated with each catalog column)"} {"_id": "q_9188", "text": "This loads the external xmatch catalogs into a dict for use in an xmatch.\n\n Parameters\n ----------\n\n xmatchto : list of str\n This is a list of paths to all the catalog text files that will be\n loaded.\n\n The text files must be 'CSVs' that use the '|' character as the\n separator between columns. These files should all begin with a header in\n JSON format on lines starting with the '#' character. This header defines\n the catalog and contains the name of the catalog and the column\n definitions. 
Column definitions must have the column name and the numpy\n dtype of the columns (in the same format as that expected for the\n numpy.genfromtxt function). Any line that does not begin with '#' is\n assumed to be part of the columns in the catalog. An example is shown\n below::\n\n # {\"name\":\"NSVS catalog of variable stars\",\n # \"columns\":[\n # {\"key\":\"objectid\", \"dtype\":\"U20\", \"name\":\"Object ID\", \"unit\": null},\n # {\"key\":\"ra\", \"dtype\":\"f8\", \"name\":\"RA\", \"unit\":\"deg\"},\n # {\"key\":\"decl\",\"dtype\":\"f8\", \"name\": \"Declination\", \"unit\":\"deg\"},\n # {\"key\":\"sdssr\",\"dtype\":\"f8\",\"name\":\"SDSS r\", \"unit\":\"mag\"},\n # {\"key\":\"vartype\",\"dtype\":\"U20\",\"name\":\"Variable type\", \"unit\":null}\n # ],\n # \"colra\":\"ra\",\n # \"coldec\":\"decl\",\n # \"description\":\"Contains variable stars from the NSVS catalog\"}\n objectid1 | 45.0 | -20.0 | 12.0 | detached EB\n objectid2 | 145.0 | 23.0 | 10.0 | RRab\n objectid3 | 12.0 | 11.0 | 14.0 | Cepheid\n .\n .\n .\n\n xmatchkeys : list of lists\n This is the list of lists of column names (as str) to get out of each\n `xmatchto` catalog. This should be the same length as `xmatchto` and\n each element here will apply to the respective file in `xmatchto`.\n\n outfile : str or None\n If this is not None, set this to the name of the pickle to write the\n collected xmatch catalogs to. This pickle can then be loaded\n transparently by the :py:func:`astrobase.checkplot.pkl.checkplot_dict`,\n :py:func:`astrobase.checkplot.pkl.checkplot_pickle` functions to provide\n xmatch info to the\n :py:func:`astrobase.checkplot.pkl_xmatch.xmatch_external_catalogs`\n function below.\n\n If this is None, will return the loaded xmatch catalogs directly. 
This\n will be a huge dict, so make sure you have enough RAM.\n\n Returns\n -------\n\n str or dict\n Based on the `outfile` kwarg, will either return the path to a collected\n xmatch pickle file or the collected xmatch dict."} {"_id": "q_9189", "text": "Wraps the input angle to 360.0 degrees.\n\n Parameters\n ----------\n\n angle : float\n The angle to wrap around 360.0 deg.\n\n radians : bool\n If True, will assume that the input is in radians. The output will then\n also be in radians.\n\n Returns\n -------\n\n float\n Wrapped angle. If radians is True: input is assumed to be in radians,\n output is also in radians."} {"_id": "q_9190", "text": "Calculates the great circle angular distance between two coords.\n\n This calculates the great circle angular distance in arcseconds between two\n coordinates (ra1,dec1) and (ra2,dec2). This is basically a clone of GCIRC\n from the IDL Astrolib.\n\n Parameters\n ----------\n\n ra1,dec1 : float or array-like\n The first coordinate's right ascension and declination value(s) in\n decimal degrees.\n\n ra2,dec2 : float or array-like\n The second coordinate's right ascension and declination value(s) in\n decimal degrees.\n\n Returns\n -------\n\n float or array-like\n Great circle distance between the two coordinates in arcseconds.\n\n Notes\n -----\n\n If (`ra1`, `dec1`) is scalar and (`ra2`, `dec2`) is scalar: the result is a\n float distance in arcseconds.\n\n If (`ra1`, `dec1`) is scalar and (`ra2`, `dec2`) is array-like: the result\n is an np.array with distance in arcseconds between (`ra1`, `dec1`) and each\n element of (`ra2`, `dec2`).\n\n If (`ra1`, `dec1`) is array-like and (`ra2`, `dec2`) is scalar: the result\n is an np.array with distance in arcseconds between (`ra2`, `dec2`) and each\n element of (`ra1`, `dec1`).\n\n If (`ra1`, `dec1`) and (`ra2`, `dec2`) are both array-like: the result is an\n np.array with the pair-wise distance in arcseconds between each element of\n the two coordinate lists. 
In this case, if the input array-likes are not the\n same length, then excess elements of the longer one will be ignored."} {"_id": "q_9191", "text": "This calculates the total proper motion of an object.\n\n Parameters\n ----------\n\n pmra : float or array-like\n The proper motion(s) in right ascension, measured in mas/yr.\n\n pmdecl : float or array-like\n The proper motion(s) in declination, measured in mas/yr.\n\n decl : float or array-like\n The declination of the object(s) in decimal degrees.\n\n Returns\n -------\n\n float or array-like\n The total proper motion(s) of the object(s) in mas/yr."} {"_id": "q_9192", "text": "This converts from galactic coords to equatorial coordinates.\n\n Parameters\n ----------\n\n gl : float or array-like\n Galactic longitude values(s) in decimal degrees.\n\n gb : float or array-like\n Galactic latitude value(s) in decimal degrees.\n\n Returns\n -------\n\n tuple of (float, float) or tuple of (np.array, np.array)\n The equatorial coordinates (RA, DEC) for each element of the input\n (`gl`, `gb`) in decimal degrees. 
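The total proper motion calculation documented above can be sketched in a few lines. This is a hedged illustration of the standard relation (the function name `total_pm` is illustrative, and it assumes `pmra` is the raw proper motion in RA, not yet scaled by the cosine of the declination), not the library's exact implementation:

```python
import numpy as np

def total_pm(pmra, pmdecl, decl):
    # total mu = sqrt(mu_decl^2 + (mu_RA * cos(decl))^2)
    # pmra and pmdecl in mas/yr; decl in decimal degrees
    return np.sqrt(pmdecl**2 + (pmra * np.cos(np.radians(decl)))**2)
```

On the celestial equator (decl = 0) this reduces to the plain Euclidean norm of the two components.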
These are reported in the ICRS frame."} {"_id": "q_9193", "text": "This returns the image-plane projected xi-eta coords for inra, indecl.\n\n Parameters\n ----------\n\n inra,indecl : array-like\n The equatorial coordinates to get the xi, eta coordinates for in decimal\n degrees or radians.\n\n incenterra,incenterdecl : float\n The center coordinate values to use to calculate the plane-projected\n coordinates around.\n\n deg : bool\n If this is True, the input angles are assumed to be in degrees and the\n output is in degrees as well.\n\n Returns\n -------\n\n tuple of np.arrays\n This is the (`xi`, `eta`) coordinate pairs corresponding to the\n image-plane projected coordinates for each pair of input equatorial\n coordinates in (`inra`, `indecl`)."} {"_id": "q_9194", "text": "This generates fake planet transit light curves.\n\n Parameters\n ----------\n\n times : np.array\n This is an array of time values that will be used as the time base.\n\n mags,errs : np.array\n These arrays will have the model added to them. 
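The xi-eta image-plane projection described above is conventionally a gnomonic (tangent-plane) projection; a minimal sketch under that assumption (the function name and sign conventions here are illustrative, not the library's implementation):

```python
import numpy as np

def equatorial_to_xieta(ra, decl, cra, cdecl):
    # gnomonic (tangent-plane) projection of (ra, decl) about the
    # center (cra, cdecl); all angles in decimal degrees, output in degrees
    r, d = np.radians(ra), np.radians(decl)
    r0, d0 = np.radians(cra), np.radians(cdecl)
    cosc = np.sin(d0)*np.sin(d) + np.cos(d0)*np.cos(d)*np.cos(r - r0)
    xi = np.cos(d)*np.sin(r - r0) / cosc
    eta = (np.cos(d0)*np.sin(d) - np.sin(d0)*np.cos(d)*np.cos(r - r0)) / cosc
    return np.degrees(xi), np.degrees(eta)
```

Projecting the center coordinate itself yields (0, 0), and small offsets near the center map almost linearly to xi and eta.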
If either is\n None, `np.full_like(times, 0.0)` will be used as a substitute and the model\n light curve will be centered around 0.0.\n\n paramdists : dict\n This is a dict containing parameter distributions to use for the\n model params, containing the following keys ::\n\n {'transitperiod', 'transitdepth', 'transitduration'}\n\n The values of these keys should all be 'frozen' scipy.stats distribution\n objects, e.g.:\n\n https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions\n The variability epoch will be automatically chosen from a uniform\n distribution between `times.min()` and `times.max()`.\n\n The ingress duration will be automatically chosen from a uniform\n distribution ranging from 0.05 to 0.5 of the transitduration.\n\n The transitdepth will be flipped automatically as appropriate if\n `magsarefluxes=True`.\n\n magsarefluxes : bool\n If the generated time series is meant to be a flux time-series, set this\n to True to get the correct sign of variability amplitude.\n\n Returns\n -------\n\n dict\n A dict of the form below is returned::\n\n {'vartype': 'planet',\n 'params': {'transitperiod': generated value of period,\n 'transitepoch': generated value of epoch,\n 'transitdepth': generated value of transit depth,\n 'transitduration': generated value of transit duration,\n 'ingressduration': generated value of transit ingress\n duration},\n 'times': the model times,\n 'mags': the model mags,\n 'errs': the model errs,\n 'varperiod': the generated period of variability == 'transitperiod',\n 'varamplitude': the generated amplitude of\n variability == 'transitdepth'}"} {"_id": "q_9195", "text": "This generates fake flare light curves.\n\n Parameters\n ----------\n\n times : np.array\n This is an array of time values that will be used as the time base.\n\n mags,errs : np.array\n These arrays will have the model added to them. 
If either is\n None, `np.full_like(times, 0.0)` will be used as a substitute and the model\n light curve will be centered around 0.0.\n\n paramdists : dict\n This is a dict containing parameter distributions to use for the\n model params, containing the following keys ::\n\n {'amplitude', 'nflares', 'risestdev', 'decayconst'}\n\n The values of these keys should all be 'frozen' scipy.stats distribution\n objects, e.g.:\n\n https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions\n The `flare_peak_time` for each flare will be generated automatically\n between `times.min()` and `times.max()` using a uniform distribution.\n\n The `amplitude` will be flipped automatically as appropriate if\n `magsarefluxes=True`.\n\n magsarefluxes : bool\n If the generated time series is meant to be a flux time-series, set this\n to True to get the correct sign of variability amplitude.\n\n Returns\n -------\n\n dict\n A dict of the form below is returned::\n\n {'vartype': 'flare',\n 'params': {'amplitude': generated value of flare amplitudes,\n 'nflares': generated value of number of flares,\n 'risestdev': generated value of stdev of rise time,\n 'decayconst': generated value of decay constant,\n 'peaktime': generated value of flare peak time},\n 'times': the model times,\n 'mags': the model mags,\n 'errs': the model errs,\n 'varamplitude': the generated amplitude of\n variability == 'amplitude'}"} {"_id": "q_9196", "text": "This wraps `process_fakelc` for `make_fakelc_collection` below.\n\n Parameters\n ----------\n\n task : tuple\n This is of the form::\n\n task[0] = lcfile\n task[1] = outdir\n task[2] = magrms\n task[3] = dict with keys: {'lcformat', 'timecols', 'magcols',\n 'errcols', 'randomizeinfo'}\n\n Returns\n -------\n\n tuple\n This returns a tuple of the form::\n\n (fakelc_fpath,\n fakelc_lcdict['columns'],\n fakelc_lcdict['objectinfo'],\n fakelc_lcdict['moments'])"} {"_id": "q_9197", "text": "This adds variability and noise to all fake LCs in 
`simbasedir`.\n\n If an object is marked as variable in the `fakelcs-info.pkl` file in\n `simbasedir`, a variable signal will be added to its light curve based on\n its selected type, default period and amplitude distribution, the\n appropriate params, etc. The epochs for each variable object will be chosen\n uniformly from its time-range (and may not necessarily fall on an actual\n observed time). Nonvariable objects will only have noise added as determined\n by their params, but no variable signal will be added.\n\n Parameters\n ----------\n\n simbasedir : str\n The directory containing the fake LCs to process.\n\n override_paramdists : dict\n This can be used to override the stored variable parameters in each fake\n LC. It should be a dict of the following form::\n\n {'': {': a scipy.stats distribution function or\n the np.random.randint function,\n .\n .\n .\n ': a scipy.stats distribution function\n or the np.random.randint function}\n\n for any vartype in VARTYPE_LCGEN_MAP. These are used to override the\n default parameter distributions for each variable type.\n\n overwrite_existingvar : bool\n If this is True, then will overwrite any existing variability in the\n input fake LCs in `simbasedir`.\n\n Returns\n -------\n\n dict\n This returns a dict containing the fake LC filenames as keys and\n variability info for each as values."} {"_id": "q_9198", "text": "This finds flares in time series using the method in Walkowicz+ 2011.\n\n FIXME: finish this.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input time-series to find flares in.\n\n smoothbinsize : int\n The number of consecutive light curve points to smooth over in the time\n series using a Savitzky-Golay filter. The smoothed light curve is then\n subtracted from the actual light curve to remove trends that potentially\n last `smoothbinsize` light curve points. 
The default value is chosen as\n ~6.5 hours (97 x 4 minute cadence for HATNet/HATSouth).\n\n flare_minsigma : float\n The minimum sigma above the median LC level to designate points as\n belonging to possible flares.\n\n flare_maxcadencediff : int\n The maximum number of light curve points apart each possible flare event\n measurement is allowed to be. If this is 1, then we'll look for\n consecutive measurements.\n\n flare_mincadencepoints : int\n The minimum number of light curve points (each `flare_maxcadencediff`\n points apart) required that are at least `flare_minsigma` above the\n median light curve level to call an event a flare.\n\n magsarefluxes: bool\n If True, indicates that mags is actually an array of fluxes.\n\n savgol_polyorder: int\n The polynomial order of the function used by the Savitzky-Golay filter.\n\n savgol_kwargs : extra kwargs\n Any remaining keyword arguments are passed directly to the\n `savgol_filter` function from `scipy.signal`.\n\n Returns\n -------\n\n (nflares, flare_indices) : tuple\n Returns the total number of flares found and their time-indices (start,\n end) as tuples."} {"_id": "q_9199", "text": "This calculates the relative peak heights for the first npeaks in ACF.\n\n Usually, the first peak or the second peak (if its peak height > first peak)\n corresponds to the correct lag. 
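The smooth-and-subtract pre-whitening step the flare finder describes (Savitzky-Golay smoothing over `smoothbinsize` points, then subtraction) can be sketched as below. This is an illustration using `scipy.signal.savgol_filter`, not the module's actual code; the function name is a hypothetical stand-in:

```python
import numpy as np
from scipy.signal import savgol_filter

def detrend_for_flares(mags, smoothbinsize=97, savgol_polyorder=2):
    # smooth the light curve with a Savitzky-Golay filter, then subtract
    # the smooth trend so flare-like excursions stand out as residuals
    if smoothbinsize % 2 == 0:
        smoothbinsize += 1  # savgol_filter requires an odd window length
    smoothed = savgol_filter(mags, smoothbinsize, savgol_polyorder)
    return mags - smoothed
```

A pure linear trend is reproduced exactly by a polynomial filter of order >= 1, so only genuine excursions survive in the residual.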
When we know the correct lag, the period is\n then::\n\n bestperiod = time[lags == bestlag] - time[0]\n\n Parameters\n ----------\n\n lags : np.array\n An array of lags that the ACF is calculated at.\n\n acf : np.array\n The array containing the ACF values.\n\n npeaks : int\n The maximum number of peaks to consider when finding peak heights.\n\n searchinterval : int\n From `scipy.signal.argrelmax`: \"How many points on each side to use for\n the comparison to consider comparator(n, n+x) to be True.\" This\n effectively sets how many points on each side of the current peak will be\n used to check if the current peak is the local maximum.\n\n Returns\n -------\n\n dict\n This returns a dict of the following form::\n\n {'maxinds':the indices of the lag array where maxes are,\n 'maxacfs':the ACF values at each max,\n 'maxlags':the lag values at each max,\n 'mininds':the indices of the lag array where mins are,\n 'minacfs':the ACF values at each min,\n 'minlags':the lag values at each min,\n 'relpeakheights':the relative peak heights of each rel. ACF peak,\n 'relpeaklags':the lags at each rel. ACF peak found,\n 'peakindices':the indices of arrays where each rel. ACF peak is,\n 'bestlag':the lag value with the largest rel. ACF peak height,\n 'bestpeakheight':the largest rel. ACF peak height,\n 'bestpeakindex':the largest rel. ACF peak's number in all peaks}"} {"_id": "q_9200", "text": "This is yet another alternative to calculate the autocorrelation.\n\n Taken from: `Bayesian Methods for Hackers by Cameron Davidson-Pilon `_\n\n (This should be the fastest method to calculate ACFs.)\n\n Parameters\n ----------\n\n mags : np.array\n This is the magnitudes array. MUST NOT have any nans.\n\n lag : float\n The specific lag value to calculate the auto-correlation for. 
This MUST\n be less than the total number of observations in `mags`.\n\n maglen : int\n The number of elements in the `mags` array.\n\n magmed : float\n The median of the `mags` array.\n\n magstd : float\n The standard deviation of the `mags` array.\n\n Returns\n -------\n\n float\n The auto-correlation at this specific `lag` value."} {"_id": "q_9201", "text": "This calculates the ACF of a light curve.\n\n This will pre-process the light curve to fill in all the gaps and normalize\n everything to zero. If `fillgaps = 'noiselevel'`, fills the gaps with the\n noise level obtained via the procedure above. If `fillgaps = 'nan'`, fills\n the gaps with `np.nan`.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The measurement time-series and associated errors.\n\n maxlags : int\n The maximum number of lags to calculate.\n\n func : Python function\n This is a function to calculate the lags.\n\n fillgaps : 'noiselevel' or float\n This sets what to use to fill in gaps in the time series. If this is\n 'noiselevel', will smooth the light curve using a point window size of\n `filterwindow` (this should be an odd integer), subtract the smoothed LC\n from the actual LC and estimate the RMS. This RMS will be used to fill\n in the gaps. Other useful values here are 0.0 and `np.nan`.\n\n filterwindow : int\n The light curve's smoothing filter window size to use if\n `fillgaps='noiselevel'`.\n\n forcetimebin : None or float\n This is used to force a particular cadence in the light curve other than\n the automatically determined cadence. This effectively rebins the light\n curve to this cadence. This should be in the same time units as `times`.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. 
The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]` will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n magsarefluxes : bool\n If your input measurements in `mags` are actually fluxes instead of\n mags, set this to True.\n\n verbose : bool\n If True, will indicate progress and report errors.\n\n Returns\n -------\n\n dict\n A dict of the following form is returned::\n\n {'itimes': the interpolated time values after gap-filling,\n 'imags': the interpolated mag/flux values after gap-filling,\n 'ierrs': the interpolated err values after gap-filling,\n 'cadence': the cadence of the output mag/flux time-series,\n 'minitime': the minimum value of the interpolated times array,\n 'lags': the lags used to calculate the auto-correlation function,\n 'acf': the value of the ACF at each lag used}"} {"_id": "q_9202", "text": "This calculates the harmonic AoV theta statistic for a frequency.\n\n This is a mostly faithful translation of the inner loop in `aovper.f90`. 
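The single-lag autocorrelation described above amounts to correlating the median-subtracted series with a lag-shifted copy of itself, normalized by the variance. A hedged numpy sketch (the function name and argument order are illustrative, not the exact library signature):

```python
import numpy as np

def acf_at_lag(mags, lag, magmed, magstd):
    # correlate the median-subtracted series with itself shifted by
    # `lag` points, then normalize by the variance of the series
    d1 = mags[lag:] - magmed
    d2 = mags[:len(mags) - lag] - magmed
    return np.mean(d1 * d2) / (magstd * magstd)
```

For a strictly periodic signal, the value at a lag equal to one period is close to 1.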
See\n the following for details:\n\n - http://users.camk.edu.pl/alex/\n - Schwarzenberg-Czerny (`1996\n `_)\n\n Schwarzenberg-Czerny (1996) equation 11::\n\n theta_prefactor = (K - 2N - 1)/(2N)\n theta_top = sum(c_n*c_n) (from n=0 to n=2N)\n theta_bot = variance(timeseries) - sum(c_n*c_n) (from n=0 to n=2N)\n\n theta = theta_prefactor * (theta_top/theta_bot)\n\n N = number of harmonics (nharmonics)\n K = length of time series (times.size)\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input time-series to calculate the test statistic for. These should\n all be free of nans/infs and be normalized to zero.\n\n frequency : float\n The test frequency to calculate the statistic for.\n\n nharmonics : int\n The number of harmonics to calculate up to. The recommended range is 4 to\n 8.\n\n magvariance : float\n This is the (weighted by errors) variance of the magnitude time\n series. We provide it as a pre-calculated value here so we don't have to\n re-calculate it for every worker.\n\n Returns\n -------\n\n aov_harmonic_theta : float\n The value of the harmonic AoV theta for the specified test `frequency`."} {"_id": "q_9203", "text": "This opens a new database connection.\n\n Parameters\n ----------\n\n database : str\n Name of the database to connect to.\n\n user : str\n User name of the database server user.\n\n password : str\n Password for the database server user.\n\n host : str\n Database hostname or IP address to connect to."} {"_id": "q_9204", "text": "This xmatches external catalogs to a collection of checkplots.\n\n Parameters\n ----------\n\n cplist : list of str\n This is the list of checkplot pickle files to process.\n\n xmatchpkl : str\n The filename of a pickle prepared beforehand with the\n `checkplot.pkl_xmatch.load_xmatch_external_catalogs` function,\n containing collected external catalogs to cross-match the objects in the\n input `cplist` against.\n\n xmatchradiusarcsec : float\n The match radius to use for the cross-match in 
arcseconds.\n\n updateexisting : bool\n If this is True, will only update the `xmatch` dict in each checkplot\n pickle with any new cross-matches to the external catalogs. If False,\n will overwrite the `xmatch` dict with results from the current run.\n\n resultstodir : str or None\n If this is provided, then it must be a directory to write the resulting\n checkplots to after xmatch is done. This can be used to keep the\n original checkplots in pristine condition for some reason.\n\n Returns\n -------\n\n dict\n Returns a dict with keys = input checkplot pickle filenames and vals =\n xmatch status dict for each checkplot pickle."} {"_id": "q_9205", "text": "This xmatches external catalogs to all checkplots in a directory.\n\n Parameters\n -----------\n\n cpdir : str\n This is the directory to search in for checkplots.\n\n xmatchpkl : str\n The filename of a pickle prepared beforehand with the\n `checkplot.pkl_xmatch.load_xmatch_external_catalogs` function,\n containing collected external catalogs to cross-match the objects in the\n input `cplist` against.\n\n cpfileglob : str\n This is the UNIX fileglob to use in searching for checkplots.\n\n xmatchradiusarcsec : float\n The match radius to use for the cross-match in arcseconds.\n\n updateexisting : bool\n If this is True, will only update the `xmatch` dict in each checkplot\n pickle with any new cross-matches to the external catalogs. If False,\n will overwrite the `xmatch` dict with results from the current run.\n\n resultstodir : str or None\n If this is provided, then it must be a directory to write the resulting\n checkplots to after xmatch is done. 
This can be used to keep the\n original checkplots in pristine condition for some reason.\n\n Returns\n -------\n\n dict\n Returns a dict with keys = input checkplot pickle filenames and vals =\n xmatch status dict for each checkplot pickle."} {"_id": "q_9206", "text": "This makes color-mag diagrams for all checkplot pickles in the provided\n list.\n\n Can make an arbitrary number of CMDs given lists of x-axis colors and y-axis\n mags to use.\n\n Parameters\n ----------\n\n cplist : list of str\n This is the list of checkplot pickles to process.\n\n outpkl : str\n The filename of the output pickle that will contain the color-mag\n information for all objects in the checkplots specified in `cplist`.\n\n color_mag1 : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that will\n be used as color_1 in the equation::\n\n x-axis color = color_mag1 - color_mag2\n\n color_mag2 : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that will\n be used as color_2 in the equation::\n\n x-axis color = color_mag1 - color_mag2\n\n yaxis_mag : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that\n will be used as the (absolute) magnitude y-axis of the color-mag\n diagrams.\n\n Returns\n -------\n\n str\n The path to the generated CMD pickle file for the collection of objects\n in the input checkplot list.\n\n Notes\n -----\n\n This can make many CMDs in one go.
For example, the default kwargs for\n `color_mag1`, `color_mag2`, and `yaxis_mag` result in two CMDs generated and\n written to the output pickle file:\n\n - CMD1 -> gaiamag - kmag on the x-axis vs gaia_absmag on the y-axis\n - CMD2 -> sdssg - kmag on the x-axis vs rpmj (J reduced PM) on the y-axis"} {"_id": "q_9207", "text": "This makes CMDs for all checkplot pickles in the provided directory.\n\n Can make an arbitrary number of CMDs given lists of x-axis colors and y-axis\n mags to use.\n\n Parameters\n ----------\n\n cpdir : str\n This is the directory to get the list of input checkplot pickles from.\n\n outpkl : str\n The filename of the output pickle that will contain the color-mag\n information for all objects in the checkplots specified in `cplist`.\n\n cpfileglob : str\n The UNIX fileglob to use to search for checkplot pickle files.\n\n color_mag1 : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that will\n be used as color_1 in the equation::\n\n x-axis color = color_mag1 - color_mag2\n\n color_mag2 : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that will\n be used as color_2 in the equation::\n\n x-axis color = color_mag1 - color_mag2\n\n yaxis_mag : list of str\n This is a list of the keys in each checkplot's `objectinfo` dict that\n will be used as the (absolute) magnitude y-axis of the color-mag\n diagrams.\n\n Returns\n -------\n\n str\n The path to the generated CMD pickle file for the collection of objects\n in the input checkplot directory.\n\n Notes\n -----\n\n This can make many CMDs in one go.
For example, the default kwargs for\n `color_mag1`, `color_mag2`, and `yaxis_mag` result in two CMDs generated and\n written to the output pickle file:\n\n - CMD1 -> gaiamag - kmag on the x-axis vs gaia_absmag on the y-axis\n - CMD2 -> sdssg - kmag on the x-axis vs rpmj (J reduced PM) on the y-axis"} {"_id": "q_9208", "text": "This adds CMDs for each object in cplist.\n\n Parameters\n ----------\n\n cplist : list of str\n This is the input list of checkplot pickles to add the CMDs to.\n\n cmdpkl : str\n This is the filename of the CMD pickle created previously.\n\n require_cmd_magcolor : bool\n If this is True, a CMD plot will not be made if the color and mag keys\n required by the CMD are not present or are nan in each checkplot's\n objectinfo dict.\n\n save_cmd_pngs : bool\n If this is True, then will save the CMD plots that were generated and\n added back to the checkplotdict as PNGs to the same directory as\n `cpx`.\n\n Returns\n -------\n\n Nothing."} {"_id": "q_9209", "text": "This adds CMDs for each object in cpdir.\n\n Parameters\n ----------\n\n cpdir : str\n This is the directory to search for checkplot pickles.\n\n cmdpkl : str\n This is the filename of the CMD pickle created previously.\n\n cpfileglob : str\n The UNIX fileglob to use when searching for checkplot pickles to operate\n on.\n\n require_cmd_magcolor : bool\n If this is True, a CMD plot will not be made if the color and mag keys\n required by the CMD are not present or are nan in each checkplot's\n objectinfo dict.\n\n save_cmd_pngs : bool\n If this is True, then will save the CMD plots that were generated and\n added back to the checkplotdict as PNGs to the same directory as\n `cpx`.\n\n Returns\n -------\n\n Nothing."} {"_id": "q_9210", "text": "This updates objectinfo for a list of checkplots.\n\n Useful in cases where a previous round of GAIA/finderchart/external catalog\n acquisition failed.
This will preserve the following keys in the checkplots\n if they exist:\n\n comments\n varinfo\n objectinfo.objecttags\n\n Parameters\n ----------\n\n cplist : list of str\n A list of checkplot pickle file names to update.\n\n liststartindex : int\n The index of the input list to start working at.\n\n maxobjects : int\n The maximum number of objects to process in this run. Use this with\n `liststartindex` to effectively distribute working on a large list of\n input checkplot pickles over several sessions or machines.\n\n nworkers : int\n The number of parallel workers that will work on the checkplot\n update process.\n\n fast_mode : bool or float\n This runs the external catalog operations in a \"fast\" mode, with short\n timeouts and not trying to hit external catalogs that take a long time\n to respond. See the docstring for\n `checkplot.pkl_utils._pkl_finder_objectinfo` for details on how this\n works. If this is True, will run in \"fast\" mode with default timeouts (5\n seconds in most cases). If this is a float, will run in \"fast\" mode with\n the provided timeout value in seconds.\n\n findercmap : str or matplotlib.cm.Colormap object\n The Colormap object to use for the finder chart image.\n\n finderconvolve : astropy.convolution.Kernel object or None\n If not None, the Kernel object to use for convolving the finder image.\n\n deredden_objects : bool\n If this is True, will use the 2MASS DUST service to get extinction\n coefficients in various bands, and then try to deredden the magnitudes\n and colors of the object already present in the checkplot's objectinfo\n dict.\n\n custom_bandpasses : dict\n This is a dict used to provide custom bandpass definitions for any\n magnitude measurements in the objectinfo dict that are not automatically\n recognized by the `varclass.starfeatures.color_features` function.
See\n its docstring for details on the required format.\n\n gaia_submit_timeout : float\n Sets the timeout in seconds to use when submitting a request to look up\n the object's information to the GAIA service. Note that if `fast_mode`\n is set, this is ignored.\n\n gaia_submit_tries : int\n Sets the maximum number of times the GAIA services will be contacted to\n obtain this object's information. If `fast_mode` is set, this is\n ignored, and the services will be contacted only once (meaning that a\n failure to respond will be silently ignored and no GAIA data will be\n added to the checkplot's objectinfo dict).\n\n gaia_max_timeout : float\n Sets the timeout in seconds to use when waiting for the GAIA service to\n respond to our request for the object's information. Note that if\n `fast_mode` is set, this is ignored.\n\n gaia_mirror : str\n This sets the GAIA mirror to use. This is a key in the\n `services.gaia.GAIA_URLS` dict which defines the URLs to hit for each\n mirror.\n\n complete_query_later : bool\n If this is True, saves the state of GAIA queries that are not yet\n complete when `gaia_max_timeout` is reached while waiting for the GAIA\n service to respond to our request. A later call for GAIA info on the\n same object will attempt to pick up the results from the existing query\n if it's completed. If `fast_mode` is True, this is ignored.\n\n lclistpkl : dict or str\n If this is provided, must be a dict resulting from reading a catalog\n produced by the `lcproc.catalogs.make_lclist` function or a str path\n pointing to the pickle file produced by that function. This catalog is\n used to find neighbors of the current object in the current light curve\n collection. 
Looking at neighbors of the object within the radius\n specified by `nbrradiusarcsec` is useful for light curves produced by\n instruments that have a large pixel scale, so are susceptible to\n blending of variability and potential confusion of neighbor variability\n with that of the actual object being looked at. If this is None, no\n neighbor lookups will be performed.\n\n nbrradiusarcsec : float\n The radius in arcseconds to use for a search conducted around the\n coordinates of this object to look for any potential confusion and\n blending of variability amplitude caused by their proximity.\n\n maxnumneighbors : int\n The maximum number of neighbors that will have their light curves and\n magnitudes noted in this checkplot as potential blends with the target\n object.\n\n plotdpi : int\n The resolution in DPI of the plots to generate in this function\n (e.g. the finder chart, etc.)\n\n findercachedir : str\n The path to the astrobase cache directory for finder chart downloads\n from the NASA SkyView service.\n\n verbose : bool\n If True, will indicate progress and warn about potential problems.\n\n Returns\n -------\n\n list of str\n Paths to the updated checkplot pickle files."} {"_id": "q_9211", "text": "This updates the objectinfo for a directory of checkplot pickles.\n\n Useful in cases where a previous round of GAIA/finderchart/external catalog\n acquisition failed. This will preserve the following keys in the checkplots\n if they exist:\n\n comments\n varinfo\n objectinfo.objecttags\n\n Parameters\n ----------\n\n cpdir : str\n The directory to look for checkplot pickles in.\n\n cpglob : str\n The UNIX fileglob to use when searching for checkplot pickle files.\n\n liststartindex : int\n The index of the input list to start working at.\n\n maxobjects : int\n The maximum number of objects to process in this run.
Use this with\n `liststartindex` to effectively distribute working on a large list of\n input checkplot pickles over several sessions or machines.\n\n nworkers : int\n The number of parallel workers that will work on the checkplot\n update process.\n\n fast_mode : bool or float\n This runs the external catalog operations in a \"fast\" mode, with short\n timeouts and not trying to hit external catalogs that take a long time\n to respond. See the docstring for\n `checkplot.pkl_utils._pkl_finder_objectinfo` for details on how this\n works. If this is True, will run in \"fast\" mode with default timeouts (5\n seconds in most cases). If this is a float, will run in \"fast\" mode with\n the provided timeout value in seconds.\n\n findercmap : str or matplotlib.cm.Colormap object\n The Colormap object to use for the finder chart image.\n\n finderconvolve : astropy.convolution.Kernel object or None\n If not None, the Kernel object to use for convolving the finder image.\n\n deredden_objects : bool\n If this is True, will use the 2MASS DUST service to get extinction\n coefficients in various bands, and then try to deredden the magnitudes\n and colors of the object already present in the checkplot's objectinfo\n dict.\n\n custom_bandpasses : dict\n This is a dict used to provide custom bandpass definitions for any\n magnitude measurements in the objectinfo dict that are not automatically\n recognized by the `varclass.starfeatures.color_features` function. See\n its docstring for details on the required format.\n\n gaia_submit_timeout : float\n Sets the timeout in seconds to use when submitting a request to look up\n the object's information to the GAIA service. Note that if `fast_mode`\n is set, this is ignored.\n\n gaia_submit_tries : int\n Sets the maximum number of times the GAIA services will be contacted to\n obtain this object's information.
If `fast_mode` is set, this is\n ignored, and the services will be contacted only once (meaning that a\n failure to respond will be silently ignored and no GAIA data will be\n added to the checkplot's objectinfo dict).\n\n gaia_max_timeout : float\n Sets the timeout in seconds to use when waiting for the GAIA service to\n respond to our request for the object's information. Note that if\n `fast_mode` is set, this is ignored.\n\n gaia_mirror : str\n This sets the GAIA mirror to use. This is a key in the\n `services.gaia.GAIA_URLS` dict which defines the URLs to hit for each\n mirror.\n\n complete_query_later : bool\n If this is True, saves the state of GAIA queries that are not yet\n complete when `gaia_max_timeout` is reached while waiting for the GAIA\n service to respond to our request. A later call for GAIA info on the\n same object will attempt to pick up the results from the existing query\n if it's completed. If `fast_mode` is True, this is ignored.\n\n lclistpkl : dict or str\n If this is provided, must be a dict resulting from reading a catalog\n produced by the `lcproc.catalogs.make_lclist` function or a str path\n pointing to the pickle file produced by that function. This catalog is\n used to find neighbors of the current object in the current light curve\n collection. Looking at neighbors of the object within the radius\n specified by `nbrradiusarcsec` is useful for light curves produced by\n instruments that have a large pixel scale, so are susceptible to\n blending of variability and potential confusion of neighbor variability\n with that of the actual object being looked at. 
If this is None, no\n neighbor lookups will be performed.\n\n nbrradiusarcsec : float\n The radius in arcseconds to use for a search conducted around the\n coordinates of this object to look for any potential confusion and\n blending of variability amplitude caused by their proximity.\n\n maxnumneighbors : int\n The maximum number of neighbors that will have their light curves and\n magnitudes noted in this checkplot as potential blends with the target\n object.\n\n plotdpi : int\n The resolution in DPI of the plots to generate in this function\n (e.g. the finder chart, etc.)\n\n findercachedir : str\n The path to the astrobase cache directory for finder chart downloads\n from the NASA SkyView service.\n\n verbose : bool\n If True, will indicate progress and warn about potential problems.\n\n Returns\n -------\n\n list of str\n Paths to the updated checkplot pickle files."} {"_id": "q_9212", "text": "This gets the required keys from the requested file.\n\n Parameters\n ----------\n\n task : tuple\n Task is a two element tuple::\n\n - task[0] is the dict to work on\n\n - task[1] is a list of lists of str indicating all the key address to\n extract items from the dict for\n\n Returns\n -------\n\n list\n This is a list of all of the items at the requested key addresses."} {"_id": "q_9213", "text": "This is a double inverted gaussian.\n\n Parameters\n ----------\n\n x : np.array\n The items at which the Gaussian is evaluated.\n\n amp1,amp2 : float\n The amplitude of Gaussian 1 and Gaussian 2.\n\n loc1,loc2 : float\n The central value of Gaussian 1 and Gaussian 2.\n\n std1,std2 : float\n The standard deviation of Gaussian 1 and Gaussian 2.\n\n Returns\n -------\n\n np.array\n Returns a double inverted Gaussian function evaluated at the items in\n `x`, using the provided parameters of `amp`, `loc`, and `std` for two\n component Gaussians 1 and 2."} {"_id": "q_9214", "text": "This returns a double eclipse shaped function.\n\n Suitable for first order modeling of eclipsing
binaries.\n\n Parameters\n ----------\n\n ebparams : list of float\n This contains the parameters for the eclipsing binary::\n\n ebparams = [period (time),\n epoch (time),\n pdepth: primary eclipse depth (mags),\n pduration: primary eclipse duration (phase),\n psdepthratio: primary-secondary eclipse depth ratio,\n secondaryphase: center phase of the secondary eclipse]\n\n `period` is the period in days.\n\n `epoch` is the time of minimum in JD.\n\n `pdepth` is the depth of the primary eclipse.\n\n - for magnitudes -> pdepth should be < 0\n - for fluxes -> pdepth should be > 0\n\n `pduration` is the length of the primary eclipse in phase.\n\n `psdepthratio` is the ratio in the eclipse depths:\n `depth_secondary/depth_primary`. This is generally the same as the ratio\n of the `T_effs` of the two stars.\n\n `secondaryphase` is the phase at which the minimum of the secondary\n eclipse is located. This effectively parameterizes eccentricity.\n\n All of these will then have fitted values after the fit is done.\n\n times,mags,errs : np.array\n The input time-series of measurements and associated errors for which\n the eclipse model will be generated. The times will be used to generate\n model mags, and the input `times`, `mags`, and `errs` will be resorted\n by model phase and returned.\n\n Returns\n -------\n\n (modelmags, phase, ptimes, pmags, perrs) : tuple\n Returns the model mags and phase values. 
Also returns the input `times`,\n `mags`, and `errs` sorted by the model's phase."} {"_id": "q_9215", "text": "Converts given J, H, Ks mags to a B magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted B band magnitude."} {"_id": "q_9216", "text": "Converts given J, H, Ks mags to a V magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted V band magnitude."} {"_id": "q_9217", "text": "Converts given J, H, Ks mags to an R magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted R band magnitude."} {"_id": "q_9218", "text": "Converts given J, H, Ks mags to an I magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted I band magnitude."} {"_id": "q_9219", "text": "Converts given J, H, Ks mags to an SDSS u magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted SDSS u band magnitude."} {"_id": "q_9220", "text": "Converts given J, H, Ks mags to an SDSS g magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted SDSS g band magnitude."} {"_id": "q_9221", "text": "Converts given J, H, Ks mags to an SDSS i magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n -------\n\n float\n The converted SDSS i band magnitude."} {"_id": "q_9222", "text": "Converts given J, H, Ks mags to an SDSS z magnitude value.\n\n Parameters\n ----------\n\n jmag,hmag,kmag : float\n 2MASS J, H, Ks mags of the object.\n\n Returns\n 
-------\n\n float\n The converted SDSS z band magnitude."} {"_id": "q_9223", "text": "Calculates the Schwarzenberg-Czerny AoV statistic at a test frequency.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input time-series and associated errors.\n\n frequency : float\n The test frequency to calculate the theta statistic at.\n\n binsize : float\n The phase bin size to use.\n\n minbin : int\n The minimum number of items in a phase bin to consider in the\n calculation of the statistic.\n\n Returns\n -------\n\n theta_aov : float\n The value of the AoV statistic at the specified `frequency`."} {"_id": "q_9224", "text": "This just puts all of the period-finders on a single periodogram.\n\n This will renormalize all of the periodograms so their values lie between 0\n and 1, with values lying closer to 1 being more significant. Periodograms\n that give the same best periods will have their peaks line up together.\n\n Parameters\n ----------\n\n pflist : list of dict\n This is a list of result dicts from any of the period-finders in\n periodbase. To use your own period-finders' results here, make sure the\n result dict is of the form and has at least the keys below::\n\n {'periods': np.array of all periods searched by the period-finder,\n 'lspvals': np.array of periodogram power value for each period,\n 'bestperiod': a float value that is the period with the highest\n peak in the periodogram, i.e. 
the most-likely actual\n period,\n 'method': a three-letter code naming the period-finder used; must\n be one of the keys in the\n `astrobase.periodbase.METHODLABELS` dict,\n 'nbestperiods': a list of the periods corresponding to periodogram\n peaks (`nbestlspvals` below) to annotate on the\n periodogram plot so they can be called out\n visually,\n 'nbestlspvals': a list of the power values associated with\n periodogram peaks to annotate on the periodogram\n plot so they can be called out visually; should be\n the same length as `nbestperiods` above,\n 'kwargs': dict of kwargs passed to your own period-finder function}\n\n outfile : str\n This is the output file to write the output to. NOTE: EPS/PS won't work\n because we use alpha transparency to better distinguish between the\n various periodograms.\n\n addmethods : bool\n If this is True, will add all of the normalized periodograms together,\n then renormalize them to between 0 and 1. In this way, if all of the\n period-finders agree on something, it'll stand out easily. 
FIXME:\n implement this kwarg.\n\n Returns\n -------\n\n str\n The name of the generated plot file."} {"_id": "q_9225", "text": "This returns a BATMAN planetary transit model.\n\n Parameters\n ----------\n\n times : np.array\n The times at which the model will be evaluated.\n\n t0 : float\n The time of periastron for the transit.\n\n per : float\n The orbital period of the planet.\n\n rp : float\n The stellar radius of the planet's star (in Rsun).\n\n a : float\n The semi-major axis of the planet's orbit (in Rsun).\n\n inc : float\n The orbital inclination (in degrees).\n\n ecc : float\n The eccentricity of the orbit.\n\n w : float\n The longitude of periastron (in degrees).\n\n u : list of floats\n The limb darkening coefficients specific to the limb darkening model\n used.\n\n limb_dark : {\"uniform\", \"linear\", \"quadratic\", \"square-root\", \"logarithmic\", \"exponential\", \"power2\", \"custom\"}\n The type of limb darkening model to use. See the full list here:\n\n https://www.cfa.harvard.edu/~lkreidberg/batman/tutorial.html#limb-darkening-options\n\n exp_time_minutes : float\n The amount of time to 'smear' the transit LC points over to simulate a\n long exposure time.\n\n supersample_factor: int\n The number of supersampled time data points to average the lightcurve\n model over.\n\n Returns\n -------\n\n (params, batman_model) : tuple\n The returned tuple contains the params list and the generated\n `batman.TransitModel` object."} {"_id": "q_9226", "text": "Assume priors on all parameters have uniform probability."} {"_id": "q_9227", "text": "This runs the TRILEGAL query for decimal equatorial coordinates.\n\n Parameters\n ----------\n\n ra,decl : float\n These are the center equatorial coordinates in decimal degrees\n\n filtersystem : str\n This is a key in the TRILEGAL_FILTER_SYSTEMS dict. 
Use the function\n :py:func:`astrobase.services.trilegal.list_trilegal_filtersystems` to\n see a nicely formatted table with the key and description for each of\n these.\n\n field_deg2 : float\n The area of the simulated field in square degrees. This is in the\n Galactic coordinate system.\n\n usebinaries : bool\n If this is True, binaries will be present in the model results.\n\n extinction_sigma : float\n This is the applied std dev around the `Av_extinction` value for the\n galactic coordinates requested.\n\n magnitude_limit : float\n This is the limiting magnitude of the simulation in the\n `maglim_filtercol` band index of the filter system chosen.\n\n maglim_filtercol : int\n The index in the filter system list of the magnitude limiting band.\n\n trilegal_version : float\n This is the version of the TRILEGAL form to use. This can usually be\n left as-is.\n\n extraparams : dict or None\n This is a dict that can be used to override parameters of the model\n other than the basic ones used for input to this function. All\n parameters are listed in `TRILEGAL_DEFAULT_PARAMS` above. See:\n\n http://stev.oapd.inaf.it/cgi-bin/trilegal\n\n for explanations of these parameters.\n\n forcefetch : bool\n If this is True, the query will be retried even if cached results for\n it exist.\n\n cachedir : str\n This points to the directory where results will be downloaded.\n\n verbose : bool\n If True, will indicate progress and warn of any issues.\n\n timeout : float\n This sets the amount of time in seconds to wait for the service to\n respond to our initial request.\n\n refresh : float\n This sets the amount of time in seconds to wait before checking if the\n result file is available.
If the results file isn't available after\n `refresh` seconds have elapsed, the function will wait for `refresh`\n seconds continuously, until `maxtimeout` is reached or the results file\n becomes available.\n\n maxtimeout : float\n The maximum amount of time in seconds to wait for a result to become\n available after submitting our query request.\n\n Returns\n -------\n\n dict\n This returns a dict of the form::\n\n {'params':the input param dict used,\n 'extraparams':any extra params used,\n 'provenance':'cached' or 'new download',\n 'tablefile':the path on disk to the downloaded model text file}"} {"_id": "q_9228", "text": "This reads a downloaded TRILEGAL model file.\n\n Parameters\n ----------\n\n modelfile : str\n Path to the downloaded model file to read.\n\n Returns\n -------\n\n np.recarray\n Returns the model table as a Numpy record array."} {"_id": "q_9229", "text": "This compares two values in constant time.\n\n Taken from tornado:\n\n https://github.com/tornadoweb/tornado/blob/\n d4eb8eb4eb5cc9a6677e9116ef84ded8efba8859/tornado/web.py#L3060"} {"_id": "q_9230", "text": "Overrides the default serializer for `JSONEncoder`.\n\n This can serialize the following objects in addition to what\n `JSONEncoder` can already do.\n\n - `np.array`\n - `bytes`\n - `complex`\n - `np.float64` and other `np.dtype` objects\n\n Parameters\n ----------\n\n obj : object\n A Python object to serialize to JSON.\n\n Returns\n -------\n\n str\n A JSON encoded representation of the input object."} {"_id": "q_9231", "text": "This handles GET requests to the index page.\n\n TODO: provide the correct baseurl from the checkplotserver options dict,\n so the frontend JS can just read that off immediately."} {"_id": "q_9232", "text": "This handles GET requests for the current checkplot-list.json file.\n\n Used with AJAX from frontend."} {"_id": "q_9233", "text": "This smooths the magseries with a Savitsky-Golay filter.\n\n Parameters\n ----------\n\n mags : np.array\n The input mags/flux 
time-series to smooth.\n\n windowsize : int\n This is an odd integer containing the smoothing window size.\n\n polyorder : int\n This is an integer containing the polynomial degree order to use when\n generating the Savitsky-Golay filter.\n\n Returns\n -------\n\n np.array\n The smoothed mag/flux time-series array."} {"_id": "q_9234", "text": "Detrends a magnitude series given in mag using accompanying values of S in\n fsv, D in fdv, K in fkv, x coords in xcc, y coords in ycc, background in\n bgv, and background error in bge. smooth is used to set a smoothing\n parameter for the fit function. Does EPD voodoo."} {"_id": "q_9235", "text": "This is the EPD function to fit using a smoothed mag-series."} {"_id": "q_9236", "text": "Detrends a magnitude series using External Parameter Decorrelation.\n\n Requires a set of external parameters similar to those present in HAT light\n curves. At the moment, the HAT light-curve-specific external parameters are:\n\n - S: the 'fsv' column in light curves,\n - D: the 'fdv' column in light curves,\n - K: the 'fkv' column in light curves,\n - x coords: the 'xcc' column in light curves,\n - y coords: the 'ycc' column in light curves,\n - background value: the 'bgv' column in light curves,\n - background error: the 'bge' column in light curves,\n - hour angle: the 'iha' column in light curves,\n - zenith distance: the 'izd' column in light curves\n\n S, D, and K are defined as follows:\n\n - S -> measure of PSF sharpness (~1/sigma^2 so smaller S = wider PSF)\n - D -> measure of PSF ellipticity in xy direction\n - K -> measure of PSF ellipticity in cross direction\n\n S, D, K are related to the PSF's variance and covariance, see eqn 30-33 in\n A.
Pal's thesis: https://arxiv.org/abs/0906.3486\n\n NOTE: The errs are completely ignored and returned unchanged (except for\n sigclip and finite filtering).\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input mag/flux time-series to detrend.\n\n fsv : np.array\n Array containing the external parameter `S` of the same length as times.\n\n fdv : np.array\n Array containing the external parameter `D` of the same length as times.\n\n fkv : np.array\n Array containing the external parameter `K` of the same length as times.\n\n xcc : np.array\n Array containing the external parameter `x-coords` of the same length as\n times.\n\n ycc : np.array\n Array containing the external parameter `y-coords` of the same length as\n times.\n\n bgv : np.array\n Array containing the external parameter `background value` of the same\n length as times.\n\n bge : np.array\n Array containing the external parameter `background error` of the same\n length as times.\n\n iha : np.array\n Array containing the external parameter `hour angle` of the same length\n as times.\n\n izd : np.array\n Array containing the external parameter `zenith distance` of the same\n length as times.\n\n magsarefluxes : bool\n Set this to True if `mags` actually contains fluxes.\n\n epdsmooth_sigclip : float or int or sequence of two floats/ints or None\n This specifies how to sigma-clip the input LC before fitting the EPD\n function to it.\n\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. 
Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n epdsmooth_windowsize : int\n This is the number of LC points to smooth over to generate a smoothed\n light curve that will be used to fit the EPD function.\n\n epdsmooth_func : Python function\n This sets the smoothing filter function to use. A Savitsky-Golay filter\n is used to smooth the light curve by default. The functions that can be\n used with this kwarg are listed in `varbase.trends`. If you want to use\n your own function, it MUST have the following signature::\n\n def smoothfunc(mags_array, window_size, **extraparams)\n\n and return a numpy array of the same size as `mags_array` with the\n smoothed time-series. Any extra params can be provided using the\n `extraparams` dict.\n\n epdsmooth_extraparams : dict\n This is a dict of any extra filter params to supply to the smoothing\n function.\n\n Returns\n -------\n\n dict\n Returns a dict of the following form::\n\n {'times':the input times after non-finite elems removed,\n 'mags':the EPD detrended mag values (the EPD mags),\n 'errs':the errs after non-finite elems removed,\n 'fitcoeffs':EPD fit coefficient values,\n 'fitinfo':the full tuple returned by scipy.leastsq,\n 'fitmags':the EPD fit function evaluated at times,\n 'mags_median': this is median of the EPD mags,\n 'mags_mad': this is the MAD of EPD mags}"} {"_id": "q_9237", "text": "This uses a `RandomForestRegressor` to de-correlate the given magseries.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input mag/flux time-series to run EPD on.\n\n externalparam_arrs : list of np.arrays\n This is a list of ndarrays of external parameters to decorrelate\n against. 
These should all be the same size as `times`, `mags`, `errs`.\n\n epdsmooth : bool\n If True, sets the training LC for the RandomForestRegressor to be a\n smoothed version of the sigma-clipped light curve provided in `times`,\n `mags`, `errs`.\n\n epdsmooth_sigclip : float or int or sequence of two floats/ints or None\n This specifies how to sigma-clip the input LC before smoothing it and\n fitting the EPD function to it. The actual LC will not be sigma-clipped.\n\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings. Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n epdsmooth_windowsize : int\n This is the number of LC points to smooth over to generate a smoothed\n light curve that will be used to fit the EPD function.\n\n epdsmooth_func : Python function\n This sets the smoothing filter function to use. A Savitzky-Golay filter\n is used to smooth the light curve by default. The functions that can be\n used with this kwarg are listed in `varbase.trends`. If you want to use\n your own function, it MUST have the following signature::\n\n def smoothfunc(mags_array, window_size, **extraparams)\n\n and return a numpy array of the same size as `mags_array` with the\n smoothed time-series.
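A user-supplied smoother satisfying the `smoothfunc(mags_array, window_size, **extraparams)` signature required above might look like this. It is an illustrative moving-median filter, not one of astrobase's stock smoothers (those live in `varbase.trends`):

```python
import numpy as np

def smooth_magseries_movingmedian(mags_array, window_size, **extraparams):
    """Moving-median smoother conforming to the required signature: takes
    the mag array and a window size, returns an array of the same size.
    Any extra filter params arrive via **extraparams (unused here)."""
    mags_array = np.asarray(mags_array, dtype=float)
    half = max(int(window_size) // 2, 1)
    smoothed = np.empty_like(mags_array)
    for i in range(mags_array.size):
        # clip the window at the array edges so output size is preserved
        lo, hi = max(0, i - half), min(mags_array.size, i + half + 1)
        smoothed[i] = np.median(mags_array[lo:hi])
    return smoothed
```

Returning an array of the same size as the input is the key contract; the EPD fit is then done against this smoothed series.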
Any extra params can be provided using the\n `extraparams` dict.\n\n epdsmooth_extraparams : dict\n This is a dict of any extra filter params to supply to the smoothing\n function.\n\n rf_subsample : float\n Defines the fraction of the size of the `mags` array to use for\n training the random forest regressor.\n\n rf_ntrees : int\n This is the number of trees to use for the `RandomForestRegressor`.\n\n rf_extraparams : dict\n This is a dict of any extra kwargs to provide to the\n `RandomForestRegressor` instance used.\n\n Returns\n -------\n\n dict\n Returns a dict with decorrelated mags and the usual info from the\n `RandomForestRegressor`: variable importances, etc."} {"_id": "q_9238", "text": "This calculates the Stellingwerf PDM theta value at a test frequency.\n\n Parameters\n ----------\n\n times,mags,errs : np.array\n The input time-series and associated errors.\n\n frequency : float\n The test frequency to calculate the theta statistic at.\n\n binsize : float\n The phase bin size to use.\n\n minbin : int\n The minimum number of items in a phase bin to consider in the\n calculation of the statistic.\n\n Returns\n -------\n\n theta_pdm : float\n The value of the theta statistic at the specified `frequency`."} {"_id": "q_9239", "text": "Converts magnitude measurements in Kepler band to SDSS r band.\n\n Parameters\n ----------\n\n keplermag : float or array-like\n The Kepler magnitude value(s) to convert.\n\n kic_sdssg,kic_sdssr : float or array-like\n The SDSS g and r magnitudes of the object(s) from the Kepler Input\n Catalog.
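The PDM theta statistic in q_9238 is the ratio of the pooled variance of the phase-binned series to the total variance (Stellingwerf 1978); at the true frequency it drops well below 1. A sketch under those definitions, assuming the `binsize`/`minbin` defaults shown here, and not astrobase's exact implementation (which also handles the errors):

```python
import numpy as np

def stellingwerf_pdm_theta(times, mags, frequency, binsize=0.05, minbin=9):
    """theta = s^2 / sigma^2, where s^2 pools the per-phase-bin variances
    (only bins with >= minbin points count) and sigma^2 is the variance of
    the full unbinned series."""
    times, mags = np.asarray(times, float), np.asarray(mags, float)
    phases = (times * frequency) % 1.0
    sigma2 = np.var(mags, ddof=1)           # total variance
    pooled, npts, nbins = 0.0, 0, 0
    for lo in np.arange(0.0, 1.0, binsize):
        inbin = mags[(phases >= lo) & (phases < lo + binsize)]
        if inbin.size >= minbin:
            pooled += (inbin.size - 1) * np.var(inbin, ddof=1)
            npts += inbin.size
            nbins += 1
    return (pooled / (npts - nbins)) / sigma2
```

A period search then simply minimizes theta over a grid of test frequencies.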
The .llc.fits MAST light curve file for a Kepler object\n contains these values in the FITS extension 0 header.\n\n Returns\n -------\n\n float or array-like\n SDSS r band magnitude(s) converted from the Kepler band magnitude."} {"_id": "q_9240", "text": "This filters the Kepler `lcdict`, removing nans and bad\n observations.\n\n By default, this function removes points in the Kepler LC that have ANY\n quality flags set.\n\n Parameters\n ----------\n\n lcdict : lcdict\n An `lcdict` produced by `consolidate_kepler_fitslc` or\n `read_kepler_fitslc`.\n\n filterflags : bool\n If True, will remove any measurements that have non-zero quality flags\n present. This usually indicates an issue with the instrument or\n spacecraft.\n\n nanfilter : {'sap','pdc','sap,pdc'}\n Indicates the flux measurement type(s) to apply the filtering to.\n\n timestoignore : list of tuples or None\n This is of the form::\n\n [(time1_start, time1_end), (time2_start, time2_end), ...]\n\n and indicates the start and end times to mask out of the final\n lcdict. Use this to remove anything that wasn't caught by the quality\n flags.\n\n Returns\n -------\n\n lcdict\n Returns an `lcdict` (this is useable by most astrobase functions for LC\n processing). 
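The `timestoignore` masking described above can be sketched as building a boolean keep-mask from the list of `(start, end)` tuples. The helper name is hypothetical; the real filter modifies the `lcdict` in place:

```python
import numpy as np

def apply_timestoignore(times, timestoignore):
    """Return a boolean mask that drops every point falling inside any
    (time_start, time_end) interval, inclusive on both ends."""
    times = np.asarray(times, dtype=float)
    keep = np.ones(times.size, dtype=bool)
    if timestoignore:
        for t_start, t_end in timestoignore:
            # mask out anything inside this ignore window
            keep &= ~((times >= t_start) & (times <= t_end))
    return keep
```

The same mask would then be applied to the time, flux, and error arrays of the `lcdict` so they stay aligned.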
The `lcdict` is filtered IN PLACE!"} {"_id": "q_9241", "text": "After running `detrend_centroid`, this gets positions of centroids during\n transits, and outside of transits.\n\n These positions can then be used in a false positive analysis.\n\n This routine requires knowing the ingress and egress times for every\n transit of interest within the quarter this routine is being called for.\n There is currently no astrobase routine that automates this for periodic\n transits (it must be done in a calling routine).\n\n To get out of transit centroids, this routine takes points outside of the\n \"buffer\" set by `oot_buffer_time`, sampling 3x as many points on either\n side of the transit as are in the transit (or however many are specified by\n `sample_factor`).\n\n Parameters\n ----------\n\n lcd : lcdict\n An `lcdict` generated by the `read_kepler_fitslc` function. We assume\n that the `detrend_centroid` function has been run on this `lcdict`.\n\n t_ing_egr : list of tuples\n This is of the form::\n\n [(ingress time of i^th transit, egress time of i^th transit)]\n\n for i the transit number index in this quarter (starts at zero at the\n beginning of every quarter). Assumes units of BJD.\n\n oot_buffer_time : float\n Number of days away from ingress and egress times to begin sampling \"out\n of transit\" centroid points. 
The number of out of transit points to take\n per transit is 3x the number of points in transit.\n\n sample_factor : float\n The size of out of transit window from which to sample.\n\n Returns\n -------\n\n dict\n This is a dictionary keyed by transit number (i.e., the same index as\n `t_ing_egr`), where each key contains the following value::\n\n {'ctd_x_in_tra':ctd_x_in_tra,\n 'ctd_y_in_tra':ctd_y_in_tra,\n 'ctd_x_oot':ctd_x_oot,\n 'ctd_y_oot':ctd_y_oot,\n 'npts_in_tra':len(ctd_x_in_tra),\n 'npts_oot':len(ctd_x_oot),\n 'in_tra_times':in_tra_times,\n 'oot_times':oot_times}"} {"_id": "q_9242", "text": "This is a helper function for centroid detrending."} {"_id": "q_9243", "text": "This bins the given light curve file in time using the specified bin size.\n\n Parameters\n ----------\n\n lcfile : str\n The file name to process.\n\n binsizesec : float\n The time bin-size in seconds.\n\n outdir : str or None\n If this is a str, the output LC will be written to `outdir`. If this is\n None, the output LC will be written to the same directory as `lcfile`.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curve file.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory when you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n timecols,magcols,errcols : lists of str\n The keys in the lcdict produced by your light curve reader function that\n correspond to the times, mags/fluxes, and associated measurement errors\n that will be used as inputs to the binning process. 
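The time-binning described in q_9243 (group points into bins of `binsizesec` seconds, drop bins with fewer than `minbinelems` points) can be sketched like this. It is an illustration only, assuming times in days; the real binning lives in astrobase's `lcmath` module and fills in `lcdict['binned']`:

```python
import numpy as np

def time_bin_magseries(times, mags, binsizesec=600.0, minbinelems=7):
    """Bin times/mags into fixed-width time bins, keeping only bins with at
    least minbinelems points and taking the per-bin medians."""
    times = np.asarray(times, dtype=float)
    mags = np.asarray(mags, dtype=float)
    binsize_days = binsizesec / 86400.0      # times assumed to be JD-like
    binindices = np.floor((times - times.min()) / binsize_days).astype(int)
    binned_times, binned_mags = [], []
    for binind in np.unique(binindices):
        inbin = binindices == binind
        if np.count_nonzero(inbin) >= minbinelems:
            binned_times.append(np.median(times[inbin]))
            binned_mags.append(np.median(mags[inbin]))
    return np.array(binned_times), np.array(binned_mags)
```

The `minbinelems` cut is what keeps sparsely sampled bins from contributing noisy medians to the output binned light curve.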
If these are None,\n the default values for `timecols`, `magcols`, and `errcols` for your\n light curve format will be used here.\n\n minbinelems : int\n The minimum number of time-bin elements required to accept a time-bin as\n valid for the output binned light curve.\n\n Returns\n -------\n\n str\n The name of the output pickle file with the binned LC.\n\n Writes the output binned light curve to a pickle that contains the\n lcdict with an added `lcdict['binned'][magcol]` key, which contains the\n binned times, mags/fluxes, and errs as\n `lcdict['binned'][magcol]['times']`, `lcdict['binned'][magcol]['mags']`,\n and `lcdict['binned'][magcol]['errs']` for each `magcol` provided in the\n input or default `magcols` value for this light curve format."} {"_id": "q_9244", "text": "This time bins all the light curves in the specified directory.\n\n Parameters\n ----------\n\n lcdir : str\n Directory containing the input LCs to process.\n\n binsizesec : float\n The time bin size to use in seconds.\n\n maxobjects : int or None\n If provided, LC processing will stop at `lclist[maxobjects]`.\n\n outdir : str or None\n The directory where output LCs will be written. If None, will write to\n the same directory as the input LCs.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curve file.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in.
Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n timecols,magcols,errcols : lists of str\n The keys in the lcdict produced by your light curve reader function that\n correspond to the times, mags/fluxes, and associated measurement errors\n that will be used as inputs to the binning process. If these are None,\n the default values for `timecols`, `magcols`, and `errcols` for your\n light curve format will be used here.\n\n minbinelems : int\n The minimum number of time-bin elements required to accept a time-bin as\n valid for the output binned light curve.\n\n nworkers : int\n Number of parallel workers to launch.\n\n maxworkertasks : int\n The maximum number of tasks a parallel worker will complete before being\n replaced to guard against memory leaks.\n\n Returns\n -------\n\n dict\n The returned dict contains keys = input LCs, vals = output LCs."} {"_id": "q_9245", "text": "This runs variable feature extraction in parallel for all LCs in `lclist`.\n\n Parameters\n ----------\n\n lclist : list of str\n The list of light curve file names to process.\n\n outdir : str\n The directory where the output varfeatures pickle files will be written.\n\n maxobjects : int\n The number of LCs to process from `lclist`.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in calculating the features.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in calculating the features.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in calculating the features.\n\n mindet : int\n The minimum number of LC points required to generate variability\n features.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. 
This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory when you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n nworkers : int\n The number of parallel workers to launch.\n\n Returns\n -------\n\n dict\n A dict with key:val pairs of input LC file name : the generated\n variability features pickles for each of the input LCs, with results for\n each magcol in the input `magcol` or light curve format's default\n `magcol` list."} {"_id": "q_9246", "text": "This runs parallel variable feature extraction for a directory of LCs.\n\n Parameters\n ----------\n\n lcdir : str\n The directory of light curve files to process.\n\n outdir : str\n The directory where the output varfeatures pickle files will be written.\n\n fileglob : str or None\n The file glob to use when looking for light curve files in `lcdir`. If\n None, the default file glob associated for this LC format will be used.\n\n maxobjects : int\n The number of LCs to process from `lclist`.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in calculating the features.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in calculating the features.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in calculating the features.\n\n mindet : int\n The minimum number of LC points required to generate variability\n features.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. 
This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory when you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n nworkers : int\n The number of parallel workers to launch.\n\n Returns\n -------\n\n dict\n A dict with key:val pairs of input LC file name : the generated\n variability features pickles for each of the input LCs, with results for\n each magcol in the input `magcol` or light curve format's default\n `magcol` list."} {"_id": "q_9247", "text": "This is just a shortened form of the function above for convenience.\n\n This only handles pickle files as input.\n\n Parameters\n ----------\n\n checkplotin : str\n File name of a checkplot pickle file to convert to a PNG.\n\n extrarows : list of tuples\n This is a list of 4-element tuples containing paths to PNG files that\n will be added to the end of the rows generated from the checkplotin\n pickle/dict. Each tuple represents a row in the final output PNG\n file. If there are less than 4 elements per tuple, the missing elements\n will be filled in with white-space. If there are more than 4 elements\n per tuple, only the first four will be used.\n\n The purpose of this kwarg is to incorporate periodograms and phased LC\n plots (in the form of PNGs) generated from an external period-finding\n function or program (like VARTOOLS) to allow for comparison with\n astrobase results.\n\n NOTE: the PNG files specified in `extrarows` here will be added to those\n already present in the input `checkplotdict['externalplots']` if that is\n None because you passed in a similar list of external plots to the\n :py:func:`astrobase.checkplot.pkl.checkplot_pickle` function earlier. 
In\n this case, `extrarows` can be used to add even more external plots if\n desired.\n\n Each external plot PNG will be resized to 750 x 480 pixels to fit into\n an output image cell.\n\n By convention, each 4-element tuple should contain:\n\n - a periodogram PNG\n - phased LC PNG with 1st best peak period from periodogram\n - phased LC PNG with 2nd best peak period from periodogram\n - phased LC PNG with 3rd best peak period from periodogram\n\n Example of extrarows::\n\n [('/path/to/external/bls-periodogram.png',\n '/path/to/external/bls-phasedlc-plot-bestpeak.png',\n '/path/to/external/bls-phasedlc-plot-peak2.png',\n '/path/to/external/bls-phasedlc-plot-peak3.png'),\n ('/path/to/external/pdm-periodogram.png',\n '/path/to/external/pdm-phasedlc-plot-bestpeak.png',\n '/path/to/external/pdm-phasedlc-plot-peak2.png',\n '/path/to/external/pdm-phasedlc-plot-peak3.png'),\n ...]\n\n Returns\n -------\n\n str\n The absolute path to the generated checkplot PNG."} {"_id": "q_9248", "text": "This is a flare model function, similar to Kowalski+ 2011.\n\n From the paper by Pitkin+ 2014:\n http://adsabs.harvard.edu/abs/2014MNRAS.445.2268P\n\n Parameters\n ----------\n\n flareparams : list of float\n This defines the flare model::\n\n [amplitude,\n flare_peak_time,\n rise_gaussian_stdev,\n decay_time_constant]\n\n where:\n\n `amplitude`: the maximum flare amplitude in mags or flux. If flux, then\n amplitude should be positive. If mags, amplitude should be negative.\n\n `flare_peak_time`: time at which the flare maximum happens.\n\n `rise_gaussian_stdev`: the stdev of the gaussian describing the rise of\n the flare.\n\n `decay_time_constant`: the time constant of the exponential fall of the\n flare.\n\n times,mags,errs : np.array\n The input time-series of measurements and associated errors for which\n the model will be generated.
The times will be used to generate\n model mags.\n\n Returns\n -------\n\n (modelmags, times, mags, errs) : tuple\n Returns the model mags evaluated at the input time values. Also returns\n the input `times`, `mags`, and `errs`."} {"_id": "q_9249", "text": "This checks the AWS instance data URL to see if there's a pending\n shutdown for the instance.\n\n This is useful for AWS spot instances. If there is a pending shutdown posted\n to the instance data URL, we'll use the result of this function to break out of\n the processing loop and shut everything down ASAP before the instance dies.\n\n Returns\n -------\n\n bool\n - True if the instance is going to die soon.\n - False if the instance is still safe."} {"_id": "q_9250", "text": "This wraps the function above to allow for loading previous state from a\n file.\n\n Parameters\n ----------\n\n use_saved_state : str or None\n This is the path to the saved state pickle file produced by a previous\n run of `runcp_producer_loop`. Will get all of the arguments to run\n another instance of the loop from that pickle file. If this is None, you\n MUST provide all of the appropriate arguments to that function.\n\n lightcurve_list : str or list of str or None\n This is either a string pointing to a file containing a list of light\n curve filenames to process or the list itself. The names must\n correspond to the full filenames of files stored on S3, including all\n prefixes, but not include the 's3://<bucket-name>/' bit (these will be\n added automatically).\n\n input_queue : str or None\n This is the name of the SQS queue which will receive processing tasks\n generated by this function. The queue URL will automatically be obtained\n from AWS.\n\n input_bucket : str or None\n The name of the S3 bucket containing the light curve files to process.\n\n result_queue : str or None\n This is the name of the SQS queue that this function will listen to for\n messages from the workers as they complete processing on their input\n elements.
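The flare model parameters listed in q_9248 map onto a Gaussian rise followed by an exponential decay, as in Pitkin+ 2014. A sketch of that functional form (evaluating only the model; the astrobase version also returns the input `times`, `mags`, `errs`):

```python
import numpy as np

def flare_model_sketch(flareparams, times):
    """Evaluate a Gaussian-rise / exponential-decay flare model.

    Parameter order follows the docstring:
    [amplitude, flare_peak_time, rise_gaussian_stdev, decay_time_constant].
    """
    amplitude, tpeak, rise_stdev, decay_const = flareparams
    times = np.asarray(times, dtype=float)
    modelmags = np.where(
        times < tpeak,
        # before the peak: Gaussian rise
        amplitude * np.exp(-(times - tpeak) ** 2 / (2.0 * rise_stdev ** 2)),
        # after the peak: exponential decay
        amplitude * np.exp(-(times - tpeak) / decay_const),
    )
    return modelmags
```

Both branches equal `amplitude` at `tpeak`, so the model is continuous at the peak; the sign convention for `amplitude` (positive for fluxes, negative for mags) carries through unchanged.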
This function will attempt to match input sent to the\n `input_queue` with results coming into the `result_queue` so it knows\n how many objects have been successfully processed. If this function\n receives task results that aren't in its own input queue, it will\n acknowledge them so they complete successfully, but not download them\n automatically. This handles leftover tasks completing from a previous\n run of this function.\n\n result_bucket : str or None\n The name of the S3 bucket which will receive the results from the\n workers.\n\n pfresult_list : list of str or None\n This is a list of periodfinder result pickle S3 URLs associated with\n each light curve. If provided, this will be used to add in phased light\n curve plots to each checkplot pickle. If this is None, the worker loop\n will produce checkplot pickles that only contain object information,\n neighbor information, and unphased light curves.\n\n runcp_kwargs : dict or None\n This is a dict used to pass any extra keyword arguments to the\n `lcproc.checkplotgen.runcp` function that will be run by the worker\n loop.\n\n process_list_slice : list or None\n This is used to index into the input light curve list so a subset of the\n full list can be processed in this specific run of this function.\n\n Use None for a slice index elem to emulate single slice spec behavior:\n\n process_list_slice = [10, None] -> lightcurve_list[10:]\n process_list_slice = [None, 500] -> lightcurve_list[:500]\n\n purge_queues_when_done : bool or None\n If this is True, and this function exits (either when all done, or when\n it is interrupted with a Ctrl+C), all outstanding elements in the\n input/output queues that have not yet been acknowledged by workers or by\n this function will be purged. 
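The `process_list_slice` semantics above amount to building a standard Python slice, with None emulating an open end. A sketch with a hypothetical helper name (the producer loop applies this indexing inline):

```python
def apply_process_list_slice(lightcurve_list, process_list_slice):
    """Apply a two-element [start, stop] spec to the input list; None in
    either position leaves that end of the slice open, and a None spec
    means process the whole list."""
    if process_list_slice is None:
        return lightcurve_list
    start, stop = process_list_slice
    return lightcurve_list[slice(start, stop)]
```

This is how a large light curve list can be split across several runs of the producer loop, e.g. `[None, 500]` on one machine and `[500, None]` on another.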
This effectively cancels all outstanding\n work.\n\n delete_queues_when_done : bool or None\n If this is True, and this function exits (either when all done, or when\n it is interrupted with a Ctrl+C), all outstanding work items will be\n purged from the input/output queues and the queues themselves will be deleted.\n\n download_when_done : bool or None\n If this is True, the generated checkplot pickle for each input work item\n will be downloaded immediately to the current working directory when the\n worker functions report they're done with it.\n\n save_state_when_done : bool or None\n If this is True, will save the current state of the work item queue and\n the work items acknowledged as completed to a pickle in the current\n working directory. Call the `runcp_producer_loop_savedstate` function\n below to resume processing from this saved state later.\n\n s3_client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its S3 download operations. Alternatively, pass in an existing\n `boto3.Client` instance to re-use it here.\n\n sqs_client : boto3.Client or None\n If None, this function will instantiate a new `boto3.Client` object to\n use in its SQS operations.
Alternatively, pass in an existing\n `boto3.Client` instance to re-use it here.\n\n Returns\n -------\n\n dict or str\n Returns the current work state as a dict or str path to the generated\n work state pickle depending on if `save_state_when_done` is True."} {"_id": "q_9251", "text": "This is the worker for running checkplots.\n\n Parameters\n ----------\n\n task : tuple\n This is of the form: (pfpickle, outdir, lcbasedir, kwargs).\n\n Returns\n -------\n\n list of str\n The list of checkplot pickles returned by the `runcp` function."} {"_id": "q_9252", "text": "This drives the parallel execution of `runcp` for a list of periodfinding\n result pickles.\n\n Parameters\n ----------\n\n pfpicklelist : list of str or list of Nones\n This is the list of the filenames of the period-finding result pickles\n to process. To make checkplots using the light curves directly, set this\n to a list of Nones with the same length as the list of light curve files\n that you provide in `lcfnamelist`.\n\n outdir : str\n The directory the checkplot pickles will be written to.\n\n lcbasedir : str\n The base directory that this function will look in to find the light\n curves pointed to by the period-finding result files. If you're using\n `lcfnamelist` to provide a list of light curve filenames directly, this\n arg is ignored.\n\n lcfnamelist : list of str or None\n If this is provided, it must be a list of the input light curve\n filenames to process. These can either be associated with each input\n period-finder result pickle, or can be provided standalone to make\n checkplots without phased LC plots in them. In the second case, you must\n set `pfpicklelist` to a list of Nones that matches the length of\n `lcfnamelist`.\n\n cprenorm : bool\n Set this to True if the light curves should be renormalized by\n `checkplot.checkplot_pickle`. 
This is set to False by default because we\n do our own normalization in this function using the light curve's\n registered normalization function and pass the normalized times, mags,\n errs to the `checkplot.checkplot_pickle` function.\n\n lclistpkl : str or dict\n This is either the filename of a pickle or the actual dict produced by\n lcproc.make_lclist. This is used to gather neighbor information.\n\n nbrradiusarcsec : float\n The radius in arcseconds to use for a search conducted around the\n coordinates of this object to look for any potential confusion and\n blending of variability amplitude caused by their proximity.\n\n maxnumneighbors : int\n The maximum number of neighbors that will have their light curves and\n magnitudes noted in this checkplot as potential blends with the target\n object.\n\n makeneighborlcs : bool\n If True, will make light curve and phased light curve plots for all\n neighbors found in the object collection for each input object.\n\n fast_mode : bool or float\n This runs the external catalog operations in a \"fast\" mode, with short\n timeouts and not trying to hit external catalogs that take a long time\n to respond.\n\n If this is set to True, the default settings for the external requests\n will then become::\n\n skyview_lookup = False\n skyview_timeout = 10.0\n skyview_retry_failed = False\n dust_timeout = 10.0\n gaia_submit_timeout = 7.0\n gaia_max_timeout = 10.0\n gaia_submit_tries = 2\n complete_query_later = False\n search_simbad = False\n\n If this is a float, will run in \"fast\" mode with the provided timeout\n value in seconds and the following settings::\n\n skyview_lookup = True\n skyview_timeout = fast_mode\n skyview_retry_failed = False\n dust_timeout = fast_mode\n gaia_submit_timeout = 0.66*fast_mode\n gaia_max_timeout = fast_mode\n gaia_submit_tries = 2\n complete_query_later = False\n search_simbad = False\n\n gaia_max_timeout : float\n Sets the timeout in seconds to use when waiting for the GAIA service to\n 
respond to our request for the object's information. Note that if\n `fast_mode` is set, this is ignored.\n\n gaia_mirror : str or None\n This sets the GAIA mirror to use. This is a key in the\n `services.gaia.GAIA_URLS` dict which defines the URLs to hit for each\n mirror.\n\n xmatchinfo : str or dict\n This is either the xmatch dict produced by the function\n `load_xmatch_external_catalogs` above, or the path to the xmatch info\n pickle file produced by that function.\n\n xmatchradiusarcsec : float\n This is the cross-matching radius to use in arcseconds.\n\n minobservations : int\n The minimum number of observations the input object's mag/flux time-series must\n have for this function to plot its light curve and phased light\n curve. If the object has fewer than this number, no light curves will be\n plotted, but the checkplotdict will still contain all of the other\n information.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings.
Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory when you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in generating this checkplot.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in generating this checkplot.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in generating this checkplot.\n\n skipdone : bool\n This indicates if this function will skip creating checkplots that\n already exist corresponding to the current `objectid` and `magcol`. If\n `skipdone` is set to True, this will be done.\n\n done_callback : Python function or None\n This is used to provide a function to execute after the checkplot\n pickles are generated. This is useful if you want to stream the results\n of checkplot making to some other process, e.g. directly running an\n ingestion into an LCC-Server collection. The function will always get\n the list of the generated checkplot pickles as its first arg, and all of\n the kwargs for runcp in the kwargs dict. 
Additional args and kwargs can\n be provided by giving a list in the `done_callbacks_args` kwarg and a\n dict in the `done_callbacks_kwargs` kwarg.\n\n NOTE: the function you pass in here should be pickleable by normal\n Python if you want to use it with the parallel_cp and parallel_cp_lcdir\n functions below.\n\n done_callback_args : tuple or None\n If not None, contains any args to pass into the `done_callback`\n function.\n\n done_callback_kwargs : dict or None\n If not None, contains any kwargs to pass into the `done_callback`\n function.\n\n liststartindex : int\n The index of the `pfpicklelist` (and `lcfnamelist` if provided) to start\n working at.\n\n maxobjects : int\n The maximum number of objects to process in this run. Use this with\n `liststartindex` to effectively distribute working on a large list of\n input period-finding result pickles (and light curves if `lcfnamelist`\n is also provided) over several sessions or machines.\n\n nworkers : int\n The number of parallel workers that will work on the checkplot\n generation process.\n\n Returns\n -------\n\n dict\n This returns a dict with keys = input period-finding pickles and vals =\n list of the corresponding checkplot pickles produced."} {"_id": "q_9253", "text": "This drives the parallel execution of `runcp` for a directory of\n periodfinding pickles.\n\n Parameters\n ----------\n\n pfpickledir : str\n This is the directory containing all of the period-finding pickles to\n process.\n\n outdir : str\n The directory the checkplot pickles will be written to.\n\n lcbasedir : str\n The base directory that this function will look in to find the light\n curves pointed to by the period-finding result files. 
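A `done_callback` conforming to the contract described above might look like the following. This is a hypothetical example (the ingestion step is a stand-in); the key points from the docstring are that it always receives the list of generated checkplot pickles as its first argument, gets the runcp kwargs via `**kwargs`, and must be a module-level, pickleable function to work with `parallel_cp` and `parallel_cp_lcdir`:

```python
def example_done_callback(cp_pickle_list, *args, **kwargs):
    """Called after checkplot pickles are generated; extra positional args
    come from done_callback_args, extra kwargs from done_callback_kwargs
    plus the runcp kwargs."""
    processed = []
    for cpf in cp_pickle_list:
        # stand-in for real work, e.g. queueing the pickle for ingestion
        # into an LCC-Server collection
        processed.append(('ingest', cpf))
    return processed
```

It would then be passed as `done_callback=example_done_callback`, with any extra arguments supplied via `done_callback_args` and `done_callback_kwargs`.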
If you're using\n `lcfnamelist` to provide a list of light curve filenames directly, this\n arg is ignored.\n\n pkpickleglob : str\n This is a UNIX file glob to select period-finding result pickles in the\n specified `pfpickledir`.\n\n lclistpkl : str or dict\n This is either the filename of a pickle or the actual dict produced by\n lcproc.make_lclist. This is used to gather neighbor information.\n\n cprenorm : bool\n Set this to True if the light curves should be renormalized by\n `checkplot.checkplot_pickle`. This is set to False by default because we\n do our own normalization in this function using the light curve's\n registered normalization function and pass the normalized times, mags,\n errs to the `checkplot.checkplot_pickle` function.\n\n nbrradiusarcsec : float\n The radius in arcseconds to use for a search conducted around the\n coordinates of this object to look for any potential confusion and\n blending of variability amplitude caused by their proximity.\n\n maxnumneighbors : int\n The maximum number of neighbors that will have their light curves and\n magnitudes noted in this checkplot as potential blends with the target\n object.\n\n makeneighborlcs : bool\n If True, will make light curve and phased light curve plots for all\n neighbors found in the object collection for each input object.\n\n fast_mode : bool or float\n This runs the external catalog operations in a \"fast\" mode, with short\n timeouts and not trying to hit external catalogs that take a long time\n to respond.\n\n If this is set to True, the default settings for the external requests\n will then become::\n\n skyview_lookup = False\n skyview_timeout = 10.0\n skyview_retry_failed = False\n dust_timeout = 10.0\n gaia_submit_timeout = 7.0\n gaia_max_timeout = 10.0\n gaia_submit_tries = 2\n complete_query_later = False\n search_simbad = False\n\n If this is a float, will run in \"fast\" mode with the provided timeout\n value in seconds and the following settings::\n\n skyview_lookup = 
True\n skyview_timeout = fast_mode\n skyview_retry_failed = False\n dust_timeout = fast_mode\n gaia_submit_timeout = 0.66*fast_mode\n gaia_max_timeout = fast_mode\n gaia_submit_tries = 2\n complete_query_later = False\n search_simbad = False\n\n gaia_max_timeout : float\n Sets the timeout in seconds to use when waiting for the GAIA service to\n respond to our request for the object's information. Note that if\n `fast_mode` is set, this is ignored.\n\n gaia_mirror : str or None\n This sets the GAIA mirror to use. This is a key in the\n `services.gaia.GAIA_URLS` dict which defines the URLs to hit for each\n mirror.\n\n xmatchinfo : str or dict\n This is either the xmatch dict produced by the function\n `load_xmatch_external_catalogs` above, or the path to the xmatch info\n pickle file produced by that function.\n\n xmatchradiusarcsec : float\n This is the cross-matching radius to use in arcseconds.\n\n minobservations : int\n The minimum number of observations the input object's mag/flux\n time-series must have for this function to plot its light curve and\n phased light curve. If the object has fewer than this number, no light\n curves will be plotted, but the checkplotdict will still contain all of\n the other information.\n\n sigclip : float or int or sequence of two floats/ints or None\n If a single float or int, a symmetric sigma-clip will be performed using\n the number provided as the sigma-multiplier to cut out from the input\n time-series.\n\n If a list of two ints/floats is provided, the function will perform an\n 'asymmetric' sigma-clip. The first element in this list is the sigma\n value to use for fainter flux/mag values; the second element in this\n list is the sigma value to use for brighter flux/mag values. For\n example, `sigclip=[10., 3.]`, will sigclip out greater than 10-sigma\n dimmings and greater than 3-sigma brightenings.
Here the meaning of\n \"dimming\" and \"brightening\" is set by *physics* (not the magnitude\n system), which is why the `magsarefluxes` kwarg must be correctly set.\n\n If `sigclip` is None, no sigma-clipping will be performed, and the\n time-series (with non-finite elems removed) will be passed through to\n the output.\n\n lcformat : str\n This is the `formatkey` associated with your light curve format, which\n you previously passed in to the `lcproc.register_lcformat`\n function. This will be used to look up how to find and read the light\n curves specified in `basedir` or `use_list_of_filenames`.\n\n lcformatdir : str or None\n If this is provided, gives the path to a directory where you've stored\n your lcformat description JSONs, other than the usual directories lcproc\n knows to search for them in. Use this along with `lcformat` to specify\n an LC format JSON file that's not currently registered with lcproc.\n\n timecols : list of str or None\n The timecol keys to use from the lcdict in generating this checkplot.\n\n magcols : list of str or None\n The magcol keys to use from the lcdict in generating this checkplot.\n\n errcols : list of str or None\n The errcol keys to use from the lcdict in generating this checkplot.\n\n skipdone : bool\n This indicates if this function will skip creating checkplots that\n already exist corresponding to the current `objectid` and `magcol`. If\n `skipdone` is set to True, existing checkplots will be skipped.\n\n done_callback : Python function or None\n This is used to provide a function to execute after the checkplot\n pickles are generated. This is useful if you want to stream the results\n of checkplot making to some other process, e.g. directly running an\n ingestion into an LCC-Server collection. The function will always get\n the list of the generated checkplot pickles as its first arg, and all of\n the kwargs for runcp in the kwargs dict.
Additional args and kwargs can\n be provided by giving a list in the `done_callback_args` kwarg and a\n dict in the `done_callback_kwargs` kwarg.\n\n NOTE: the function you pass in here should be pickleable by normal\n Python if you want to use it with the parallel_cp and parallel_cp_lcdir\n functions below.\n\n done_callback_args : tuple or None\n If not None, contains any args to pass into the `done_callback`\n function.\n\n done_callback_kwargs : dict or None\n If not None, contains any kwargs to pass into the `done_callback`\n function.\n\n maxobjects : int\n The maximum number of objects to process in this run.\n\n nworkers : int\n The number of parallel workers that will work on the checkplot\n generation process.\n\n Returns\n -------\n\n dict\n This returns a dict with keys = input period-finding pickles and vals =\n list of the corresponding checkplot pickles produced."} {"_id": "q_9254", "text": "This runs the `runpf` function."} {"_id": "q_9255", "text": "This collects variability features into arrays for use with the classifier.\n\n Parameters\n ----------\n\n featuresdir : str\n This is the directory where all the varfeatures pickles are. Use\n `pklglob` to specify the glob to search for. The `varfeatures` pickles\n contain objectids, a light curve magcol, and features as dict\n key-vals.
The :py:mod:`astrobase.lcproc.lcvfeatures` module can be used\n to produce these.\n\n magcol : str\n This is the key in each varfeatures pickle corresponding to the magcol\n of the light curve the variability features were extracted from.\n\n outfile : str\n This is the filename of the output pickle that will be written\n containing a dict of all the features extracted into np.arrays.\n\n pklglob : str\n This is the UNIX file glob to use to search for varfeatures pickle files\n in `featuresdir`.\n\n featurestouse : list of str\n Each varfeatures pickle can contain any combination of non-periodic,\n stellar, and periodic features; these must have the same names as\n elements in the list of strings provided in `featurestouse`. This tries\n to get all the features listed in NONPERIODIC_FEATURES_TO_COLLECT by\n default. If `featurestouse` is provided as a list, gets only the\n features listed in this kwarg instead.\n\n maxobjects : int or None\n This controls how many pickles from the `featuresdir` to process. If\n None, will process all varfeatures pickles.\n\n labeldict : dict or None\n If this is provided, it must be a dict with the following key:val list::\n\n '':