Product Feedback

  1. Ability to manage notebook/folder permissions with custom ACL Groups

    Define groups with a list of users, so we can set permission on a notebook/folder for that group.

    Currently, whenever we add a user to a team, there is a lot of manual adding of that user to many notebooks/folders.

    26 votes
    1 comment  ·  Notebooks
  2. Create jobs with REST API

    We'd like to create jobs with the REST API.

    6 votes
    1 comment  ·  REST API
    completed  ·  Rakesh responded

    This is now available with our new REST APIs.
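As a rough illustration, creating a job through a REST API can be sketched as below. This is a hypothetical example: the endpoint path, header format, and every payload field (`new_cluster`, `notebook_task`, the version and instance-type strings) are assumptions, so check the Jobs API documentation for the exact schema your workspace expects.

```python
# Hypothetical sketch of creating a job through the REST API.
# All field names and values are assumptions -- verify against the API docs.
import json

def build_job_payload(name, notebook_path, num_workers=2):
    """Assemble a job-creation request body for a notebook task."""
    return {
        "name": name,
        "new_cluster": {
            "spark_version": "2.1.x-scala2.11",  # assumed version string
            "node_type_id": "r3.xlarge",         # assumed instance type
            "num_workers": num_workers,
        },
        "notebook_task": {"notebook_path": notebook_path},
    }

payload = build_job_payload("nightly-etl", "/Users/me/etl")
body = json.dumps(payload)
# The request itself needs a workspace URL and API token, e.g.:
# requests.post("https://<workspace>/api/2.0/jobs/create",
#               headers={"Authorization": "Bearer <token>"}, data=body)
```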

  3. 3 votes
    1 comment  ·  REST API
    completed  ·  Rakesh responded

    This is now available with the new REST APIs.

  4. we could spin up GPU-capable clusters.

    Would love to have access to GPU instances for deep learning.

    1 vote
    0 comments  ·  Cluster management
  5. I could cancel my account.

    There is no obvious way to cancel an active account. Please fix this.

    3 votes
    0 comments  ·  Account management
  6. Ability to spin up clusters with smaller EC2 instance types for prototyping purposes

    It would be great for cost management if we had the ability to spin up clusters with smaller EC2 instance types. This would allow the use of cheaper clusters for non-production purposes such as prototyping and investigations.

    5 votes
    0 comments  ·  Cluster management
  7. 19 votes
    1 comment  ·  Cluster management
  8. Improve cluster status visibility

    What is the idea?
    Expose more status information during the cluster creation process. Today there is no indication of the current progress, and it sometimes takes a while (more than 10 minutes) to spin up instances. That is fine, but surfacing more statuses would improve the system's interaction with the user.

    For example, for spot instances you could expose the following statuses:
    - fulfilled
    - pending-evaluation
    - pending-fulfillment
    - price-too-low

    (Inspired by AWS spot market, more info here http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-bid-status.html#spot-instance-bid-status-understand)

    Why does it matter?

    More visibility is always better.
    Today,…

    3 votes
    0 comments  ·  Cluster management
  9. 1 vote
    completed  ·  0 comments  ·  Notebooks
  10. 4 votes
    0 comments  ·  Other
  11. A cluster can scale down to 1 worker if idle for 30 minutes

    If a cluster is idle for more than 30 minutes (or some configurable threshold), it would be great if it scaled down automatically to just one worker.

    33 votes
    2 comments  ·  Cluster management

    Auto-scaling is now ready (both up-scaling and down-scaling). Due to legacy contracts, older customers need to contact their SA to discuss how to get it enabled.

  12. Allow a repository branch to be specified with the GitHub integration

    Allow a repository branch to be specified with the GitHub integration. For more complicated Git workflows it isn't always best to make changes to the master branch. Being able to specify a custom branch would let users create a dev branch for testing and merge it into the master branch when the notebook is complete.

    28 votes
    0 comments  ·  Notebooks
  13. I could see which notebooks are currently running any command

    For small, one-off tasks we use a shared default cluster. When we need to restart that cluster we do not want to kill any running tasks, so we have to go to the Clusters UI, expand the cluster in question, and open each notebook attached to it to check whether it's running any command. This is tedious when many notebooks are attached to the cluster.

    It would be cool if the UI showed whether the notebook is running any command or not.

    22 votes
    1 comment  ·  Notebooks

    This is now available.
    1. Click on Clusters in the navigation bar.
    2. Then click on the relevant cluster in the cluster list. This takes you to a detailed page about the selected cluster.
    3. Now click on the “Notebooks” tab. The middle “status” column will say either “Idle” or “Running” depending on whether the listed notebook is currently running a command.

  14. Support ability to group Spark Tables into "Databases" for folders.

    Support the ability to group Spark Tables into folders or "databases" so we can organize our datasets more easily per client. Right now it's a long list of tables that is hard to manage.

    29 votes
    0 comments  ·  Other

    Happy to announce that our third most requested feature has been released. You can now browse all databases in the UI, as well as all the tables inside those databases. Thus, you can organize your tables into different databases.
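As a minimal sketch, the per-client grouping this request asks for comes down to standard Spark SQL: create a database per client and qualify each table with it. The helper below is hypothetical (the names `acme`, `orders`, and `events` are made up); it only assembles the statements, which a notebook would then run through `spark.sql(...)`.

```python
# Hypothetical sketch: grouping a client's tables under a dedicated database.
# The SQL itself is standard Spark SQL; the helper and names are made up.

def database_setup_sql(client, tables):
    """Return the Spark SQL statements that move a client's tables
    out of the default database and into one named after the client."""
    stmts = ["CREATE DATABASE IF NOT EXISTS %s" % client]
    for t in tables:
        # Qualify each table with the database name, e.g. acme.orders
        stmts.append(
            "CREATE TABLE IF NOT EXISTS {0}.{1} "
            "AS SELECT * FROM default.{1}".format(client, t)
        )
    return stmts

statements = database_setup_sql("acme", ["orders", "events"])
# In a notebook with a live SparkSession, each statement would be run as:
# for sql in statements:
#     spark.sql(sql)
```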

  15. 1 vote
    0 comments  ·  Cluster management

    Cluster ACLs have been released. On the Clusters page, you can click the right-most arrow/more button and then “Permissions” to control who can run notebooks and other things on the cluster.

  16. Default permissions for Home dirs

    Allow specifying default permissions for home directories of newly added users, when using ACLs. We would like to add an Everyone/Read permission by default.

    2 votes
    2 comments  ·  Notebooks
  17. A user could have more spot pricing functionality

    The ability to set spot pricing when deploying a cluster. Also, the ability to choose an "automatic" or "no preference" for availability zone.

    3 votes
    1 comment  ·  Cluster management

    You can now set a custom spot bid price when you create a cluster (on the Clusters page, on the Jobs page, or through the REST API). In the UI, look under advanced settings and AWS.
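For the REST API route, a cluster-create request with spot settings might look like the sketch below. This is a hypothetical illustration: every field name (including everything under `aws_attributes`) is an assumption, so verify them against the clusters API documentation before use.

```python
# Hypothetical sketch of a cluster-create request body with spot settings.
# Field names under "aws_attributes" are assumptions -- verify them against
# the clusters REST API documentation before use.

def cluster_payload_with_spot(name, bid_percent, zone=None):
    """Build a cluster request that sets a spot bid price (as a percent
    of the on-demand price) and, optionally, an availability zone."""
    aws = {
        "availability": "SPOT",                 # request spot instances
        "spot_bid_price_percent": bid_percent,  # custom bid
    }
    if zone is not None:
        aws["zone_id"] = zone                   # omit for "no preference"
    return {
        "cluster_name": name,
        "num_workers": 4,                       # assumed size
        "aws_attributes": aws,
    }

p = cluster_payload_with_spot("dev-cluster", 60)
```

Leaving `zone` unset models the "no preference" availability-zone choice the request asks for.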

  18. "Recent History" for clusters would show system activity (like cluster terminated due to spot price being too high) along with user activity

    "Recent History" for clusters would show system activity (like cluster terminated due to spot price being too high) along with user activity.

    1 vote
    0 comments  ·  Cluster management
  19. selecting notebooks didn't automatically hide the file structure (makes scrolling through notebooks impossible)

    Right now, scrolling through notebooks is impossible since the file structure is automatically hidden every time you select a notebook. Can we at least have it be an option in customizable UI settings? This really makes me steer away from using the environment overall.

    1 vote
    1 comment  ·  Navigation UI

    You can pin the file browser. There is a small pin icon at the bottom right of the left menu. Once it’s pinned, you can browse different notebooks and they’ll preview without closing the file browser. Please let us know if this helps.

  20. Chapter 2 notebook should have the right path: "val lines = sc.textFile("file:///dbfs/learning-spark-master/README.md") // Create an RDD ..."

    Learning Spark Chapter 2 Examples in Scala

    Example 2-2. Scala line count

    should use the correct path:
    val lines = sc.textFile("file:///dbfs/learning-spark-master/README.md") // Create an RDD called lines

    Because this was executed from Initial Setup in the Overview notebook when the following was evaluated:
    os.system("cd /dbfs; rm master.zip; rm -rf learning-spark-master; wget https://github.com/databricks/learning-spark/archive/master.zip; unzip master.zip;ls")

    1 vote
    0 comments  ·  Other